CN113115018A - Self-adaptive display method and display equipment for image - Google Patents
- Publication number
- CN113115018A (application number CN202110257615.2A)
- Authority
- CN
- China
- Prior art keywords
- resolution
- video frame
- optimal
- horizontal
- vertical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/139 — Format conversion, e.g. of frame-rate or size (stereoscopic/multi-view image signal processing)
- H04N13/15 — Processing image signals for colour aspects of image signals
- H04N13/344 — Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
- H04N21/4122 — Peripherals receiving signals from specially adapted client devices: additional display device, e.g. video projector
- H04N21/440263 — Reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
- H04N21/816 — Monomedia components involving special video data, e.g. 3D video
Abstract
The application relates to the technical field of virtual reality (VR) and provides an adaptive display method and display device for an image. In the method, for any video frame whose resolution is greater than the monocular resolution of the display device, the optimal horizontal resolution of the three-dimensional projection plane used to display the frame is determined according to a first corresponding relation, and the optimal vertical resolution of the projection plane is determined according to a second corresponding relation. The video frame is then down-sampled according to the optimal horizontal resolution, the optimal vertical resolution, and the horizontal and vertical resolutions of the video frame, to obtain a sampled video frame. Finally, color values are obtained from the sampled video frame according to the UV coordinates of each fragment to render a rectangular grid, and the rendered video frame is obtained and displayed. Because the resolution of the sampled video frame matches the monocular resolution of the display device, the abnormal phenomenon of image flicker is reduced.
Description
Technical Field
The present disclosure relates to the field of Virtual Reality (VR) technologies, and in particular, to an adaptive display method and a display device for an image.
Background
VR technology is currently a research hotspot in the field of computer applications. It is a human-computer interaction technology that integrates multiple advanced technologies, including real-time three-dimensional computer graphics, human-computer interaction, sensing, multimedia, wide-angle stereoscopic display, and networking, and can vividly simulate human perception in a natural environment. Through a stereoscopic headset, data gloves, a three-dimensional mouse, and the like, a user can be immersed in a computer-created virtual environment and interact with objects in that environment using natural human behavior and perception.
In a VR scene, situations in which the resolution of an image is greater than the resolution of the screen of a VR display device are often encountered (for example, during video playing or UI picture display). The screen of the VR display device is a two-dimensional plane, while an image in the VR scene is a three-dimensional image, and the perspective relationship between the three-dimensional image and the two-dimensional screen is not a parallel one. As a result, when the Graphics Processing Unit (GPU) performs rendering sampling, the three-dimensional image is not sampled uniformly, and the color values obtained by sampling the three-dimensional image for adjacent pixels on the two-dimensional screen can differ greatly, which causes an abnormal flicker phenomenon in the image on the screen.
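The flicker described above is essentially aliasing. The following toy Python sketch (illustrative only, not part of the application) point-samples a high-frequency one-dimensional "texture" at a much lower rate; a tiny sub-pixel shift of the sample positions flips some samples between black and white, which on screen appears as flicker:

```python
# A 1-D 'texture' of alternating black (0) / white (1) texels:
# detail far above the sampling rate used below.
texture = [i % 2 for i in range(1000)]

def point_sample(tex, u):
    """Nearest-texel lookup at normalized coordinate u in [0, 1)."""
    return tex[int(u * len(tex))]

# Seven screen pixels sampling the texture, then the same pixels
# after a tiny sub-pixel shift of the sample positions.
samples = [point_sample(texture, (p + 0.001) / 7) for p in range(7)]
shifted = [point_sample(texture, (p + 0.004) / 7) for p in range(7)]
# The two sample sets differ even though the shift is tiny: flicker.
```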
Disclosure of Invention
The application provides an adaptive display method and a display device for an image, which are used to reduce the abnormal phenomenon of image flicker.
In a first aspect, the present application provides a display device for adaptively displaying an image, comprising a display, a memory, and a graphics processor:
the display, coupled to the graphics processor, configured to display a Virtual Reality (VR) video;
the memory, coupled to the graphics processor, configured to store computer instructions;
the graphics processor configured to perform the following operations in accordance with the computer instructions:
when the resolution of the video frame is greater than the monocular resolution of a display device, determining the optimal horizontal resolution of a three-dimensional projection plane for displaying the video frame according to a first corresponding relation and determining the optimal vertical resolution of the three-dimensional projection plane according to a second corresponding relation, wherein the first corresponding relation is determined according to the distance from a user viewpoint to the three-dimensional projection plane, the maximum horizontal field angle of the display device and the monocular horizontal resolution, and the second corresponding relation is determined according to the distance from the user viewpoint to the three-dimensional projection plane, the maximum vertical field angle of the display device and the monocular vertical resolution;
according to the optimal horizontal resolution, the optimal vertical resolution and the horizontal and vertical resolutions of the video frame, down-sampling the video frame to obtain a sampled video frame;
and acquiring color values from the sampled video frame according to the UV coordinates of each fragment to render a rectangular grid, so as to obtain and display the rendered video frame, wherein the UV coordinates of each fragment are obtained by interpolation from the UV coordinates of the grid vertices in the rectangular grid.
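The color fetch in the last operation can be sketched in plain Python. This is an illustrative CPU-side approximation (with hypothetical helper names) of what a GPU's bilinear texture sampler does with the interpolated UV coordinates; the application itself runs this on the GPU:

```python
def lerp(a, b, t):
    """Linear interpolation of two RGB tuples."""
    return tuple(a[i] * (1 - t) + b[i] * t for i in range(3))

def sample_color(frame, u, v):
    """Fetch a bilinearly filtered color from a sampled video frame.

    frame: list of rows of (r, g, b) tuples.
    u, v: UV coordinates in [0, 1], as interpolated from the
    UV coordinates of the rectangular grid's vertices.
    """
    h, w = len(frame), len(frame[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = lerp(frame[y0][x0], frame[y0][x1], fx)
    bottom = lerp(frame[y1][x0], frame[y1][x1], fx)
    return lerp(top, bottom, fy)
```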
With the above display device, in a VR scene, when the resolution of a video frame is greater than the monocular resolution of the display device, the optimal horizontal resolution of the three-dimensional projection plane for displaying the video frame is determined according to the first corresponding relation (determined from the distance from the user viewpoint to the three-dimensional projection plane, the maximum horizontal field angle of the display device, and the monocular horizontal resolution), and the optimal vertical resolution of the three-dimensional projection plane is determined according to the second corresponding relation (determined from the distance from the user viewpoint to the three-dimensional projection plane, the maximum vertical field angle of the display device, and the monocular vertical resolution), so that the determined optimal horizontal and vertical resolutions are adapted to the viewable range of the user viewpoint in the VR scene. The video frame is then down-sampled according to the optimal horizontal resolution, the optimal vertical resolution, and the horizontal and vertical resolutions of the video frame; a rectangular grid is rendered from the sampled video frame; and the display device displays the rendered video frame. Because the resolution of the sampled video frame matches the monocular resolution of the display device, and the resolutions of the screens corresponding to the left and right eyes in the VR scene are the same, the abnormal phenomenon of image flicker caused by the resolution of the video frame being greater than the screen resolution of the display device is effectively reduced.
In an alternative embodiment, the graphics processor determines the first correspondence by:
determining the maximum horizontal width of the three-dimensional projection plane according to the distance from the user viewpoint to the three-dimensional projection plane and the maximum horizontal field angle of the display device;
determining a first ratio of the maximum horizontal width of the three-dimensional projection plane to the horizontal width of the three-dimensional projection plane as a ratio of the monocular horizontal resolution of the display device to the optimal horizontal resolution of the three-dimensional projection plane, so as to obtain the first corresponding relationship;
the graphics processor determines the second correspondence by:
determining the maximum vertical height of the three-dimensional projection plane according to the distance from the user viewpoint to the three-dimensional projection plane and the maximum vertical field angle of the display device;
and determining a second ratio of the maximum vertical height of the three-dimensional projection plane to the vertical height of the three-dimensional projection plane as a ratio of the monocular vertical resolution of the display device to the optimal vertical resolution of the three-dimensional projection plane, so as to obtain the second corresponding relation.
With the above display device, the determined maximum horizontal width and maximum vertical height of the three-dimensional projection plane are adapted to the viewable range of the user viewpoint, and are combined with the monocular horizontal and vertical resolutions of the display device to determine the first corresponding relation and the second corresponding relation.
In an alternative embodiment, the graphics processor determines the optimal horizontal resolution of the three-dimensional projection plane for displaying the video frame according to the first correspondence, and is specifically configured to:
determining a quotient of the monocular horizontal resolution of the display device and the first ratio as an optimal horizontal resolution of the three-dimensional projection plane; and
the graphics processor determines the optimal vertical resolution of the three-dimensional projection plane according to the second correspondence, and is specifically configured to:
and determining the quotient of the monocular vertical resolution of the display device and the second ratio as the optimal vertical resolution of the three-dimensional projection plane.
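Combining the two correspondences with the quotients above gives a closed form for the optimal resolutions. The sketch below assumes the standard pinhole relation (maximum visible extent = 2·d·tan(FOV/2)) for the "maximum horizontal width/vertical height"; the function and parameter names are illustrative, not from the application:

```python
import math

def optimal_plane_resolution(d, max_hfov_deg, max_vfov_deg,
                             plane_w, plane_h, eye_hres, eye_vres):
    """Optimal horizontal/vertical resolution of the 3D projection plane.

    d: distance from the user viewpoint to the projection plane;
    max_hfov_deg, max_vfov_deg: maximum horizontal/vertical field angles;
    plane_w, plane_h: width/height of the projection plane in the VR scene;
    eye_hres, eye_vres: monocular horizontal/vertical screen resolution.
    """
    # Maximum width/height of the plane visible from the viewpoint
    max_w = 2.0 * d * math.tan(math.radians(max_hfov_deg) / 2.0)
    max_h = 2.0 * d * math.tan(math.radians(max_vfov_deg) / 2.0)
    # First and second ratios from the two corresponding relations
    r1 = max_w / plane_w
    r2 = max_h / plane_h
    # Optimal resolution = monocular resolution / ratio (the quotients)
    return eye_hres / r1, eye_vres / r2
```

For example, with a 90-degree field angle on both axes, a plane at distance 1.0 with width and height 1.0, and a 1440x1600 monocular screen, each axis's optimal resolution is half the monocular resolution, since the plane fills only half of the visible extent.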
The display device determines the optimal horizontal resolution according to the first corresponding relation, and determines the optimal vertical resolution according to the second corresponding relation.
In an optional implementation manner, the graphics processor downsamples the video frame according to the optimal horizontal resolution, the optimal vertical resolution, and the horizontal and vertical resolutions of the video frame to obtain a sampled video frame, and is specifically configured to:
and comparing the optimal horizontal resolution with the horizontal resolution of the video frame and the optimal vertical resolution with the vertical resolution of the video frame, and performing down-sampling on the video frame in the corresponding direction according to the comparison result to obtain a sampled video frame.
The display device compares the optimal horizontal resolution with the horizontal resolution of the video frame and the optimal vertical resolution with the vertical resolution of the video frame, and performs down-sampling processing on the video frame in the corresponding direction according to the comparison result, so that the resolution of the sampled video frame is matched with the screen resolution of the display device.
In an alternative embodiment, the graphics processor is specifically configured to:
if the optimal horizontal resolution is greater than the horizontal resolution of the video frame and the optimal vertical resolution is less than or equal to the vertical resolution of the video frame, downsampling the video frame in the vertical direction to obtain a video frame whose horizontal resolution equals the horizontal resolution of the video frame and whose vertical resolution equals the optimal vertical resolution; or
if the optimal horizontal resolution is less than or equal to the horizontal resolution of the video frame and the optimal vertical resolution is greater than the vertical resolution of the video frame, downsampling the video frame in the horizontal direction to obtain a video frame whose horizontal resolution equals the optimal horizontal resolution and whose vertical resolution equals the vertical resolution of the video frame; or
if the optimal horizontal resolution is less than or equal to the horizontal resolution of the video frame and the optimal vertical resolution is less than or equal to the vertical resolution of the video frame, downsampling the video frame in both the horizontal and the vertical direction to obtain a video frame whose horizontal resolution equals the optimal horizontal resolution and whose vertical resolution equals the optimal vertical resolution; or
if the optimal horizontal resolution is greater than the horizontal resolution of the video frame and the optimal vertical resolution is greater than the vertical resolution of the video frame, taking the video frame itself as the sampled video frame, since no down-sampling is needed.
According to the display device, the video frame is subjected to down-sampling processing in the corresponding direction according to the comparison result of the optimal horizontal resolution and the horizontal resolution of the video frame and the comparison result of the optimal vertical resolution and the vertical resolution of the video frame, so that the resolution of the sampled video frame is matched with the screen resolution of the display device.
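Because down-sampling can only shrink a frame, the four comparison cases amount to keeping, on each axis, the smaller of the optimal resolution and the frame's resolution. A minimal Python sketch with hypothetical names:

```python
def sampled_frame_size(opt_w, opt_h, frame_w, frame_h):
    """Resolution of the sampled video frame after the per-axis comparison.

    An axis is down-sampled to the optimal value only when the optimal
    value does not exceed the frame's value; otherwise the frame's
    value is kept (down-sampling cannot enlarge a frame).
    """
    if opt_w > frame_w and opt_h <= frame_h:    # vertical only
        return frame_w, opt_h
    if opt_w <= frame_w and opt_h > frame_h:    # horizontal only
        return opt_w, frame_h
    if opt_w <= frame_w and opt_h <= frame_h:   # both directions
        return opt_w, opt_h
    return frame_w, frame_h                     # no down-sampling needed
```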
In a second aspect, the present application provides a method for adaptively displaying an image, including:
when the resolution of the video frame is greater than the monocular resolution of a display device, determining the optimal horizontal resolution of a three-dimensional projection plane for displaying the video frame according to a first corresponding relation and determining the optimal vertical resolution of the three-dimensional projection plane according to a second corresponding relation, wherein the first corresponding relation is determined according to the distance from a user viewpoint to the three-dimensional projection plane, the maximum horizontal field angle of the display device and the monocular horizontal resolution, and the second corresponding relation is determined according to the distance from the user viewpoint to the three-dimensional projection plane, the maximum vertical field angle of the display device and the monocular vertical resolution;
according to the optimal horizontal resolution, the optimal vertical resolution and the horizontal and vertical resolutions of the video frame, down-sampling the video frame to obtain a sampled video frame;
and acquiring color values from the sampled video frame according to the UV coordinates of each fragment to render a rectangular grid, so as to obtain and display the rendered video frame, wherein the UV coordinates of each fragment are obtained by interpolation from the UV coordinates of the grid vertices in the rectangular grid.
In an alternative embodiment, the first correspondence is determined by:
determining the maximum horizontal width of the three-dimensional projection plane according to the distance from the user viewpoint to the three-dimensional projection plane and the maximum horizontal field angle of the display device;
determining a first ratio of the maximum horizontal width of the three-dimensional projection plane to the horizontal width of the three-dimensional projection plane as a ratio of the monocular horizontal resolution of the display device to the optimal horizontal resolution of the three-dimensional projection plane, so as to obtain the first corresponding relationship;
determining the second correspondence by:
determining the maximum vertical height of the three-dimensional projection plane according to the distance from the user viewpoint to the three-dimensional projection plane and the maximum vertical field angle of the display device;
and determining a second ratio of the maximum vertical height of the three-dimensional projection plane to the vertical height of the three-dimensional projection plane as a ratio of the monocular vertical resolution of the display device to the optimal vertical resolution of the three-dimensional projection plane, so as to obtain the second corresponding relation.
In an alternative embodiment, the determining the optimal horizontal resolution of the three-dimensional projection plane for displaying the video frame according to the first correspondence comprises:
determining a quotient of the monocular horizontal resolution of the display device and the first ratio as an optimal horizontal resolution of the three-dimensional projection plane; and
determining the optimal vertical resolution of the three-dimensional projection plane according to the second corresponding relationship, including:
and determining the quotient of the monocular vertical resolution of the display device and the second ratio as the optimal vertical resolution of the three-dimensional projection plane.
In an optional implementation manner, the downsampling the video frame according to the optimal horizontal resolution, the optimal vertical resolution, and the horizontal and vertical resolutions of the video frame to obtain a sampled video frame includes:
and comparing the optimal horizontal resolution with the horizontal resolution of the video frame and the optimal vertical resolution with the vertical resolution of the video frame, and performing down-sampling on the video frame in the corresponding direction according to the comparison result to obtain a sampled video frame.
In an optional embodiment, the down-sampling the video frame in the corresponding direction according to the comparison result to obtain a sampled video frame includes:
if the optimal horizontal resolution is greater than the horizontal resolution of the video frame and the optimal vertical resolution is less than or equal to the vertical resolution of the video frame, downsampling the video frame in the vertical direction to obtain a video frame whose horizontal resolution equals the horizontal resolution of the video frame and whose vertical resolution equals the optimal vertical resolution; or
if the optimal horizontal resolution is less than or equal to the horizontal resolution of the video frame and the optimal vertical resolution is greater than the vertical resolution of the video frame, downsampling the video frame in the horizontal direction to obtain a video frame whose horizontal resolution equals the optimal horizontal resolution and whose vertical resolution equals the vertical resolution of the video frame; or
if the optimal horizontal resolution is less than or equal to the horizontal resolution of the video frame and the optimal vertical resolution is less than or equal to the vertical resolution of the video frame, downsampling the video frame in both the horizontal and the vertical direction to obtain a video frame whose horizontal resolution equals the optimal horizontal resolution and whose vertical resolution equals the optimal vertical resolution; or
if the optimal horizontal resolution is greater than the horizontal resolution of the video frame and the optimal vertical resolution is greater than the vertical resolution of the video frame, taking the video frame itself as the sampled video frame, since no down-sampling is needed.
For the beneficial effects of the second aspect, refer to the first aspect; details are not repeated here.
In a third aspect, embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method of the second aspect provided by embodiments of the present application.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 schematically illustrates an application scenario provided by an embodiment of the present application;
fig. 2 schematically illustrates a block diagram of a VR head mounted display device 300 provided by an embodiment of the present application;
fig. 3a is a schematic perspective view illustrating a display device and a projection plane in a VR scene provided by an embodiment of the present application;
fig. 3b is a schematic top view diagram illustrating a user viewpoint and a projection plane in a VR scene provided by an embodiment of the present application;
FIG. 4 is a flow chart illustrating a method for adaptively displaying an image according to an embodiment of the present application;
FIG. 5 is a diagram illustrating an interactive process of a complete adaptive display image provided by an embodiment of the present application;
fig. 6 illustrates a hardware structure diagram of a display device according to an embodiment of the present application.
Detailed Description
To make the objects, embodiments, and advantages of the present application clearer, the exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It should be understood that the described exemplary embodiments are only some, rather than all, of the embodiments of the present application.
All other embodiments obtained by a person skilled in the art from the exemplary embodiments described herein without inventive effort shall fall within the scope of the appended claims. In addition, although the disclosure herein is presented in terms of one or more exemplary examples, it should be appreciated that each aspect of the disclosure may also be implemented on its own as a complete embodiment.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
Furthermore, the terms "comprises" and "comprising," as well as any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The terms "first", "second", and the like in the description, claims, and drawings of this application are used to distinguish between similar objects or entities and do not necessarily imply a particular order or sequence, unless otherwise indicated. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments described herein can, for example, be implemented in sequences other than those illustrated or described herein.
The term "module" as used herein refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The application relates to an adaptive display method and display device for an image. Taking a VR head-mounted display device as an example, fig. 1 exemplarily shows an application scenario provided in an embodiment of the present application. As shown in fig. 1, the camera 100 captures a VR video and transmits it to the server 200 via the Internet, which may be a local area network, a wide area network, or the like; the server 200 may be an enterprise server, a cloud server, or the like. The VR head-mounted display device 300 responds to a video playing instruction that carries an identifier (such as a video name or a website) of the VR video the user wants to play, and sends a video acquisition request to the server 200 according to the playing instruction. After receiving the request, the server 200 sends the VR video to be played to the VR head-mounted display device 300, which then plays it.
Fig. 1 also shows that the VR video shot by the camera 100 can be sent directly to the VR head-mounted display device 300, by means including but not limited to Bluetooth and WiFi transmission. When the VR head-mounted display device 300 responds to a video playing instruction, it first tries to obtain the VR video from the local cache; if the video is found, the locally stored VR video is played, and if not, the VR video is obtained from the server 200. This increases the video transmission speed and further improves the user experience.
Fig. 2 schematically illustrates a structure diagram of the VR head-mounted display device 300 provided in an embodiment of the present application. As shown in fig. 2, the VR head-mounted display device 300 includes a lens group 301 and a two-dimensional display screen 302 (equivalent to the display in the display device) disposed directly in front of the lens group 301, where the lens group 301 is composed of a left display lens 301_1 and a right display lens 301_2. When a user wears the VR head-mounted display device 300, the eyes view the video displayed on the display screen 302 through the lens group 301 and experience the VR effect.
It should be noted that, the display device in the embodiment of the present application may be, besides the VR head-mounted display device, a device capable of playing and interacting with videos, such as a smart phone, a tablet computer, a desktop computer, a notebook computer, and a smart television.
In a VR scene, a VR video is typically played on a rectangular plane in three-dimensional space (also called a projection plane, analogous to a screen in a movie theater), and the projection plane can be regarded as part of the VR scene. Because VR videos shot by different camera models have different resolutions, it is common for the resolution of a captured video frame to be greater than that of the VR display device's screen (for example, during video playing or UI picture display). The screen of the VR display device is a two-dimensional plane, while a video frame in the VR scene is a three-dimensional image; in the pixel shader, if the color value of each fragment is fetched directly from the original video frame to render the rectangular grid, the color values of adjacent pixels in the rendered video frame can differ greatly, producing the abnormal phenomenon of image flicker.
In order to solve the above problem, embodiments of the present application provide an adaptive display method for an image and a display device. The optimal resolution of the three-dimensional projection plane corresponding to the monocular screen of the display device in the current VR environment is calculated adaptively from information such as the distance between the user viewpoint in the VR scene (corresponding to a virtual camera in the display device) and the three-dimensional projection plane, the size of the three-dimensional projection plane (a virtual size in the VR scene; in a generalized sense a size may correspond to a resolution, and in the present application the size includes the width and height of the three-dimensional projection plane), the maximum field angle of the display device, and the monocular resolution of the display device. The video frame is then downsampled according to the optimal resolution and the resolution of the video frame, rendered in the graphics processor (GPU), and finally displayed by the display device. The method matches the resolution of the video frame to the monocular resolution of the display device, thereby effectively reducing the abnormal phenomenon of image flicker caused by the resolution of the video frame (image) being greater than the resolution of the display device screen, and improving the user experience.
It should be noted that, for the VR head-mounted display device, the screen resolutions corresponding to the left and right eyes are the same; when the resolution of the video frame matches the monocular resolution, it therefore also matches the resolution of the display device screen, which reduces the abnormal phenomenon of image flicker.
It should be noted that the method in the embodiment of the present application is applicable to playing and displaying a local VR video, and is also applicable to playing and displaying an online VR video (including VR videos in both on-demand and live modes).
For the sake of clarity of the description of the embodiments of the present application, the term "fragment" used in the embodiments is explained below.
In a three-dimensional rendering pipeline, geometric vertices are grouped into primitives; primitives include points, line segments, and polygons. After a primitive is rasterized, a sequence of fragments is output. A fragment is not a true pixel but a collection of states used to calculate the final color of each pixel. These states include, but are not limited to, the screen coordinates of the fragment, depth information, and other vertex information output from the geometry stage, such as the normal, texture coordinates, and the like.
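The "collection of states" that makes up a fragment can be pictured as a simple record. The following Python sketch is purely illustrative — the class and field names are hypothetical and not part of the specification:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Fragment:
    # A fragment is not a final pixel but a bundle of interpolated states
    # used to compute each pixel's final color.
    screen_x: float                      # screen coordinates
    screen_y: float
    depth: float                         # depth information
    uv: Tuple[float, float]              # interpolated texture (UV) coordinates
    normal: Tuple[float, float, float]   # interpolated vertex normal

frag = Fragment(screen_x=640.5, screen_y=360.5, depth=0.25,
                uv=(0.5, 0.5), normal=(0.0, 0.0, 1.0))
```

In a real pipeline these states are produced by the rasterizer and consumed by the pixel shader rather than by application code.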
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
In some embodiments of the present application, a configuration file may be prepared in advance. Since, in a VR scene, the video frame content viewed by the user through the left and right display lenses is substantially the same and the screen resolutions corresponding to the left and right eyes are equal, only a monocular horizontal resolution and a monocular vertical resolution of the display device need to be configured. Because screen resolutions and field angles differ between device models, the configuration file may include the monocular horizontal resolution of the display device (denoted w_monitor1), the monocular vertical resolution of the display device (denoted h_monitor1), the maximum horizontal field angle of the display device (w_fov), and the maximum vertical field angle of the display device (h_fov). The configuration file may be stored in the display device in advance, or downloaded from the server when the display device is started, according to the correspondence between the display device model and the configuration file.
In a VR scene, by the perspective principle that near objects appear large and far objects small, the closer the user viewpoint is to the three-dimensional projection plane, the larger the portion of the projection plane corresponding to the display device screen in three-dimensional space (i.e., the larger the proportion of the screen occupied when displaying video frames). In other words, the closer the user viewpoint is to the three-dimensional projection plane, the larger the resolution at which the plane should display video frames or images.
Fig. 3a schematically illustrates a perspective view of a display device and a three-dimensional projection plane in a VR scene provided by an embodiment of the present application. The three-dimensional projection plane serves as the carrier on which video frames are played, and a user wearing the VR display device watches the video frames displayed on it. Corresponding to fig. 3a, fig. 3b exemplarily shows a schematic top view of the user viewpoint and the three-dimensional projection plane in the VR scene. As shown in fig. 3b, AB is the horizontal width of the three-dimensional projection plane in the VR scene, denoted w_screen2, and d is the distance from the user viewpoint to the projection plane. CD is the segment obtained by extending AB left and right to its intersections with the maximum horizontal field angle w_fov; it represents the maximum horizontal width, at distance d from the user viewpoint, of the three-dimensional projection plane corresponding to the monocular screen of the display device, denoted w_monitor2, i.e., the maximum horizontal width of the three-dimensional projection plane for displaying the video frame. Similarly, the vertical height of the three-dimensional projection plane in the VR scene is denoted h_screen2, and the maximum vertical height of the three-dimensional projection plane corresponding to the monocular screen of the display device at distance d from the user viewpoint is denoted h_monitor2.
As can be seen from the geometric relationship in fig. 3b, the maximum horizontal width w_monitor2 of the three-dimensional projection plane corresponds to the monocular horizontal resolution w_monitor1 of the display device, so the resolution corresponding to the horizontal width w_screen2 of the three-dimensional projection plane is the optimal horizontal resolution of the plane, denoted w_screen1. Likewise, the maximum vertical height h_monitor2 of the three-dimensional projection plane corresponds to the monocular vertical resolution h_monitor1 of the display device, so the resolution corresponding to the vertical height h_screen2 of the plane is the optimal vertical resolution, denoted h_screen1.
Based on the schematic diagrams shown in fig. 3a and fig. 3b, fig. 4 exemplarily shows a flowchart of a method for adaptively displaying an image provided by an embodiment of the present application. As shown in fig. 4, the process is executed by a display device, and may be implemented in a software manner, or in a combination of software and hardware manner, and mainly includes the following steps:
s401: and when the resolution of the video frame is greater than the monocular resolution of the display device for any video frame in the VR video, determining the optimal horizontal resolution of the three-dimensional projection plane for displaying the video frame according to the first corresponding relation, and determining the optimal vertical resolution of the three-dimensional projection plane according to the second corresponding relation.
In this step, the first corresponding relationship is determined according to the distance from the user viewpoint to the three-dimensional projection plane, the maximum horizontal field angle of the display device, and the monocular horizontal resolution, and the second corresponding relationship is determined according to the distance from the user viewpoint to the three-dimensional projection plane, the maximum vertical field angle of the display device, and the monocular vertical resolution.
When the process of S401 is executed, the VR video playing program is first started through the "on/off key", and the configuration file is read to obtain the monocular horizontal resolution w_monitor1 of the display device, the monocular vertical resolution h_monitor1 of the display device, the maximum horizontal field angle w_fov of the display device, and the maximum vertical field angle h_fov of the display device, completing the initialization of the display device. Then, for any video frame in the VR video, the distance d from the user viewpoint to the three-dimensional projection plane, and the horizontal width w_screen2 and vertical height h_screen2 of the projection plane at that distance, are obtained at a set interval. The distance d may be preset, set dynamically by the user, or computed in software — for example, as the distance between the three-dimensional coordinates of the user viewpoint and those of the center point of the three-dimensional projection plane (the center point representing the position of the plane).
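When d is computed in software from the two three-dimensional coordinates, a minimal sketch (function name is illustrative, not from the specification) is:

```python
import math

def viewpoint_distance(viewpoint, plane_center):
    """Distance d from the user viewpoint to the center point of the
    three-dimensional projection plane, both given as (x, y, z)."""
    return math.dist(viewpoint, plane_center)

# e.g. a plane centered 3 scene units straight ahead of the viewpoint
d = viewpoint_distance((0.0, 0.0, 0.0), (0.0, 0.0, 3.0))
```

`math.dist` computes the Euclidean distance between the two coordinate tuples.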
Finally, the first correspondence is determined from the distance d from the user viewpoint to the three-dimensional projection plane, the maximum horizontal field angle w_fov of the display device, and the monocular horizontal resolution w_monitor1, and the optimal horizontal resolution w_screen1 of the three-dimensional projection plane is determined from it; the second correspondence is determined from the distance d, the maximum vertical field angle h_fov of the display device, and the monocular vertical resolution h_monitor1, and the optimal vertical resolution h_screen1 of the three-dimensional projection plane is determined from it.
Specifically, the maximum horizontal width w_monitor2 and the maximum vertical height h_monitor2 of the three-dimensional projection plane in the VR scene change with the distance d from the user viewpoint to the plane. After the distance d is obtained, the maximum horizontal width w_monitor2 of the three-dimensional projection plane is determined from it in combination with the maximum horizontal field angle of the display device:
w_monitor2 = 2 × d × tan(w_fov / 2)    (Formula 1)
The maximum vertical height h_monitor2 of the three-dimensional projection plane is determined in combination with the maximum vertical field angle of the display device:
h_monitor2 = 2 × d × tan(h_fov / 2)    (Formula 2)
As can be seen from fig. 3b, the three-dimensional projection plane has the optimal horizontal resolution when its maximum horizontal width w_monitor2 corresponds to the monocular horizontal resolution w_monitor1 of the display device; similarly, it has the optimal vertical resolution when its maximum vertical height h_monitor2 corresponds to the monocular vertical resolution h_monitor1. That is, in the first correspondence, the ratio (also called the first ratio) of the maximum horizontal width w_monitor2 of the three-dimensional projection plane to its horizontal width w_screen2 equals the ratio of the monocular horizontal resolution w_monitor1 of the display device to the optimal horizontal resolution w_screen1 of the plane; in the second correspondence, the ratio (also called the second ratio) of the maximum vertical height h_monitor2 of the plane to its vertical height h_screen2 equals the ratio of the monocular vertical resolution h_monitor1 of the display device to the optimal vertical resolution h_screen1 of the plane.
Therefore, after the maximum horizontal width w_monitor2 and the maximum vertical height h_monitor2 of the three-dimensional projection plane are determined, the optimal horizontal resolution w_screen1 is determined from the maximum horizontal width w_monitor2, the monocular horizontal resolution w_monitor1 of the display device, and the horizontal width w_screen2 of the plane; that is, the quotient of the monocular horizontal resolution of the display device and the first ratio is taken as the optimal horizontal resolution of the plane:
w_screen1 = w_screen2 × w_monitor1 / w_monitor2    (Formula 3)
The optimal vertical resolution h_screen1 is determined from the maximum vertical height h_monitor2, the monocular vertical resolution h_monitor1 of the display device, and the vertical height h_screen2 of the plane; that is, the quotient of the monocular vertical resolution of the display device and the second ratio is taken as the optimal vertical resolution:
h_screen1 = h_screen2 × h_monitor1 / h_monitor2    (Formula 4)
Substituting Formulas 1 and 2 into Formulas 3 and 4 gives:
w_screen1 = w_monitor1 × w_screen2 / (2 × d × tan(w_fov / 2))    (Formula 5)
h_screen1 = h_monitor1 × h_screen2 / (2 × d × tan(h_fov / 2))    (Formula 6)
This completes the calculation of the optimal horizontal resolution and the optimal vertical resolution.
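Formulas 1 through 6 can be sketched together in one helper; a minimal illustration assuming field angles are given in degrees and plane sizes in VR-scene units (the function name is hypothetical):

```python
import math

def optimal_resolution(d, w_fov_deg, h_fov_deg,
                       w_monitor1, h_monitor1,
                       w_screen2, h_screen2):
    """Optimal horizontal/vertical resolution (w_screen1, h_screen1)
    of the three-dimensional projection plane at distance d."""
    # Formulas 1 and 2: maximum width/height of the plane at distance d
    w_monitor2 = 2 * d * math.tan(math.radians(w_fov_deg) / 2)
    h_monitor2 = 2 * d * math.tan(math.radians(h_fov_deg) / 2)
    # Formulas 3 and 4 (equivalent to Formulas 5 and 6 after substitution)
    w_screen1 = w_screen2 * w_monitor1 / w_monitor2
    h_screen1 = h_screen2 * h_monitor1 / h_monitor2
    return w_screen1, h_screen1
```

For example, with a 90° field angle, d = 1, the maximum plane width is 2 scene units; a plane of width 2 then maps to the full monocular resolution.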
It should be noted that the embodiments of the present application support equivalent variants of the first and second correspondences. For example, in the first correspondence, if the third ratio of the maximum horizontal width w_monitor2 of the three-dimensional projection plane to the monocular horizontal resolution w_monitor1 of the display device equals the ratio of the horizontal width w_screen2 of the plane to its optimal horizontal resolution w_screen1, then the optimal horizontal resolution is the quotient of the horizontal width of the plane and the third ratio; in the second correspondence, if the fourth ratio of the maximum vertical height h_monitor2 of the plane to the monocular vertical resolution h_monitor1 of the display device equals the ratio of the vertical height h_screen2 of the plane to its optimal vertical resolution h_screen1, then the optimal vertical resolution is the quotient of the vertical height of the plane and the fourth ratio.
S402: and according to the optimal horizontal resolution, the optimal vertical resolution and the horizontal and vertical resolutions of the video frame, down-sampling the video frame to obtain a sampled video frame.
In this step, the following operations are performed for each video frame. First, the horizontal and vertical resolutions of the video frame are obtained; the horizontal resolution is denoted w_video and the vertical resolution h_video. The optimal horizontal resolution is then compared with the horizontal resolution of the video frame, and the optimal vertical resolution with the vertical resolution of the video frame, and the video frame is downsampled in the corresponding direction(s) according to the comparison results to obtain the sampled video frame.
In a specific implementation: if the optimal horizontal resolution w_screen1 is greater than or equal to the horizontal resolution w_video of the video frame and the optimal vertical resolution h_screen1 is less than the vertical resolution h_video, the video frame is downsampled in the vertical direction only, yielding a frame with horizontal resolution w_video and vertical resolution h_screen1; if w_screen1 is less than w_video and h_screen1 is greater than or equal to h_video, the frame is downsampled in the horizontal direction only, yielding a frame with horizontal resolution w_screen1 and vertical resolution h_video; if w_screen1 is less than w_video and h_screen1 is less than h_video, the frame is downsampled in both directions, yielding a frame with horizontal resolution w_screen1 and vertical resolution h_screen1; if w_screen1 is greater than or equal to w_video and h_screen1 is greater than or equal to h_video, no processing is needed and the video frame itself is taken as the sampled video frame.
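The per-direction decision can be sketched as follows, under the assumption that a direction is downsampled only when the frame's resolution exceeds the optimal resolution in that direction (the function name is hypothetical):

```python
def sampled_size(w_screen1, h_screen1, w_video, h_video):
    """Target resolution of the sampled video frame, per direction."""
    # Keep a direction as-is when the optimum is at least the frame's
    # resolution; otherwise downsample that direction to the optimum.
    target_w = w_video if w_screen1 >= w_video else int(w_screen1)
    target_h = h_video if h_screen1 >= h_video else int(h_screen1)
    return target_w, target_h
```

For instance, an optimal resolution of 1000×2000 against a 1920×1080 frame downsamples only the horizontal direction, giving 1000×1080.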
S403: and acquiring color values from the sampled video frame according to the UV coordinates of each fragment to render a rectangular grid, obtaining and displaying the rendered video frame, wherein the UV coordinates of each fragment are obtained by interpolation according to the UV coordinates of the vertexes of each grid in the rectangular grid.
In this step, a rectangular grid comprising a plurality of meshes is created in the GPU, each mesh consisting of a pair of triangles and containing a plurality of mesh vertices; the UV coordinates of each fragment are obtained through rasterization, interpolated from the UV coordinates of the mesh vertices. A color value is then obtained from the sampled video frame according to the UV coordinates of each fragment in the rectangular grid, the grid is rendered in the pixel shader according to the color value of each fragment to obtain the rendered video frame, and the rendered video frame is displayed by the display of the display device.
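As a rough CPU-side analogue of the fragment-wise texture fetch described above, a nearest-neighbour lookup might look like this (illustrative only; a real pixel shader performs this with a hardware texture sampler):

```python
def sample_color(frame, u, v):
    """Fetch a color value from the sampled video frame by UV
    coordinates in [0, 1]; frame is a list of rows of RGB tuples."""
    h, w = len(frame), len(frame[0])
    # Map UV to the nearest texel, clamping at the frame edge.
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return frame[y][x]
```

Because the frame has already been downsampled to the optimal resolution, neighbouring fragments fetch neighbouring texels, which is what suppresses the flicker described earlier.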
Before S401, the method further includes acquiring the VR video. Specifically, the user selects a VR video to be played from an application program of the display device through an interaction process, and the display device responds to the VR video playing instruction, which carries the web address of the VR video to be played. The display device sends a video acquisition request to the server according to the address, and the server, after receiving the request, sends the corresponding VR video to the display device.
In some embodiments, in order to increase the playing speed of the VR video, the display device may first try to obtain the VR video locally, and send a video acquisition request to the server only if it is not found.
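The cache-first strategy can be sketched as follows (the helper names and the cache-on-miss behavior are hypothetical, not from the specification):

```python
def get_vr_video(url, local_cache, fetch_from_server):
    """Return the VR video for url, preferring the local cache."""
    # Check the local cache first to speed up playback.
    video = local_cache.get(url)
    if video is not None:
        return video
    # Cache miss: request the video from the server.
    video = fetch_from_server(url)
    local_cache[url] = video  # keep it for subsequent playbacks
    return video
```

Here `local_cache` is any dict-like store and `fetch_from_server` stands in for the network request to server 200.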
Fig. 5 illustrates an interactive process of a complete adaptive display image provided by the embodiment of the present application. As shown in fig. 5, the process mainly includes the following steps:
s501: and the display equipment responds to the VR video playing instruction.
In this step, the user selects a VR video to be played through a display interface or a function key, a video playing instruction is sent to the display device according to the selected VR video, and the display device responds; the playing instruction carries the web address of the VR video to be played.
S502: and the display equipment sends a VR video acquisition request to the server according to the website carried by the playing instruction.
S503: and the server receives the VR video acquisition request and sends the corresponding VR video and the horizontal and vertical resolution of the VR video to the display equipment.
S504: and the display equipment receives the VR video sent by the server to obtain the horizontal and vertical resolutions of the VR video.
S505: and for any video frame in the VR video, determining whether the resolution of the video frame is greater than the monocular resolution of the display device, if so, executing S506, otherwise, executing S509.
In this step, the horizontal resolution of the video frame is compared with the monocular horizontal resolution of the display device, and the vertical resolution of the video frame with the monocular vertical resolution. If the horizontal resolution of the video frame is greater than the monocular horizontal resolution and/or the vertical resolution of the video frame is greater than the monocular vertical resolution, the resolution of the video frame is greater than the monocular resolution of the display device, the video frame needs to be downsampled, and S506 is executed; otherwise, the resolution of the video frame is less than or equal to the monocular resolution of the display device, no downsampling is needed, and S509 is executed.
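The comparison in this step reduces to a simple predicate; a minimal sketch (function name is hypothetical):

```python
def needs_downsampling(w_video, h_video, w_monitor1, h_monitor1):
    # Per S505: downsample when the frame exceeds the monocular
    # resolution in the horizontal and/or vertical direction.
    return w_video > w_monitor1 or h_video > h_monitor1
```

For example, a 3840×2160 frame on a 1920×1080 monocular screen triggers S506, while a 1280×720 frame goes straight to S509.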
S506: determining an optimal horizontal resolution of a three-dimensional projection plane for displaying the video frame according to the first correspondence, and determining an optimal vertical resolution of the three-dimensional projection plane according to the second correspondence.
The detailed description of this step is referred to S401 and will not be repeated here.
S507: and comparing the optimal horizontal resolution with the horizontal resolution of the video frame and the optimal vertical resolution with the vertical resolution of the video frame, and performing down-sampling on the video frame in the corresponding direction according to the comparison result to obtain the sampled video frame.
The detailed description of this step is referred to S402 and will not be repeated here.
S508: and obtaining color values from the sampled video frame according to the UV coordinates of each fragment to render a rectangular grid, and obtaining and displaying the rendered video frame.
The detailed description of this step is referred to S403, and will not be repeated here.
S509: and directly acquiring color values from the video frame according to the UV coordinates of the fragments to render the rectangular grid, and obtaining and displaying the rendered video frame.
The detailed description of each fragment in this step is referred to S403, and is not repeated here.
In the above embodiments of the application, for each video frame, the distance from the user viewpoint to the three-dimensional projection plane and the corresponding horizontal width and vertical height of the plane are obtained; combined with the maximum horizontal field angle, maximum vertical field angle, monocular horizontal resolution, and monocular vertical resolution of the display device from the configuration file, the optimal resolution of the three-dimensional projection plane corresponding to the monocular screen in the current VR environment is calculated adaptively. The video frame is downsampled according to the optimal resolution and the frame's resolution, a rectangular grid is rendered from the sampled frame, and the rendered video frame is displayed by the display device. Because the sampled video frame is obtained by downsampling in the appropriate direction(s) according to the comparison of the optimal horizontal resolution with the frame's horizontal resolution and of the optimal vertical resolution with the frame's vertical resolution, the resolution of the sampled frame matches the screen resolution of the display device, which effectively reduces the abnormal phenomenon of image flicker caused by the frame's resolution being greater than that of the display device screen, and improves the user experience.
It should be noted that the adaptive image display method provided by the present application may be used when playing video in a VR scene and also in other extended applications; for example, when a picture is displayed through a UI, the optimal resolution of the picture at the corresponding position may be calculated, the picture downsampled (or kept at its original resolution), and then rendered and displayed on the UI.
It should be noted that the shader languages usable in the embodiments of the present application include, but are not limited to, GLSL (the shading language of OpenGL), HLSL (the shading language of Microsoft DirectX), Cg (C for Graphics, a shading language developed jointly by Microsoft and NVIDIA), and Unity3D shaders (the shader language of Unity3D).
Based on the same technical concept, embodiments of the present application provide a display device for adaptively displaying an image, where the display device can implement the adaptive display method for an image in the foregoing embodiments, and can achieve the same technical effects, which are not described herein again.
Referring to fig. 6, the display device includes a display 601, a memory 602, and a graphics processor 603. The display 601, connected to the graphics processor 603, is configured to display the VR video; the memory 602, connected to the graphics processor 603, is configured to store computer instructions; and the graphics processor 603 is configured to perform the adaptive display method of an image according to the computer instructions stored in the memory 602.
The embodiment of the application also provides a computer-readable storage medium, and computer-executable instructions are stored in the computer-readable storage medium and used for enabling a computer to execute the self-adaptive display method of the image provided by the embodiment of the application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. A display device for adaptively displaying an image, comprising a display, a memory, and a graphics processor:
the display, coupled to the graphics processor, configured to display a Virtual Reality (VR) video;
the memory, coupled to the graphics processor, configured to store computer instructions;
the graphics processor configured to perform the following operations in accordance with the computer instructions:
when the resolution of the video frame is greater than the monocular resolution of a display device, determining the optimal horizontal resolution of a three-dimensional projection plane for displaying the video frame according to a first corresponding relation and determining the optimal vertical resolution of the three-dimensional projection plane according to a second corresponding relation, wherein the first corresponding relation is determined according to the distance from a user viewpoint to the three-dimensional projection plane, the maximum horizontal field angle of the display device and the monocular horizontal resolution, and the second corresponding relation is determined according to the distance from the user viewpoint to the three-dimensional projection plane, the maximum vertical field angle of the display device and the monocular vertical resolution;
according to the optimal horizontal resolution, the optimal vertical resolution and the horizontal and vertical resolutions of the video frame, down-sampling the video frame to obtain a sampled video frame;
and acquiring a color value from the sampled video frame according to the UV coordinate of each fragment to render a rectangular grid, so as to obtain and display the rendered video frame, wherein the UV coordinate of each fragment is obtained by interpolation according to the UV coordinate of each grid vertex in the rectangular grid.
2. The display device of claim 1, wherein the graphics processor determines the first correspondence by:
determining the maximum horizontal width of the three-dimensional projection plane according to the distance from the user viewpoint to the three-dimensional projection plane and the maximum horizontal field angle of the display device;
determining a first ratio of the maximum horizontal width of the three-dimensional projection plane to the horizontal width of the three-dimensional projection plane as a ratio of the monocular horizontal resolution of the display device to the optimal horizontal resolution of the three-dimensional projection plane, so as to obtain the first corresponding relationship;
the graphics processor determines the second correspondence by:
determining the maximum vertical height of the three-dimensional projection plane according to the distance from the user viewpoint to the three-dimensional projection plane and the maximum vertical field angle of the display device;
and determining a second ratio of the maximum vertical height of the three-dimensional projection plane to the vertical height of the three-dimensional projection plane as a ratio of the monocular vertical resolution of the display device to the optimal vertical resolution of the three-dimensional projection plane, so as to obtain the second corresponding relation.
3. The display device of claim 2, wherein the graphics processor, in determining the optimal horizontal resolution of the three-dimensional projection plane for displaying the video frame according to the first correspondence, is specifically configured to:
determining a quotient of the monocular horizontal resolution of the display device and the first ratio as an optimal horizontal resolution of the three-dimensional projection plane; and
the graphics processor determines the optimal vertical resolution of the three-dimensional projection plane according to the second correspondence, and is specifically configured to:
and determining the quotient of the monocular vertical resolution of the display device and the second ratio as the optimal vertical resolution of the three-dimensional projection plane.
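Claims 2 and 3 together reduce to a small geometric computation: the maximum extent of the plane visible at distance d under a field angle θ is 2·d·tan(θ/2), the first/second ratio divides that by the plane's actual extent, and the optimal resolution is the monocular resolution divided by that ratio. A minimal sketch, with illustrative function and parameter names (one call per axis):

```python
import math

def optimal_resolution(distance, fov_deg, monocular_res, plane_extent):
    """Optimal resolution along one axis of the 3D projection plane.

    distance      -- user viewpoint to projection plane (scene units)
    fov_deg       -- maximum field angle of the device along this axis
    monocular_res -- device monocular resolution along this axis (pixels)
    plane_extent  -- width (or height) of the projection plane (scene units)
    """
    # Maximum visible extent at this distance: 2 * d * tan(FOV / 2)
    max_extent = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    # First (or second) ratio: maximum extent over actual plane extent,
    # set equal to monocular_res / optimal_res per claim 2
    ratio = max_extent / plane_extent
    # Claim 3: the optimal resolution is the quotient of the monocular
    # resolution and the ratio
    return monocular_res / ratio
```

With a 90° field angle at distance 1, the visible extent is 2 scene units, so a plane of width 2 maps one-to-one onto the monocular resolution, while a plane of width 1 only needs half of it.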
4. The display device of claim 1, wherein the graphics processor downsamples the video frame according to the optimal horizontal resolution, the optimal vertical resolution, and the horizontal and vertical resolutions of the video frame to obtain a sampled video frame, and is specifically configured to:
and comparing the optimal horizontal resolution with the horizontal resolution of the video frame and the optimal vertical resolution with the vertical resolution of the video frame, and performing down-sampling on the video frame in the corresponding direction according to the comparison result to obtain a sampled video frame.
5. The display device of claim 4, wherein the graphics processor is configured to:
if the optimal horizontal resolution is greater than the horizontal resolution of the video frame and the optimal vertical resolution is less than or equal to the vertical resolution of the video frame, down-sampling the video frame in the vertical direction to obtain a video frame whose horizontal resolution equals the horizontal resolution of the original video frame and whose vertical resolution equals the optimal vertical resolution; or
if the optimal horizontal resolution is less than or equal to the horizontal resolution of the video frame and the optimal vertical resolution is greater than the vertical resolution of the video frame, down-sampling the video frame in the horizontal direction to obtain a video frame whose horizontal resolution equals the optimal horizontal resolution and whose vertical resolution equals the vertical resolution of the original video frame; or
if the optimal horizontal resolution is less than or equal to the horizontal resolution of the video frame and the optimal vertical resolution is less than or equal to the vertical resolution of the video frame, down-sampling the video frame in both the horizontal and vertical directions to obtain a video frame whose horizontal resolution equals the optimal horizontal resolution and whose vertical resolution equals the optimal vertical resolution; or
if the optimal horizontal resolution is greater than the horizontal resolution of the video frame and the optimal vertical resolution is greater than the vertical resolution of the video frame, taking the video frame itself as the sampled video frame.
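Taken together, the four branches amount to clamping each axis independently: an axis is reduced to the optimal resolution when the optimal value is smaller, and left unchanged otherwise, so the frame is never enlarged. A compact sketch with illustrative names:

```python
def target_resolution(opt_w, opt_h, frame_w, frame_h):
    """Target size of the sampled video frame.

    Each axis is down-sampled to the optimal resolution only where the
    optimal value is smaller than the frame's; otherwise that axis is
    kept as-is (down-sampling never up-scales)."""
    return min(opt_w, frame_w), min(opt_h, frame_h)
```

The two mixed cases of the claim fall out naturally: only the axis where the optimal resolution is the smaller of the two gets resampled.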
6. An adaptive display method of an image, comprising:
when the resolution of the video frame is greater than the monocular resolution of a display device, determining the optimal horizontal resolution of a three-dimensional projection plane for displaying the video frame according to a first corresponding relation and determining the optimal vertical resolution of the three-dimensional projection plane according to a second corresponding relation, wherein the first corresponding relation is determined according to the distance from a user viewpoint to the three-dimensional projection plane, the maximum horizontal field angle of the display device and the monocular horizontal resolution, and the second corresponding relation is determined according to the distance from the user viewpoint to the three-dimensional projection plane, the maximum vertical field angle of the display device and the monocular vertical resolution;
according to the optimal horizontal resolution, the optimal vertical resolution and the horizontal and vertical resolutions of the video frame, down-sampling the video frame to obtain a sampled video frame;
and acquiring a color value from the sampled video frame according to the UV coordinate of each fragment to render a rectangular grid, so as to obtain and display the rendered video frame, wherein the UV coordinate of each fragment is obtained by interpolation according to the UV coordinate of each grid vertex in the rectangular grid.
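The per-fragment step in the last limb — interpolating a UV coordinate from the rectangular grid's vertex UVs and fetching a color from the sampled frame — can be sketched in plain Python. On a GPU the interpolation is performed by the rasterizer; all names here are illustrative, and nearest-texel fetch is an assumption of the sketch:

```python
def bilerp(v00, v10, v01, v11, s, t):
    """Bilinear interpolation of four corner values (here: UV pairs),
    with (s, t) the fragment's position inside its grid cell."""
    top = tuple((1 - s) * a + s * b for a, b in zip(v00, v10))
    bot = tuple((1 - s) * a + s * b for a, b in zip(v01, v11))
    return tuple((1 - t) * a + t * b for a, b in zip(top, bot))

def shade_fragment(uv00, uv10, uv01, uv11, s, t, frame):
    """Interpolate the fragment's UV from the grid-vertex UVs, then
    acquire the color value from the sampled video frame.
    `frame` is a row-major 2D list of color values."""
    u, v = bilerp(uv00, uv10, uv01, uv11, s, t)
    h, w = len(frame), len(frame[0])
    x = min(int(u * w), w - 1)  # clamp so u == 1.0 stays in range
    y = min(int(v * h), h - 1)
    return frame[y][x]
```

For a single grid cell spanning the whole texture, a fragment at the cell center interpolates to UV (0.5, 0.5) and samples the corresponding texel of the down-sampled frame.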
7. The method of claim 6, wherein the first correspondence is determined by:
determining the maximum horizontal width of the three-dimensional projection plane according to the distance from the user viewpoint to the three-dimensional projection plane and the maximum horizontal field angle of the display device;
determining a first ratio of the maximum horizontal width of the three-dimensional projection plane to the horizontal width of the three-dimensional projection plane as a ratio of the monocular horizontal resolution of the display device to the optimal horizontal resolution of the three-dimensional projection plane, so as to obtain the first corresponding relationship;
determining the second correspondence by:
determining the maximum vertical height of the three-dimensional projection plane according to the distance from the user viewpoint to the three-dimensional projection plane and the maximum vertical field angle of the display device;
and determining a second ratio of the maximum vertical height of the three-dimensional projection plane to the vertical height of the three-dimensional projection plane as a ratio of the monocular vertical resolution of the display device to the optimal vertical resolution of the three-dimensional projection plane, so as to obtain the second corresponding relation.
8. The method of claim 7, wherein determining the optimal horizontal resolution of the three-dimensional projection plane for displaying the video frame according to the first correspondence comprises:
determining a quotient of the monocular horizontal resolution of the display device and the first ratio as an optimal horizontal resolution of the three-dimensional projection plane; and
determining the optimal vertical resolution of the three-dimensional projection plane according to the second correspondence, including:
and determining the quotient of the monocular vertical resolution of the display device and the second ratio as the optimal vertical resolution of the three-dimensional projection plane.
9. The method of claim 6, wherein downsampling the video frame according to the optimal horizontal resolution, the optimal vertical resolution, and the horizontal and vertical resolutions of the video frame to obtain a sampled video frame comprises:
and comparing the optimal horizontal resolution with the horizontal resolution of the video frame and the optimal vertical resolution with the vertical resolution of the video frame, and performing down-sampling on the video frame in the corresponding direction according to the comparison result to obtain a sampled video frame.
10. The method of claim 9, wherein down-sampling the video frame in the corresponding direction according to the comparison result to obtain a sampled video frame comprises:
if the optimal horizontal resolution is greater than the horizontal resolution of the video frame and the optimal vertical resolution is less than or equal to the vertical resolution of the video frame, down-sampling the video frame in the vertical direction to obtain a video frame whose horizontal resolution equals the horizontal resolution of the original video frame and whose vertical resolution equals the optimal vertical resolution; or
if the optimal horizontal resolution is less than or equal to the horizontal resolution of the video frame and the optimal vertical resolution is greater than the vertical resolution of the video frame, down-sampling the video frame in the horizontal direction to obtain a video frame whose horizontal resolution equals the optimal horizontal resolution and whose vertical resolution equals the vertical resolution of the original video frame; or
if the optimal horizontal resolution is less than or equal to the horizontal resolution of the video frame and the optimal vertical resolution is less than or equal to the vertical resolution of the video frame, down-sampling the video frame in both the horizontal and vertical directions to obtain a video frame whose horizontal resolution equals the optimal horizontal resolution and whose vertical resolution equals the optimal vertical resolution; or
if the optimal horizontal resolution is greater than the horizontal resolution of the video frame and the optimal vertical resolution is greater than the vertical resolution of the video frame, taking the video frame itself as the sampled video frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110257615.2A CN113115018A (en) | 2021-03-09 | 2021-03-09 | Self-adaptive display method and display equipment for image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113115018A true CN113115018A (en) | 2021-07-13 |
Family
ID=76711599
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110257615.2A Pending CN113115018A (en) | 2021-03-09 | 2021-03-09 | Self-adaptive display method and display equipment for image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113115018A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105916022A (en) * | 2015-12-28 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | Video image processing method and apparatus based on virtual reality technology |
CN105933726A (en) * | 2016-05-13 | 2016-09-07 | 乐视控股(北京)有限公司 | Virtual reality terminal and video resolution adaptation method and device thereof |
CN107220925A (en) * | 2017-05-05 | 2017-09-29 | 珠海全志科技股份有限公司 | A kind of real accelerating method and device of real-time virtual |
CN107317987A (en) * | 2017-08-14 | 2017-11-03 | 歌尔股份有限公司 | The display data compression method and equipment of virtual reality, system |
CN108174174A (en) * | 2017-12-29 | 2018-06-15 | 暴风集团股份有限公司 | VR image display methods, device and terminal |
CN108347647A (en) * | 2018-02-12 | 2018-07-31 | 深圳创维-Rgb电子有限公司 | Video picture displaying method, device, television set and storage medium |
CN108470369A (en) * | 2018-03-26 | 2018-08-31 | 城市生活(北京)资讯有限公司 | A kind of water surface rendering intent and device |
US20190266802A1 (en) * | 2016-10-14 | 2019-08-29 | Nokia Technologies Oy | Display of Visual Data with a Virtual Reality Headset |
CN110536176A (en) * | 2019-07-31 | 2019-12-03 | 深圳银澎云计算有限公司 | A kind of video resolution method of adjustment, electronic equipment and storage medium |
CN110602475A (en) * | 2019-05-29 | 2019-12-20 | 珠海全志科技股份有限公司 | Method and device for improving image quality, VR display equipment and control method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10096157B2 (en) | Generation of three-dimensional imagery from a two-dimensional image using a depth map | |
US6429867B1 (en) | System and method for generating and playback of three-dimensional movies | |
KR101697184B1 (en) | Apparatus and Method for generating mesh, and apparatus and method for processing image | |
US20170155885A1 (en) | Methods for reduced-bandwidth wireless 3d video transmission | |
US10403045B2 (en) | Photorealistic augmented reality system | |
EP3643059B1 (en) | Processing of 3d image information based on texture maps and meshes | |
CN101189643A (en) | 3D image forming and displaying system | |
JP7197451B2 (en) | Image processing device, method and program | |
WO2012094076A9 (en) | Morphological anti-aliasing (mlaa) of a re-projection of a two-dimensional image | |
WO2015196791A1 (en) | Binocular three-dimensional graphic rendering method and related system | |
JP2022179473A (en) | Generating new frame using rendered content and non-rendered content from previous perspective | |
US9754398B1 (en) | Animation curve reduction for mobile application user interface objects | |
Queguiner et al. | Towards mobile diminished reality | |
JP2008243046A (en) | Texture processing device, method, and program | |
CN113206993A (en) | Method for adjusting display screen and display device | |
CN107562185B (en) | Light field display system based on head-mounted VR equipment and implementation method | |
EP3057316B1 (en) | Generation of three-dimensional imagery to supplement existing content | |
US6559844B1 (en) | Method and apparatus for generating multiple views using a graphics engine | |
Rasool et al. | Haptic interaction with 2D images | |
US20200410767A1 (en) | Content generation system and method | |
CA3155612A1 (en) | Method and system for providing at least a portion of content having six degrees of freedom motion | |
KR20120119774A (en) | Stereoscopic image generation method, device and system using circular projection and recording medium for the same | |
KR101227155B1 (en) | Graphic image processing apparatus and method for realtime transforming low resolution image into high resolution image | |
CN113093903B (en) | Image display method and display equipment | |
CN113115018A (en) | Self-adaptive display method and display equipment for image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210713 |