CN114095655A - Method and device for displaying streaming data - Google Patents

Method and device for displaying streaming data

Info

Publication number: CN114095655A
Authority: CN (China)
Prior art keywords: data, texture data, eye, texture, streaming
Prior art date: 2021-11-17
Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application number: CN202111359884.6A
Other languages: Chinese (zh)
Inventors: 王智利, 于全夫, 郝冬宁
Current Assignee: Hisense Visual Technology Co., Ltd. (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Original Assignee: Hisense Visual Technology Co., Ltd.
Priority date: 2021-11-17 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2021-11-17
Publication date: 2022-02-25
Application filed by: Hisense Visual Technology Co., Ltd.
Priority to: CN202111359884.6A
Publication of: CN114095655A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2625 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
    • H04N5/2627 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects for providing spin image effect, 3D stop motion effect or temporal freeze effect

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

The application relates to the technical field of display and provides a method and a device for displaying streaming data. When the first frame of streaming data sent by an external device is received, the type of the streaming data is determined, so the type does not need to be judged for every frame and computing resources are saved. Texture data is obtained by frame-decoding the streaming data and is copied into the rendering pipeline of the GPU for rendering and display, which makes full use of the parallel processing capability of the GPU and does not occupy CPU resources. During rendering, for each type of streaming data, the left-eye texture data and the right-eye texture data are extracted in a manner matched to that type from texture data copied only once; the left-eye and right-eye pictures are then drawn from the extracted texture data and displayed simultaneously. This improves the rendering capability and display efficiency of the VR device and thus improves the user's immersive experience.

Description

Method and device for displaying streaming data
Technical Field
The present application relates to the field of display technologies, and in particular, to a method and an apparatus for displaying streaming data.
Background
With the development of Virtual Reality (VR) technology, immersive experiences have gradually spread into many industries of modern life, such as live broadcasting and gaming. Streaming data is a high-frequency use scenario in the VR field, and the rendering and display of streaming data directly affect the user's immersive experience.
Taking a game scenario as an example, after receiving game data from the external device, the VR device renders it to the left-eye and right-eye display screens; the higher the rendering efficiency, the more precise and responsive the player's competitive experience.
Therefore, improving the rendering and display efficiency of streaming data is of great significance to the immersive experience provided by VR devices.
Disclosure of Invention
The embodiments of the present application provide a method and a device for displaying streaming data, which are used to improve the efficiency with which a VR device renders and displays streaming data.
In a first aspect, an embodiment of the present application provides a method for displaying streaming data, which is applied to a VR device, and includes:
when first frame streaming data sent by external equipment is received, determining the type of the streaming data;
for each frame of received streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data;
respectively acquiring left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data;
rendering a left eye picture and a right eye picture respectively according to the left eye texture data and the right eye texture data, and displaying the left eye picture and the right eye picture simultaneously.
In a second aspect, an embodiment of the present application provides a VR device, which includes a processor, a memory, a display, and an external communication interface, where the external communication interface, the display, and the memory are connected to the processor through a bus;
the memory stores computer program instructions, and the processor executes the following according to the computer program instructions:
determining the type of the streaming data when first frame streaming data sent by an external device is received through the external communication interface;
for each frame of received streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data;
respectively acquiring left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data;
and rendering a left eye picture and a right eye picture respectively according to the left eye texture data and the right eye texture data, and simultaneously displaying the left eye picture and the right eye picture by the display.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, where computer-executable instructions are stored, and the computer-executable instructions are configured to cause a VR device to perform the method for displaying streaming data provided in the embodiments of the present application.
In the above embodiments of the application, the VR device obtains texture data by frame-decoding the streaming data sent by the external device, and determines the type of the streaming data from the first frame of streaming data, so the type does not need to be judged for every frame and computing resources are saved. For each type of streaming data, the left-eye texture data and the right-eye texture data are extracted from the texture data in a manner matched to that type, and the left-eye and right-eye pictures are drawn from the extracted texture data and displayed simultaneously. Because the left-eye and right-eye texture data are both obtained from texture data copied only once, the rendering capability of the VR device is improved, the rendering and display efficiency of the streaming data is higher, and the user's immersive experience is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 schematically illustrates an application scenario provided by an embodiment of the present application;
fig. 2 is a schematic diagram illustrating a display process of commonly used binocular data;
fig. 3 is a schematic diagram illustrating a display process of binocular data provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a display process of monocular data provided by an embodiment of the present application;
fig. 5 is a flowchart illustrating a method for displaying streaming data according to an embodiment of the present application;
fig. 6 is a flowchart illustrating a method for displaying streaming data in an interaction process between a VR device and an external device according to an embodiment of the present application;
fig. 7 is a block diagram schematically illustrating a VR device provided in an embodiment of the present application.
Detailed Description
To make the objects, embodiments and advantages of the present application clearer, the following description of exemplary embodiments of the present application will clearly and completely describe the exemplary embodiments of the present application with reference to the accompanying drawings in the exemplary embodiments of the present application, and it is to be understood that the described exemplary embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
All other embodiments, which can be derived by a person skilled in the art from the exemplary embodiments described herein without inventive step, are intended to be within the scope of the claims appended hereto. In addition, while the disclosure herein has been presented in terms of one or more exemplary examples, it should be appreciated that aspects of the disclosure may be implemented solely as a complete embodiment.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 schematically illustrates an application scenario provided by an embodiment of the present application. As shown in fig. 1, the external device 100 interacts with the VR device 300 through a network 200. The external device 100 captures the streaming data, encodes it, and transmits it over the network 200 to the VR device 300. The VR device 300 decodes the received streaming data, and renders and displays the left-eye and right-eye pictures based on the decoded streaming data.
In the embodiment of the application, a dedicated streaming application is installed on the VR device, and the streaming data sent by the external device can be received and displayed only while the streaming application is running. A streaming assistant is installed on the external device; it captures the real-time picture of the external device, encodes the picture into streaming data, and sends the streaming data to the VR device.
In the VR scenario shown in fig. 1, the streaming data mainly originates from external devices such as a desktop computer 101, a Personal Computer (PC) 102, and a Television (TV) 103. When the streaming data comes from a picture displayed on a TV, it is monocular data: a frame picture is not divided into left-eye and right-eye parts, and the VR device obtains both the left-eye and right-eye texture data from the monocular data when rendering and displaying the frame. When the streaming data comes from a picture displayed on a computer, it is binocular data: the left-eye and right-eye data differ slightly and are transmitted separately.
For binocular streaming data, there are generally two ways to render and display the left-eye and right-eye pictures.
Mode one
The external device encodes the frame picture and transmits the left-eye and right-eye data separately. After receiving the left-eye and right-eye data streams, the VR device decodes them separately and renders and displays pictures based on the decoded data. With this scheme, the transmission and decoding of the binocular streaming data are performed separately, so the required time is doubled, and the left-eye and right-eye data must also be kept synchronized, which is more complex.
Mode two
The external device encodes the binocular streaming data into a single frame picture and transmits it to the VR device. After decoding the frame picture, the VR device manually splits the left-eye and right-eye data, copies them into the Graphics Processing Unit (GPU) in two separate transfers, and renders and displays the left-eye and right-eye pictures respectively, as shown in fig. 2. This scheme requires considerable extra effort, cannot be fully automated, performs two copies, and has low rendering efficiency.
In view of this, the present application provides a method and a device for displaying streaming data. The type of the streaming data is determined first; the decoded whole frame of streaming data is then copied directly into the rendering pipeline for rendering, occupying no CPU resources, and the fragment shader in the rendering pipeline makes full use of the rendering speed of the GPU. The left-eye and right-eye texture data share one frame of streaming data, so with a single copy the left-eye and right-eye texture data can be extracted directly on the GPU to draw the left-eye and right-eye pictures respectively, which improves the rendering and display efficiency of the VR device and further improves the user's immersive experience. With the method of the embodiments of the application, and without considering network transmission delay, each frame can be drawn within 16 ms, which meets the frame budget of a 60 Hz refresh rate.
It should be noted that the programming language used by the shaders in the present application is not limited, and may include the OpenGL Shading Language of the Open Graphics Library (OpenGL), the High Level Shader Language (HLSL), the C for Graphics (CG) shader language, and the Unity3D shader language.
In the embodiment of the present application, the type of the streaming data may be binocular data or monocular data.
Taking binocular streaming data as an example, fig. 3 exemplarily shows a schematic diagram of the display process of the streaming data. As shown in fig. 3, after receiving the streaming data, the VR device copies it into the GPU; in the rendering pipeline of the GPU it then extracts the left-half texture data and the right-half texture data, draws a left-eye picture from the left-half texture data and a right-eye picture from the right-half texture data, and displays the left-eye and right-eye pictures simultaneously to present a stereoscopic visual effect to the user.
Taking monocular streaming data as an example, fig. 4 exemplarily shows a schematic diagram of the display process of the streaming data. As shown in fig. 4, after receiving the streaming data, the VR device copies it into the GPU; in the rendering pipeline of the GPU, the texture data obtained from the monocular data is used as both the left-eye texture data and the right-eye texture data, a left-eye picture is drawn from the left-eye texture data and a right-eye picture from the right-eye texture data, and the two pictures are displayed simultaneously.
In an embodiment of the present application, referring to fig. 5, a method for displaying streaming data is executed by a VR device, and mainly includes the following steps:
s501: when first frame streaming data sent by an external device is received, the type of the streaming data is determined.
In the embodiment of the application, the VR device is installed with a streaming application, and can receive streaming data sent by an external device, where the type of the streaming data includes monocular data and binocular data.
Specifically, when S501 is executed, the streaming assistant of the external device captures the display picture of the external device in real time, encodes each captured frame, and transmits the encoded frame over the network to the VR device. The VR device receives the streaming data transmitted by the external device through the streaming application. During display, all frames of the same video have the same type, so after receiving the first frame of streaming data, the VR device determines the type of the transmitted streaming data and marks it; subsequently received frames do not need to be judged again, which saves CPU resources, reduces the logical operations of the CPU, and increases the rendering and display speed.
The embodiment of the present application does not limit the way in which the streaming data type is represented; for example, if the type is determined to be monocular data it may be marked "0", and if it is determined to be binocular data it may be marked "1".
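For illustration only, the first-frame type check can be sketched in C++ as follows; the names StreamType, FrameHeader and onFrameReceived are hypothetical, and the binocular flag is assumed to arrive with the first frame, since the embodiment does not prescribe how the type is detected.

#include <cstdint>

// Hypothetical sketch: the stream type is inspected once, on the first frame,
// and cached; every later frame reuses the cached mark ("0" monocular, "1" binocular).
enum class StreamType : uint8_t { Monocular = 0, Binocular = 1, Unknown = 255 };

struct FrameHeader {
    bool isBinocular;  // assumed flag carried with the first frame
};

static StreamType g_streamType = StreamType::Unknown;

void onFrameReceived(const FrameHeader& header) {
    if (g_streamType == StreamType::Unknown) {
        // Only the first frame is judged; no per-frame type check afterwards.
        g_streamType = header.isBinocular ? StreamType::Binocular
                                          : StreamType::Monocular;
    }
    // ...decode the frame and hand the whole decoded picture to the GPU (see S502)...
}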
S502: and for each frame of received streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data.
In the embodiment of the application, the VR device decodes each frame of streaming data with a decoding mode matched to the encoding format of the streaming data and copies each decoded frame into the GPU in its entirety, so that rendering and display are performed in the rendering pipeline of the GPU; this makes full use of the rendering advantages of the pipeline, increases the rendering and display speed, and does not occupy CPU resources.
In S502, after each frame of the streaming data is copied to the GPU, the rendering pipeline may obtain texture data of the display screen.
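As a rough sketch of this one-copy-per-frame step (an assumption for illustration, not the exact implementation of the embodiment), the decoded frame can be uploaded to a single OpenGL texture in one transfer. The snippet assumes the decoder outputs an RGBA buffer in CPU memory; the helper names createFrameTexture and uploadFrame are hypothetical, and on many devices the decoder output can instead be bound to the GPU directly.

#include <GLES3/gl3.h>
#include <cstdint>

// Allocate one texture large enough for the whole decoded frame
// (both eyes side by side when the stream is binocular).
GLuint createFrameTexture(int width, int height) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    return tex;
}

// One copy per frame: the entire decoded frame goes to the GPU in a single transfer,
// and all later splitting into left-eye/right-eye data happens in the shader.
void uploadFrame(GLuint tex, const uint8_t* rgba, int width, int height) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}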
S503: and respectively acquiring left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data.
When the type of the streaming data is binocular data, in S503, after the texture data of the display picture is obtained, a first part of the texture data (e.g., the left half texture data in fig. 3) is taken as the left-eye texture data, and a second part of the texture data (e.g., the right half texture data in fig. 3) is taken as the right-eye texture data. The sampling formula of the texture data is as follows:
texture(V_tex, Vec2(V_texPo.x*0.5+0.5*isLeft, V_texPo.y))    (Formula 1)
Wherein V_tex represents the acquired texture data, texture(·) represents the function for sampling texture data from V_tex, Vec2(·) represents the input texture coordinates, V_texPo.x represents the s component of the texture coordinates, V_texPo.y represents the t component of the texture coordinates, and isLeft takes the value 0 or 1: when isLeft is 0, texture(V_tex, Vec2(·)) represents the extracted right-eye texture data, and when isLeft is 1, texture(V_tex, Vec2(·)) represents the extracted left-eye texture data.
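Formula 1 maps directly onto a fragment shader. The following is a minimal sketch of such a shader in GLSL ES, kept as a C++ string constant so it can be compiled with the usual glCreateShader/glShaderSource/glCompileShader calls; only the names V_tex, V_texPo and isLeft come from the description above, and everything else (version, precision, variable roles) is an assumption for illustration.

// Minimal fragment-shader sketch implementing Formula 1 for binocular data.
// For monocular data the texture coordinate would be sampled unchanged instead.
static const char* kEyeFragmentShader = R"(#version 300 es
precision mediump float;
uniform sampler2D V_tex;   // whole decoded frame, copied to the GPU once
uniform float isLeft;      // 1.0 when sampling for the left eye, 0.0 for the right eye
in vec2 V_texPo;           // texture coordinates in [0,1] from the vertex shader
out vec4 fragColor;
void main() {
    // Formula 1: texture(V_tex, Vec2(V_texPo.x*0.5 + 0.5*isLeft, V_texPo.y))
    fragColor = texture(V_tex, vec2(V_texPo.x * 0.5 + 0.5 * isLeft, V_texPo.y));
}
)";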
When the type of the streaming data is monocular data, the left and right eyes display the same picture, and the texture data for both eyes is obtained from the monocular data; that is, in S503, the acquired texture data is used as both the left-eye texture data and the right-eye texture data (as shown in fig. 4).
S504: and rendering the left eye picture and the right eye picture respectively according to the left eye texture data and the right eye texture data, and displaying the left eye picture and the right eye picture simultaneously.
In general, since the left-eye and right-eye pictures are displayed on separate screens in the VR device, rendering and display can make full use of the parallel processing capability of the GPU. In S504, the vertex shader of the rendering pipeline generates the vertices, and the vertices are rasterized into fragments. In the fragment shader (also referred to as a pixel shader), the color value of each fragment to be rendered on the left display screen is obtained from the left-eye texture data according to the position information of that fragment, and the left-eye picture is rendered from these color values; at the same time, the color value of each fragment to be rendered on the right display screen is obtained from the right-eye texture data according to the position information of that fragment, and the right-eye picture is rendered from these color values.
In S504, after the left-eye and right-eye pictures are drawn, they are displayed synchronously through the left and right lenses of the VR device; the wearer of the VR device then sees a stereoscopic display picture, which improves the user's immersive experience.
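A minimal sketch of the per-frame draw calls follows, assuming the fragment shader sketched above is linked into program, a full-screen quad is already bound, and the two eye pictures are rendered into the left and right halves of a side-by-side render target; the helper name drawBothEyes and the viewport layout are assumptions, not the embodiment's API. Both eyes reuse the same program and the single copied texture, and only the isLeft uniform and the viewport change.

#include <GLES3/gl3.h>

void drawBothEyes(GLuint program, GLuint frameTex, int eyeW, int eyeH) {
    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, frameTex);
    const GLint isLeftLoc = glGetUniformLocation(program, "isLeft");

    // Left-eye picture.
    glViewport(0, 0, eyeW, eyeH);
    glUniform1f(isLeftLoc, 1.0f);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);   // full-screen quad, VAO assumed bound

    // Right-eye picture.
    glViewport(eyeW, 0, eyeW, eyeH);
    glUniform1f(isLeftLoc, 0.0f);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}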
In the following, taking a game scenario as an example, from the perspective of interaction between the VR device and the external device, a complete flow of streaming data display is described, as shown in fig. 6, which mainly includes the following steps:
s601: and a streaming assistant in the external equipment captures a display picture of the running VR game to obtain streaming data.
In S601, after the external device runs the VR game, the streaming assistant captures the display picture in real time, encodes each captured frame, and transmits the encoded frame over the network to the VR device.
S602: the VR device receives the streamed data and decodes it.
In S602, the VR device starts a streaming application to receive streaming data sent by an external device, and decodes the received streaming data.
S603: the VR device determines whether the received streaming data is the first frame streaming data, if so, performs S604, otherwise performs S606.
Since all frames of the same video have the same type, in S603 the type determination is performed only on the first frame of streaming data, which reduces the logical judgments made by the CPU and increases the rendering and display speed.
S604: the VR device determines a type of the streamed data based on the first frame of streamed data.
In S604, the types of the streaming data include monocular data and binocular data.
S605: and the VR equipment identifies and records the streaming data according to the determined type.
In an alternative embodiment, monocular data is marked with a "0" and binocular data is marked with a "1".
S606: and the VR equipment determines whether the streaming data is binocular data according to the type identification of the streaming data, if so, S607 is executed, and otherwise, S608 is executed.
S607: the VR device takes a first part of texture data in the texture data as left-eye texture data and a second part of texture data in the texture data as right-eye texture data.
The detailed description of this step is referred to S503 and will not be repeated here.
S608: the VR device takes the texture data as left-eye texture data and right-eye texture data, respectively.
The detailed description of this step is referred to S503 and will not be repeated here.
S609: and rendering the left eye picture and the right eye picture respectively according to the left eye texture data and the right eye texture data, and displaying the left eye picture and the right eye picture simultaneously.
The detailed description of this step is referred to S504 and will not be repeated here.
In the embodiment of the application, the type of the streaming data is determined and marked from the first frame of streaming data, so the type does not need to be judged for every frame; this reduces the logical computation performed by the CPU and increases the processing speed. After receiving the streaming data transmitted by the external device, the VR device decodes it to obtain the texture data of the display picture, copies the texture data into the rendering pipeline of the GPU with a single copy, and lets the rendering pipeline render and display it, which saves CPU resources, makes full use of the parallel processing capability of the GPU, and increases the rendering and display speed. During rendering and display, for each type of streaming data, the rendering pipeline extracts the left-eye and right-eye texture data from the texture data in a manner matched to that type, draws the left-eye and right-eye pictures from the extracted texture data, and displays them simultaneously. Since the left-eye and right-eye texture data are both obtained from texture data copied only once, the rendering capability of the VR device is improved, the rendering and display efficiency of the streaming data is higher, and the user's immersive experience is further improved.
Based on the same technical concept, embodiments of the present application provide a VR device, which can execute the flow of the method for displaying streaming data provided by the embodiments of the present application and achieve the same technical effect; this is not repeated here.
Referring to fig. 7, the device includes a processor 701, a memory 702, a display 703 and an external communication interface 704, the external communication interface 704, the display 703 and the memory 702 being connected to the processor 701 through a bus 705; the memory 702 stores computer program instructions, and the processor 701 performs the following operations according to the computer program instructions:
determining the type of the streaming data upon receiving the first frame of streaming data transmitted by the external device through the external communication interface 704;
for each frame of received streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data;
respectively acquiring left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data;
and rendering a left-eye picture and a right-eye picture respectively according to the left-eye texture data and the right-eye texture data, and simultaneously displaying the left-eye picture and the right-eye picture on the display 703.
Optionally, when the type of the streaming data is binocular data, the processor 701 obtains left-eye texture data and right-eye texture data from the obtained texture data according to the type of the streaming data, and the specific operations are as follows:
acquiring a first part of texture data from the acquired texture data, and taking the first part of texture data as left-eye texture data; and
and acquiring a second part of texture data from the acquired texture data, and taking the second part of texture data as right-eye texture data.
Optionally, the sampling formula of the left-eye texture data and the right-eye texture data is as follows:
texture(V_tex,Vec2(V_texPo.x*0.5+0.5*isLeft,V_texPo.y))
wherein V_tex represents the acquired texture data, texture(·) represents the function for sampling texture data from V_tex, Vec2(·) represents the input texture coordinates, V_texPo.x represents the s component of the texture coordinates, V_texPo.y represents the t component of the texture coordinates, and isLeft takes the value 0 or 1: when isLeft is 0, texture(V_tex, Vec2(·)) represents the extracted right-eye texture data, and when isLeft is 1, texture(V_tex, Vec2(·)) represents the extracted left-eye texture data.
Optionally, when the type of the streaming data is monocular data, the processor 701 obtains left-eye texture data and right-eye texture data from the obtained texture data according to the type of the streaming data, and specifically:
the acquired texture data is taken as left-eye texture data, and the acquired texture data is taken as right-eye texture data.
Optionally, the processor 701 renders a left-eye image and a right-eye image according to the left-eye texture data and the right-eye texture data, and specifically:
according to the position information of the fragment to be rendered in the left display screen, the color value of the fragment to be rendered is obtained from the left eye texture data, and a left eye picture is rendered according to the color value of the fragment to be rendered; and
and according to the position information of the fragment to be rendered in the right display screen, acquiring the color value of the fragment to be rendered from the texture data of the right eye, and rendering the picture of the right eye according to the color value of the fragment to be rendered.
Embodiments of the present application also provide a computer-readable storage medium for storing instructions that, when executed, may implement the methods of the foregoing embodiments.
The embodiments of the present application also provide a computer program product for storing a computer program, where the computer program is used to execute the method of the foregoing embodiments.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A method for displaying streaming data is applied to Virtual Reality (VR) equipment and comprises the following steps:
when first frame streaming data sent by external equipment is received, determining the type of the streaming data;
for each frame of received streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data;
respectively acquiring left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data;
rendering a left eye picture and a right eye picture respectively according to the left eye texture data and the right eye texture data, and displaying the left eye picture and the right eye picture simultaneously.
2. The method of claim 1, wherein when the type of the streaming data is binocular data, the acquiring left-eye texture data and right-eye texture data from the acquired texture data, respectively, according to the type of the streaming data comprises:
acquiring a first part of texture data from the acquired texture data, and taking the first part of texture data as left-eye texture data; and
and acquiring second part of texture data from the acquired texture data, and taking the second part of texture data as right-eye texture data.
3. The method of claim 2, wherein the sampling formula of the left eye texture data and the right eye texture data is:
texture(V_tex,Vec2(V_texPo.x*0.5+0.5*isLeft,V_texPo.y))
wherein V_tex represents the acquired texture data, texture(·) represents the function for sampling texture data from V_tex, Vec2(·) represents the input texture coordinates, V_texPo.x represents the s component of the texture coordinates, V_texPo.y represents the t component of the texture coordinates, and isLeft takes the value 0 or 1: when isLeft is 0, texture(V_tex, Vec2(·)) represents the extracted right-eye texture data, and when isLeft is 1, texture(V_tex, Vec2(·)) represents the extracted left-eye texture data.
4. The method of claim 1, wherein when the type of the streaming data is monocular data, the obtaining left-eye texture data and right-eye texture data from the obtained texture data according to the type of the streaming data comprises:
the acquired texture data is taken as left-eye texture data, and the acquired texture data is taken as right-eye texture data.
5. The method of any of claims 1-4, wherein rendering a left-eye picture and a right-eye picture from the left-eye texture data and the right-eye texture data, respectively, comprises:
according to the position information of the fragment to be rendered in the left display screen, acquiring the color value of the fragment to be rendered from the left eye texture data, and rendering a left eye picture according to the color value of the fragment to be rendered; and
and according to the position information of the fragment to be rendered in the right display screen, acquiring the color value of the fragment to be rendered from the texture data of the right eye, and rendering a picture of the right eye according to the color value of the fragment to be rendered.
6. A virtual reality (VR) device, comprising a processor, a memory, a display, and an external communication interface, wherein the external communication interface, the display, and the memory are connected to the processor through a bus;
the memory stores computer program instructions, and the processor executes the following according to the computer program instructions:
determining the type of the streaming data when first frame streaming data sent by an external device is received through the external communication interface;
for each frame of received streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data;
respectively acquiring left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data;
and rendering a left eye picture and a right eye picture respectively according to the left eye texture data and the right eye texture data, and simultaneously displaying the left eye picture and the right eye picture by the display.
7. The VR device of claim 6, wherein when the type of the streaming data is binocular data, the processor acquires left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data, and is further configured to:
acquiring a first part of texture data from the acquired texture data, and taking the first part of texture data as left-eye texture data; and
and acquiring second part of texture data from the acquired texture data, and taking the second part of texture data as right-eye texture data.
8. The VR device of claim 7, wherein the sampling formula for the left eye texture data and the right eye texture data is:
texture(V_tex,Vec2(V_texPo.x*0.5+0.5*isLeft,V_texPo.y))
wherein V_tex represents the acquired texture data, texture(·) represents the function for sampling texture data from V_tex, Vec2(·) represents the input texture coordinates, V_texPo.x represents the s component of the texture coordinates, V_texPo.y represents the t component of the texture coordinates, and isLeft takes the value 0 or 1: when isLeft is 0, texture(V_tex, Vec2(·)) represents the extracted right-eye texture data, and when isLeft is 1, texture(V_tex, Vec2(·)) represents the extracted left-eye texture data.
9. The VR device of claim 6, wherein when the type of the streaming data is monocular data, the processor obtains left-eye texture data and right-eye texture data from the obtained texture data according to the type of the streaming data, and is further configured to:
the acquired texture data is taken as left-eye texture data, and the acquired texture data is taken as right-eye texture data.
10. The VR device of any one of claims 6-9, wherein the processor renders left-eye and right-eye pictures from the left-eye texture data and the right-eye texture data, respectively, by:
according to the position information of the fragment to be rendered in the left display screen, acquiring the color value of the fragment to be rendered from the left eye texture data, and rendering a left eye picture according to the color value of the fragment to be rendered; and
and according to the position information of the fragment to be rendered in the right display screen, acquiring the color value of the fragment to be rendered from the texture data of the right eye, and rendering a picture of the right eye according to the color value of the fragment to be rendered.
CN202111359884.6A, filed 2021-11-17, priority date 2021-11-17: Method and device for displaying streaming data (CN114095655A, pending)

Priority Applications (1)

Application number: CN202111359884.6A; priority date: 2021-11-17; filing date: 2021-11-17; title: Method and device for displaying streaming data (CN114095655A)

Applications Claiming Priority (1)

Application number: CN202111359884.6A; priority date: 2021-11-17; filing date: 2021-11-17; title: Method and device for displaying streaming data (CN114095655A)

Publications (1)

Publication number: CN114095655A; publication date: 2022-02-25

Family

ID=80301175

Family Applications (1)

Application number: CN202111359884.6A; priority date: 2021-11-17; filing date: 2021-11-17; title: Method and device for displaying streaming data (CN114095655A, pending)

Country Status (1)

Country Link
CN (1) CN114095655A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050041031A1 (en) * 2003-08-18 2005-02-24 Nvidia Corporation Adaptive load balancing in a multi-processor graphics processing system
CN105916022A (en) * 2015-12-28 2016-08-31 乐视致新电子科技(天津)有限公司 Video image processing method and apparatus based on virtual reality technology
CN106162142A (en) * 2016-06-15 2016-11-23 南京快脚兽软件科技有限公司 A kind of efficient VR scene drawing method
CN108241211A (en) * 2016-12-26 2018-07-03 成都理想境界科技有限公司 One kind wears display equipment and image rendering method
CN108282648A (en) * 2018-02-05 2018-07-13 北京搜狐新媒体信息技术有限公司 A kind of VR rendering intents, device, Wearable and readable storage medium storing program for executing
CN109510975A (en) * 2019-01-21 2019-03-22 恒信东方文化股份有限公司 A kind of extracting method of video image, equipment and system
CN111988598A (en) * 2020-09-09 2020-11-24 江苏普旭软件信息技术有限公司 Visual image generation method based on far and near view layered rendering


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination