CN114637398A - Image streaming display method and device - Google Patents

Image streaming display method and device Download PDF

Info

Publication number
CN114637398A
CN114637398A
Authority
CN
China
Prior art keywords
pose data
compensation
image
time
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210226329.4A
Other languages
Chinese (zh)
Inventor
王智利
万文青
丁国耀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority claimed from CN202210226329.4A
Publication of CN114637398A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30244: Camera pose

Abstract

The present disclosure relates to the field of display technologies and provides an image streaming display method and device. A VR device sends pose data for controlling the displayed image to a common terminal in real time and receives the controlled real images. According to the time interval between two adjacent received real images and a preset number of compensation frames, the VR device determines the prediction time corresponding to each frame of the image to be compensated, determines the compensation pose data corresponding to each prediction time from a stored historical pose data set, and predicts and displays the corresponding compensation images in combination with the currently displayed real image. Because the compensation pose data determined for each prediction time from the historical pose data set all lie between the pose data corresponding to the two adjacent real images, the compensation images between the two adjacent real images conform to the actual motion trajectory; no picture jump occurs when the next real image is displayed, the picture-jitter phenomenon in streaming display is effectively resolved, and user experience is improved.

Description

Image streaming display method and device
Technical Field
The present disclosure relates to the field of display technologies, and in particular, to a method and an apparatus for displaying image streams.
Background
Virtual Reality (VR) technology uses computer systems and sensor technology to create a three-dimensional space that gives users an immersive experience, and it is widely applied across industries, especially in the field of games.
To bring an immersive experience to users, a common terminal (e.g., a mobile phone, a television, a computer) may stream its own picture to a VR device, and the VR device displays the received picture to achieve a 3D effect. A VR device is generally equipped with an Inertial Measurement Unit (IMU) that measures the head pose data of the wearer; by sending this pose data to the common terminal, the VR device can control the displayed picture.
Generally, when a VR device performs rendering and display, the rendering frame rate of the GPU is lower than the refresh rate of the screen. The VR device therefore needs to perform picture compensation through time warping (Time Warp) to reduce the picture jitter caused by the rendering frame rate falling below the refresh rate.
At present, when a VR device performs time warping, the compensation image is predicted from the previous frame of real image and the current pose. In streaming display, however, the VR device transmits pose data to the common terminal over the network, the picture of the common terminal is likewise transmitted back to the VR device over the network, and the head keeps rotating during transmission. There is therefore a certain network delay before the VR device receives the next frame of real image, and the pose data corresponding to that real image precedes the pose of the compensation image. When the VR device replaces the compensation image with the received real image, the picture jumps; fed back to the human eye, the picture appears to shake, causing head dizziness.
Disclosure of Invention
The embodiment of the application provides an image streaming display method and equipment, which are used for reducing picture jitter and further reducing head dizziness.
In a first aspect, an embodiment of the present application provides a method for displaying streaming images, which is applied to a VR device, and includes:
acquiring a time interval of two adjacent real images received from a common terminal, wherein the real images are determined by the common terminal according to pose data sent by the VR equipment;
determining the corresponding prediction time of each frame of image to be compensated according to the time interval and the preset compensation frame number;
for each prediction time, determining one piece of compensation pose data according to a historical pose data set, wherein each piece of compensation pose data lies between the pose data corresponding to the two adjacent real images;
predicting and displaying a corresponding compensation image according to the currently displayed real image and each compensation pose data;
and directly displaying when receiving the next frame of real image sent by the common terminal.
In a second aspect, an embodiment of the present application provides a VR device, which includes a processor, a memory, a communication interface, and a display screen, where the display screen, the communication interface, the memory, and the processor are connected through a bus:
the memory stores a computer program according to which the processor performs the following operations:
receiving two adjacent real images sent by a common terminal through the communication interface, and determining the time interval of the two adjacent real images, wherein the real images are determined by the common terminal according to pose data sent by the VR equipment;
determining the corresponding prediction time of each frame of image to be compensated according to the time interval and the preset compensation frame number;
for each prediction time, determining one piece of compensation pose data according to a historical pose data set, wherein each piece of compensation pose data lies between the pose data corresponding to the two adjacent real images;
predicting a corresponding compensation image according to the real image currently displayed on the display screen and each compensation pose data, and displaying the corresponding compensation image by the display screen;
and when the next frame of real image sent by the common terminal is received through the communication interface, the real image is directly displayed on the display screen.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, where computer-executable instructions are stored, and the computer-executable instructions are configured to cause a VR device to execute the image stream display method provided in the embodiment of the present application.
In the embodiment of the application, the VR device sends pose data to the common terminal in real time and stores the sent historical pose data to obtain a historical pose data set. The common terminal determines a real image according to the pose data and streams it to the VR device. The VR device determines the prediction time corresponding to each frame of the image to be compensated according to the time interval between two adjacent received real images and a preset number of compensation frames; for each prediction time, the VR device determines one piece of compensation pose data according to the historical pose data set, predicts and displays the corresponding compensation image in combination with the currently displayed real image, and directly displays the next frame of real image when it is received. Because the compensation pose data corresponding to each prediction time, determined according to the historical pose data set, all lie between the pose data corresponding to the two adjacent real images, the compensation images between the two adjacent real images conform to the actual motion trajectory and cause no picture jump, effectively resolving the picture-jitter phenomenon in streaming display and improving user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 schematically illustrates an application scenario provided by an embodiment of the present application;
fig. 2 schematically illustrates picture jitter during streaming display on a VR device provided by an embodiment of the application;
FIG. 3 is a flowchart illustrating a method for displaying image streams according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a method for determining a prediction time of an image to be compensated according to an embodiment of the present application;
FIG. 5 is a diagram illustrating an effect of an image stream display provided by an embodiment of the present application;
FIG. 6 is a flow chart illustrating an interaction of an image stream display provided by an embodiment of the present application;
fig. 7 is a block diagram schematically illustrating a VR device provided in an embodiment of the present application.
Detailed Description
To make the objects, embodiments and advantages of the present application clearer, the following clearly and completely describes exemplary embodiments of the present application with reference to the accompanying drawings. It is to be understood that the described exemplary embodiments are only a part, not all, of the embodiments of the present application.
All other embodiments that a person skilled in the art can derive from the exemplary embodiments described herein without inventive effort are intended to fall within the scope of the appended claims. In addition, while the disclosure herein is presented in terms of one or more exemplary examples, it should be appreciated that individual aspects of the disclosure may also constitute a complete embodiment on their own.
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 schematically illustrates an application scenario provided by an embodiment of the present application; as shown in fig. 1, the VR device 100 interacts with a generic terminal 300 through a network 200. The VR device 100 transmits the pose data of the head to the normal terminal 300 through the network 200 to control the image displayed by the normal terminal 300, and the normal terminal 300 transmits the controlled image to the VR device 100 through the network 200 for rendering and displaying. In the streaming display process, the VR device 100 predicts at least one compensation image according to the pose data and the received previous frame image, and replaces the corresponding compensation image when receiving the real image transmitted by the normal terminal 300. The common terminal 300 includes, but is not limited to, a desktop computer 301, a notebook computer 302, and a smart television 303.
In an application scene of non-streaming display, the VR device adopts a time warping technology to predict and compensate images according to the current pose data and the displayed last frame of real images, so that the rendering frame rate is improved, and image jitter is avoided. However, in an application scenario of streaming display, due to network transmission delay, if an image is still predicted and compensated according to current pose data and a previous frame of real image, the problem of image jitter cannot be solved.
In a scene in which images of a common terminal are streamed to a VR device for display, assume that the whole round trip, from the VR device sending pose data to the VR device receiving the corresponding image, takes 30 ms. If the VR device refreshes every 16 ms, a compensation image needs to be predicted within those 30 ms.
For example, as shown in fig. 2, the VR device transmits pose POS_0 to the common terminal over the network at time T0; the common terminal controls its current picture according to POS_0 to obtain picture A and returns picture A to the VR device over the network. The VR device receives picture A at time T1 and renders it on the two display screens for stereoscopic display; the display effect is picture ①. At the same time, the VR device sends the pose POS_1 at time T1 to the common terminal. If the VR device has not received a picture returned by the common terminal at time T2, it predicts a compensation picture from the pose POS_2 at time T2 and the most recently received picture A and displays it; the display effect is picture ②. At time T3, the VR device receives picture B, corresponding to pose POS_1, returned by the common terminal, and renders it on the two display screens for stereoscopic display; the display effect is picture ③. Ideally, following the compensation result of picture ②, the picture displayed at time T3 should correspond to a pose after POS_2, yet the pose corresponding to picture ③ lies before POS_2. The real picture ③ therefore differs greatly from the ideal picture, causing a jump and hence head dizziness.
In view of this, an embodiment of the present application provides an image streaming display method and device. The method determines the prediction time of each frame of the image to be compensated according to the time interval between two adjacent real images received from the common terminal and a preset number of compensation frames, and for each prediction time determines one piece of compensation pose data from the historical pose data, where the compensation pose data lies between the pose data corresponding to the two adjacent real images. The compensation pictures between the two adjacent real images thus conform to the actual motion trajectory and cause no picture jump, effectively resolving the picture-jitter phenomenon in streaming display and improving user experience.
In the embodiment of the application, an image received by the VR device from a common terminal is recorded as a real image, and an image predicted by the VR device according to pose data is recorded as a compensation image.
Referring to fig. 3, a flow of a method for displaying image streams provided by an embodiment of the present application is executed by a VR device, and the method mainly includes the following steps:
s301: and acquiring the time interval of two adjacent real images received from the common terminal, wherein the real images are determined by the common terminal according to the pose data sent by the VR equipment.
When the VR device displays images of the common terminal in streaming mode, it sends pose data to the common terminal over the network in real time. The common terminal controls its displayed picture according to the received pose data at the refresh rate of the target video and sends the controlled real images to the VR device over the network; at the same time, the common terminal sends the pose data corresponding to each real image to the VR device, which stores it.
Generally, the frequency at which the VR device sends pose data may not match the refresh rate of the target video, so not every piece of pose data has a corresponding real image. In the embodiment of the application, when the VR device sends pose data to the common terminal, it also records the sent pose data locally as a historical pose data set; each entry in the set comprises a pose (position and orientation) and the sending time of that pose.
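As an illustrative sketch of such a set (the record layout and the one-dimensional yaw pose are assumptions, not from the patent), each entry can be modeled as a (sending time, pose) record appended at the moment the pose is sent:

```python
from dataclasses import dataclass

@dataclass
class PoseRecord:
    send_time: float  # time at which the pose was sent to the terminal, in ms
    yaw: float        # head orientation in degrees (pose simplified to 1-D)

# Every pose sent is recorded locally, whether or not a real image
# ever comes back for it.
history = []

def send_pose(send_time, yaw):
    """Record the pose locally at the moment it is sent over the network."""
    record = PoseRecord(send_time, yaw)
    history.append(record)
    return record

send_pose(0.0, 10.0)
send_pose(16.0, 12.5)
```

A real implementation would also bound the set's size, keeping only poses newer than the last few real images.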
In practical application, because the VR device does not know when the next frame of real image will arrive, it predicts the compensation pose data of the images to be compensated from the time interval of at least two frames of real images streamed by the common terminal, so that frame compensation can be performed before the next frame of real image is received.
In specific implementation, when the common terminal sends each frame of real image, the pose data corresponding to the frame of real image is also sent to the VR device, and the VR device can determine whether the currently rendered display image is a real image, and record the time of the received two adjacent frames of real images to obtain the time interval of the two adjacent frames of real images.
For example, assuming the VR device receives the first frame of the real-image stream from the common terminal at time T0 and the second frame at time T1, the time interval is deltaT = T1 - T0.
In the embodiment of the application, the time interval is affected by the network; under severe frame loss, the interval between the previous and the next frame of real image exceeds the upper threshold, which reduces the accuracy of the predicted compensation images. To improve the accuracy of compensation-image prediction, abnormal network conditions need to be excluded. Specifically, when the time interval does not exceed a small number of refresh intervals of the VR device screen, the interval is valid and can be used to predict the compensation images; otherwise the interval is invalid and must be recalculated.
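A minimal sketch of such a validity check (the two-refresh threshold and the 16 ms refresh interval are assumptions; the source only requires that over-long intervals be rejected and recalculated):

```python
def interval_is_valid(delta_t_ms, refresh_interval_ms=16.0, max_refreshes=2):
    """An interval is usable for prediction only if it is positive and does
    not exceed a small number of screen refresh intervals; a larger gap
    indicates network frame loss, so the interval must be recalculated."""
    return 0.0 < delta_t_ms <= max_refreshes * refresh_interval_ms

# A 30 ms gap fits within two 16 ms refreshes; a 100 ms gap signals frame loss.
ok = interval_is_valid(30.0)       # -> True
dropped = interval_is_valid(100.0) # -> False
```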
S302: and determining the corresponding prediction time of each frame of image to be compensated according to the time interval and the preset compensation frame number.
In embodiments of the present application, one or more frames of images are compensated between two adjacent real images to ensure the VR device refresh rate requirements. Each frame of image to be compensated corresponds to a prediction time, and the specific determination process is as shown in fig. 4:
s3021: and equally dividing the value interval (0, 1) into corresponding number of intervals according to the number of the compensation frames.
For example, assume that the compensation frame number N is 2, the first value section after the bisection is (0, 0.5), and the second value section after the bisection is (0.5, 1).
For another example, assume the compensation frame number N is 3; the value interval is then split into three sections, approximately (0, 0.3), (0.3, 0.6) and (0.6, 1).
When the compensation frame number N is 1, the corresponding section is the original value section (0, 1).
S3022: a target value is taken from each interval, and the target values are multiplied by the time intervals.
For example, assuming the compensation frame number N is 1, one target value ΔT1 = 0.5 is taken from the section (0, 1). Assuming the compensation frame number N is 2, a target value ΔT2 = 0.3 is taken from the section (0, 0.5) and a target value ΔT3 = 0.7 from the section (0.5, 1).
S3023: and respectively adding the products by taking the receiving time of the first frame of real image in the two adjacent real images as a reference to obtain the corresponding prediction time of each frame of image to be compensated.
For example, assuming that the time when the VR device receives the first real image of two adjacent real images is T0, the calculation formula of the predicted time is:
Ti′ = T0 + deltaT × ΔTi,  i = 1, …, N    (Formula 1)
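The three steps above can be sketched as follows. Taking the midpoint of each sub-interval is one valid way to pick the target values; the source allows any value inside each section (e.g. 0.3 and 0.7 for N = 2):

```python
def prediction_times(t0, delta_t, n):
    """Split the value interval (0, 1) into n equal sections, take the
    midpoint of each as the target value dT_i, and apply Formula 1:
    Ti' = T0 + deltaT * dT_i."""
    return [t0 + delta_t * ((i + 0.5) / n) for i in range(n)]

# Two compensation frames over a 30 ms interval starting at T0 = 100 ms.
times = prediction_times(100.0, 30.0, 2)  # -> [107.5, 122.5]
```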
S303: and determining one compensation pose data according to the historical pose data set for each prediction time, wherein the compensation pose data are positioned between the pose data corresponding to the two adjacent real images.
During specific implementation, the historical pose data in the historical pose data set and the sending time of the historical pose data are fitted to obtain an objective function, and one compensation pose data corresponding to each prediction time is determined according to the objective function.
In the embodiment of the application, when the common terminal sends the real images to the VR device, it also sends the pose data corresponding to those real images. Since each prediction time lies between two adjacent frames of real images, each piece of compensation pose data determined from the historical pose data set lies between the pose data corresponding to the two adjacent frames of real images.
It should be noted that, the embodiment of the present application does not set a limitation on the fitting algorithm, for example, the fitting may be performed by using a least square method, and the fitting may also be performed by using a gradient descent method.
In an alternative embodiment, to improve the fitting efficiency and accuracy, the VR device may fit the historical pose data between two adjacent real images in the historical pose data set.
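As a sketch of the fitting step (a straight-line objective function is an assumption for illustration; the source leaves the fitting algorithm open, naming least squares and gradient descent as options), the pose history between two real images can be fitted by ordinary least squares and evaluated at a prediction time:

```python
def fit_pose_curve(times, poses):
    """Ordinary least-squares fit of pose = a*t + b over the historical
    pose data between two adjacent real images; returns the objective
    function mapping a prediction time to a compensation pose."""
    n = len(times)
    mean_t = sum(times) / n
    mean_p = sum(poses) / n
    a = (sum((t - mean_t) * (p - mean_p) for t, p in zip(times, poses))
         / sum((t - mean_t) ** 2 for t in times))
    b = mean_p - a * mean_t
    return lambda t: a * t + b

# Pose history sampled every 10 ms while the head turns steadily.
objective = fit_pose_curve([0.0, 10.0, 20.0, 30.0], [0.0, 2.0, 4.0, 6.0])
compensation_pose = objective(15.0)  # lies between the poses at 10 and 20 ms
```

A higher-order polynomial or quaternion interpolation would be needed for fast, non-linear head motion; the linear fit is only the simplest instance.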
S304: and predicting and displaying the corresponding compensation image according to the currently displayed real image and each compensation pose data.
In the embodiment of the application, the time interval can only be calculated after two adjacent real images have been received, so no frame compensation is performed between those first two real images; starting from receipt of the second real image, the corresponding compensation images are predicted and displayed according to the currently displayed real image and each piece of compensation pose data. Because each piece of compensation pose data is a point on the motion trajectory between the previous and the next frame of real image, and the compensation pose data are ordered in time, each predicted compensation frame conforms to the motion trend between the two frames of real images and causes no picture jump.
S305: and directly displaying when the next real image frame is received.
For example, suppose the currently displayed image is the second frame of real image and two compensation frames, recorded as compensation image 1 and compensation image 2, have been predicted. If the third frame of real image has not been received when the first refresh time arrives, compensation image 1 is displayed; if the third frame has been received by the second refresh time, it is displayed in place of compensation image 2.
For another example, suppose again that the second frame of real image is displayed and compensation images 1 and 2 have been predicted. If the third frame of real image has not arrived by the first refresh time, compensation image 1 is displayed; if it has still not arrived by the second refresh time, compensation image 2 is displayed; and when the third refresh time arrives, the third frame of real image is displayed directly.
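The two scenarios above reduce to one rule per refresh: show the next real image if it has arrived, otherwise fall back to the next predicted compensation frame. A sketch (frame labels hypothetical):

```python
def display_at_refresh(comp_frames, real_frame=None):
    """At a screen refresh, the real image takes priority; any unused
    compensation frames are discarded once it arrives. With no real
    image available, the next compensation frame in time order is shown."""
    if real_frame is not None:
        comp_frames.clear()  # later compensation frames are now stale
        return real_frame
    return comp_frames.pop(0) if comp_frames else None

# Scenario 1: the third real image arrives by the second refresh.
queue = ["comp1", "comp2"]
shown = [display_at_refresh(queue),            # -> "comp1"
         display_at_refresh(queue, "real3")]   # -> "real3"; "comp2" dropped
```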
Taking compensation of one frame as an example, fig. 5 schematically shows the process of the image streaming display method provided by the embodiment of the present application. The VR device records the pose sent to the common terminal at each moment and stores it locally to obtain the historical pose data set. At time T0, the VR device sends pose POS_0 to the common terminal; the common terminal controls its displayed picture according to POS_0 to obtain picture A and transmits picture A, together with its corresponding pose POS_0, back to the VR device over the network. The VR device receives picture A at time T1; no frame compensation is needed, and picture A is rendered directly on the two display screens for stereoscopic display, the display effect being picture ①. At the same time, the VR device sends the pose POS_1 at time T1 to the common terminal. At time T2, the VR device sends POS_2 to the common terminal; because of network delay, it has not yet received picture B corresponding to pose POS_1 and needs to perform frame compensation. At time T2, the pose of the currently displayed picture ① is POS_0, whose sending time is T0. Taking T0 as the reference, the prediction time T1′ of the picture to be compensated is determined, where T1′ lies between T0 and T1. Substituting T1′ into the objective function fitted from the historical pose data set yields the pose POS_1′, which lies after POS_0 and before POS_1. A frame of compensation picture ② is then predicted from the currently displayed picture ① and the determined pose POS_1′ and displayed at time T2.
At time T3, the VR device sends pose POS_3 to the common terminal; after receiving picture B, which the common terminal controlled according to pose POS_1, the VR device renders picture B on the two display screens for stereoscopic display, the display effect being picture ③.
As shown in fig. 5, compensation picture ② is predicted from a compensation pose after T0 and before T2. Therefore, when the real picture corresponding to the pose POS_1 sent at time T1 is received, picture ② lies between picture ① and picture ③ on the actual motion trajectory; no picture jump as shown in fig. 2 occurs, and no discomfort is caused to the user's head.
In the embodiment of the application, during streaming display, the VR device sends pose data to the common terminal and builds a historical pose data set from the sent pose data. The common terminal returns to the VR device the image controlled according to that pose data, together with the pose data corresponding to the image. The VR device determines the prediction time corresponding to each frame of the image to be compensated according to the time interval between the two adjacent received real images and the preset number of compensation frames, fits an objective function from the historical pose data between the two display moments, substitutes each prediction time into the objective function to obtain the corresponding compensation pose, predicts and displays the corresponding compensation image in combination with the currently displayed real image, and directly displays the next frame of real image when it is received. Because each prediction time lies between the two adjacent displayed real images, the compensation pose data determined from the historical pose data set lie between the pose data corresponding to the two real images; the compensation images between two adjacent real images therefore conform to the actual motion trajectory and cause no picture jump, effectively resolving the picture-jitter phenomenon in streaming display and improving user experience.
In some embodiments, when the encoding frame rate of the common terminal is N times the decoding frame rate of the VR device, where N is an integer greater than or equal to 2, the VR device may directly select from the historical pose data set the historical pose data for which no corresponding real image exists, and use them as the compensation pose data for the corresponding prediction times.
For example, assume that the frame rate of the target video displayed by the common terminal is 60 frames per second and that the encoding frame rate of the common terminal equals the frame rate of the target video, i.e., the VR device sends at least 60 pieces of pose data to the common terminal per second, while the decoding frame rate of the VR device is 30 frames per second, lower than the encoding frame rate of the common terminal. To ensure that the VR device always displays the latest image frame, half of the 60 frames encoded by the common terminal must be dropped. Ideally, frame dropping can be understood as discarding one frame of image for every other piece of pose data. Thus, if pose data 0, 2, 4, 6, … have corresponding image frames and pose data 1, 3, 5, … do not, the VR device can, when one frame needs to be compensated, directly take poses 1, 3, 5 from the historical pose data set as the compensation pose data of the compensation images.
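Under the idealized even/odd split described above, selecting the poses that never receive a real image back is a simple filter (a sketch; real frame dropping depends on network timing rather than strict alternation):

```python
def unmatched_poses(pose_history):
    """When the terminal encodes at twice the VR device's decoding rate,
    every other pose has no corresponding real image; those odd-indexed
    poses can serve directly as compensation pose data."""
    return [pose for i, pose in enumerate(pose_history) if i % 2 == 1]

# Poses 0, 2, 4, 6 have real images; 1, 3, 5 are free for compensation.
free = unmatched_poses([0, 1, 2, 3, 4, 5, 6])  # -> [1, 3, 5]
```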
In practical application, with this method of performing frame compensation using the historical pose data for which no image was actually generated, every pose used for compensation precedes the pose data corresponding to the next real image, so the image displayed by the VR device is compensated smoothly during streaming display.
It should be noted that during streaming display there is a certain delay between the image shown on the VR device and the image shown on the common terminal. However, because the compensation frames are generated from pose data lying between two real frames, the rendering frame rate of the VR device can be raised without degrading the VR experience, provided the delay between the two ends is small.
The image streaming display method provided by the embodiments of this application can be applied to various virtual-reality scenarios; the displayed images may come from, for example, a game developed in Unity, an instructional video, or a film or television program.
Taking a game scenario as an example, the image streaming display method provided by the embodiments of this application is described below in terms of the interaction between a television and VR glasses. Referring to fig. 6, the method mainly includes the following steps:
s601: the VR glasses establish a connection with the television.
In an alternative embodiment, the VR glasses and the television may be wired via the communication interface.
S602: the television starts the target game.
Wherein, the game interface of the target game can be controlled by the VR glasses.
S603: and the VR glasses send the pose data to the television in real time and store a local historical pose data set.
The user changes the pose of the VR glasses through head movement. And the VR glasses measure the pose data of the head of the user in real time by utilizing the IMU, send the pose data to the television through a network, and record the sent pose data in a historical pose data set.
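The send-and-record step can be sketched as below. This is a minimal sketch under stated assumptions: the class name, the bounded deque, and the injected `send_fn` are illustrative, and a real device would read poses from its IMU and transmit them over the network:

```python
import collections
import time

class PoseRecorder:
    """Keeps a bounded history of (send_time, pose) pairs while poses
    are streamed to the terminal, so later fitting only sees recent
    motion (hypothetical sketch; bound of 256 samples is an assumption)."""

    def __init__(self, max_len=256):
        self.history = collections.deque(maxlen=max_len)

    def send(self, pose, send_fn, now=None):
        t = time.monotonic() if now is None else now
        send_fn(pose)                   # transmit the pose to the terminal
        self.history.append((t, pose))  # record it for later curve fitting
```

A bounded deque is used so the historical pose data set never grows without limit; old samples far from the current prediction times carry no useful information for the fit.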
S604: and the television controls the game interface of the television according to the frame rate of the target game by using the received pose data.
And after receiving the pose data sent by the VR glasses, the television controls the game interface according to the pose data.
S605: and the television sends the controlled game interface and the pose data corresponding to the game interface to the VR glasses.
During streaming display, the VR glasses do not know which position and posture data is obtained after the game interface is controlled by the sent position and posture data, so that when the television sends the game interface through the OpenVR interface, the position and posture data corresponding to the game interface can be sent to the VR glasses, and the compensation position and posture data determined by the VR glasses are located behind the position and posture data, so that smooth display of the game interface is guaranteed.
S606: and the VR glasses respond to the refreshing request and determine whether a game interface sent by the television is received, if so, S607 is executed, otherwise, S609 is executed.
S607: and the VR glasses directly render the received game interface on two display screens for display.
VR glasses are through rendering the interface of playing on two display screens to realize the 3D stereoeffect.
S608: and the VR glasses determine a time interval according to the received two adjacent game interfaces.
The determination of the time interval is described in S301 and will not be repeated here.
S609: the VR glasses determine whether the time interval is less than at least one refresh interval of the screen, if so, S610 is performed, otherwise, the VR glasses return to S608 to recalculate the time interval.
In order to avoid the influence of network abnormality, when the time interval is used for frame compensation, whether the time interval is smaller than at least one refreshing interval of the screen or not needs to be ensured.
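The guard in S609 can be expressed as a one-line check. The function name, the parameterised refresh-count bound, and the numeric values are illustrative assumptions; the patent only requires that an abnormally long gap between real frames not be used for compensation:

```python
def interval_is_usable(time_interval, refresh_interval, max_refreshes=1):
    """Guard against network stalls: frame compensation is attempted only
    when the gap between two received real frames stays below the allowed
    number of screen refresh intervals (all names are illustrative)."""
    return time_interval < max_refreshes * refresh_interval
```

For a 90 Hz screen (refresh interval about 11 ms), a 20 ms gap between real frames would pass with `max_refreshes=2` but a 50 ms gap, typical of a network hiccup, would be rejected and the interval recomputed.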
S610: and the VR glasses determine at least one compensation pose data according to the time interval, the preset compensation frame number and the historical pose data set, wherein the compensation pose data are positioned between pose data corresponding to two adjacent real images.
The description of this step is referred to S302-S303 and will not be repeated here.
S611: and predicting and displaying the corresponding compensation interface by the VR glasses according to the currently displayed game interface and each compensation pose data.
VR glasses are through rendering the compensation interface on two display screens to realize the 3D stereoeffect.
Based on the same technical concept, the embodiments of the present application provide a VR device that can implement the steps of the image streaming display method in the foregoing embodiments and achieve the same technical effect.
Referring to fig. 7, the VR device includes a processor 701, a memory 702, a communication interface 703, and a display screen 704, where the display screen 704, the communication interface 703, and the memory 702 are connected to the processor 701 through a bus 705:
the memory 702 stores a computer program according to which the processor 701 performs the following operations:
receiving two adjacent real images sent by a common terminal through the communication interface 703 and determining a time interval between the two adjacent real images, where the real images are determined by the common terminal according to pose data sent by the VR device;
determining a prediction time corresponding to each frame of image to be compensated according to the time interval and a preset compensation frame number;
for each prediction time, determining one piece of compensation pose data according to a historical pose data set, where the compensation pose data lies between the pose data corresponding to the two adjacent real images;
predicting a corresponding compensation image according to the real image currently displayed on the display screen 704 and each piece of compensation pose data, and displaying the compensation image on the display screen; and
directly displaying a next frame of real image on the display screen 704 when it is received from the common terminal through the communication interface 703.
Optionally, the processor 701 determines the prediction time corresponding to each frame of image to be compensated according to the time interval and the preset compensation frame number as follows:
equally dividing the value interval (0, 1) into a corresponding number of sub-intervals according to the compensation frame number;
taking one target value from each sub-interval and multiplying it by the time interval; and
adding each product to the receiving time of the first frame of the two adjacent real images to obtain the prediction time corresponding to each frame of image to be compensated.
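The three operations above can be sketched in a few lines. The patent only requires one target value per sub-interval; taking the midpoint of each sub-interval, like the function name itself, is an illustrative assumption:

```python
def prediction_times(t_first_recv, time_interval, n_comp):
    """Split (0, 1) into n_comp equal sub-intervals, take the midpoint of
    each as the target value, scale it by the measured frame interval, and
    offset from the receive time of the first of the two real frames."""
    times = []
    for k in range(n_comp):
        target = (k + 0.5) / n_comp  # midpoint of the k-th sub-interval of (0, 1)
        times.append(t_first_recv + target * time_interval)
    return times
```

Because every target value lies strictly inside (0, 1), each resulting prediction time falls strictly between the receive times of the two adjacent real frames, which is what keeps the compensation poses on the actual motion trajectory.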
Optionally, the processor 701 determines one piece of compensation pose data according to the historical pose data set for each prediction time as follows:
fitting each piece of historical pose data in the historical pose data set against its sending time to obtain an objective function; and
determining the compensation pose data corresponding to the prediction time according to the objective function.
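A linear least-squares fit is one minimal instance of such an objective function. The patent does not specify the model, so the linear form, the per-channel treatment, and all names below are illustrative assumptions:

```python
def fit_linear(times, values):
    """Least-squares line through historical (send_time, pose) samples;
    a linear objective function is the simplest fit, and real head
    trajectories may warrant a higher-order model."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    denom = sum((t - mt) ** 2 for t in times)
    slope = sum((t - mt) * (v - mv) for t, v in zip(times, values)) / denom
    intercept = mv - slope * mt
    return slope, intercept

def compensation_pose(history, predicted_time):
    """history: list of (send_time, pose_component) pairs drawn from the
    historical pose data set; returns the fitted pose component at the
    prediction time. Apply per channel for multi-dimensional poses."""
    slope, intercept = fit_linear([t for t, _ in history],
                                  [v for _, v in history])
    return slope * predicted_time + intercept
```

Because the prediction times lie between the send times of the two real frames, evaluating the fitted function there yields compensation poses that interpolate, rather than extrapolate, the recorded motion.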
Optionally, when an encoding frame rate of the common terminal is N times a decoding frame rate of the VR device, N being an integer greater than or equal to 2, the processor 701 determines one piece of compensation pose data according to the historical pose data set for each prediction time as follows:
selecting, from the historical pose data set, historical pose data for which no corresponding real image exists as the compensation pose data corresponding to the corresponding prediction time.
Optionally, the time interval is less than at least one refresh interval of the VR device screen.
Optionally, the real image is a game picture.
It should be noted that fig. 7 is only an example and shows only the hardware necessary for the VR device to perform the steps of the image streaming display method provided in the embodiments of this application. Beyond what is shown, the VR device also includes hardware common to display devices, such as a speaker, a microphone, and left and right lenses.
The Processor referred to in fig. 7 may be a Central Processing Unit (CPU), a general-purpose processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. A processor may also be a combination of computing devices, e.g., one or more microprocessors, or a DSP together with a microprocessor. The memory may be integrated in the processor or provided separately from it.
Embodiments of the present application also provide a computer-readable storage medium for storing instructions that, when executed, may implement the methods of the foregoing embodiments.
The embodiments of the present application also provide a computer program product storing a computer program for executing the methods of the foregoing embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An image streaming display method applied to a VR device, comprising:
acquiring a time interval between two adjacent real images received from a common terminal, wherein the real images are determined by the common terminal according to pose data sent by the VR device;
determining a prediction time corresponding to each frame of image to be compensated according to the time interval and a preset compensation frame number;
for each prediction time, determining one piece of compensation pose data according to a historical pose data set, wherein the compensation pose data lies between the pose data corresponding to the two adjacent real images;
predicting and displaying a corresponding compensation image according to the currently displayed real image and each piece of compensation pose data; and
directly displaying a next frame of real image sent by the common terminal when it is received.
2. The method of claim 1, wherein determining the prediction time corresponding to each frame of image to be compensated according to the time interval and the preset compensation frame number comprises:
equally dividing the value interval (0, 1) into a corresponding number of sub-intervals according to the compensation frame number;
taking one target value from each sub-interval and multiplying it by the time interval; and
adding each product to the receiving time of the first frame of the two adjacent real images to obtain the prediction time corresponding to each frame of image to be compensated.
3. The method of claim 1, wherein determining one piece of compensation pose data according to the historical pose data set for each prediction time comprises:
fitting each piece of historical pose data in the historical pose data set against its sending time to obtain an objective function; and
determining the compensation pose data corresponding to the prediction time according to the objective function.
4. The method of claim 1, wherein when an encoding frame rate of the common terminal is N times a decoding frame rate of the VR device, N being an integer greater than or equal to 2, determining one piece of compensation pose data according to the historical pose data set for each prediction time comprises:
selecting, from the historical pose data set, historical pose data for which no corresponding real image exists as the compensation pose data corresponding to the corresponding prediction time.
5. The method of any of claims 1-4, wherein the time interval is less than at least one refresh interval of the VR device screen.
6. The method of any one of claims 1-4, wherein the real image is a game scene.
7. A VR device, comprising a processor, a memory, a communication interface, and a display screen, wherein the display screen, the communication interface, and the memory are connected to the processor by a bus;
the memory stores a computer program according to which the processor performs the following operations:
receiving two adjacent real images sent by a common terminal through the communication interface and determining a time interval between the two adjacent real images, wherein the real images are determined by the common terminal according to pose data sent by the VR device;
determining a prediction time corresponding to each frame of image to be compensated according to the time interval and a preset compensation frame number;
for each prediction time, determining one piece of compensation pose data according to a historical pose data set, wherein the compensation pose data lies between the pose data corresponding to the two adjacent real images;
predicting a corresponding compensation image according to the real image currently displayed on the display screen and each piece of compensation pose data, and displaying the compensation image on the display screen; and
directly displaying a next frame of real image on the display screen when it is received from the common terminal through the communication interface.
8. The VR device of claim 7, wherein the processor determines the prediction time corresponding to each frame of image to be compensated according to the time interval and the preset compensation frame number by:
equally dividing the value interval (0, 1) into a corresponding number of sub-intervals according to the compensation frame number;
taking one target value from each sub-interval and multiplying it by the time interval; and
adding each product to the receiving time of the first frame of the two adjacent real images to obtain the prediction time corresponding to each frame of image to be compensated.
9. The VR device of claim 7, wherein the processor determines one piece of compensation pose data according to the historical pose data set for each prediction time by:
fitting each piece of historical pose data in the historical pose data set against its sending time to obtain an objective function; and
determining the compensation pose data corresponding to the prediction time according to the objective function.
10. The VR device of claim 7, wherein when an encoding frame rate of the common terminal is N times a decoding frame rate of the VR device, N being an integer greater than or equal to 2, the processor determines one piece of compensation pose data according to the historical pose data set for each prediction time by:
selecting, from the historical pose data set, historical pose data for which no corresponding real image exists as the compensation pose data corresponding to the corresponding prediction time.
CN202210226329.4A 2022-03-09 2022-03-09 Image streaming display method and device Pending CN114637398A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210226329.4A CN114637398A (en) 2022-03-09 2022-03-09 Image streaming display method and device


Publications (1)

Publication Number Publication Date
CN114637398A true CN114637398A (en) 2022-06-17

Family

ID=81947888



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination