CN111627116B - Image rendering control method and device and server

Image rendering control method and device and server

Info

Publication number: CN111627116B (application CN202010473505.5A)
Authority: CN (China)
Prior art keywords: rendering, image, view angle, angle information, frame
Legal status: Active (granted)
Application number: CN202010473505.5A
Other languages: Chinese (zh)
Other versions: CN111627116A
Inventors: 毛世杰, 盛兴东, 刘云辉
Current assignee: Lenovo Beijing Ltd
Original assignee: Lenovo Beijing Ltd
Events:
Application filed by Lenovo Beijing Ltd
Priority to CN202010473505.5A
Publication of CN111627116A
Application granted
Publication of CN111627116B
Status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

According to the image rendering control method, device, and server provided by the present application, during the process of obtaining the rendering image of the first historical frame, the second rendering view angle information of the first historical frame can be used to predict the pre-rendering view angle information of future frames, and corresponding pre-rendering images are generated and stored accordingly. After the first rendering view angle information of the current frame of an object to be rendered is obtained, the first pre-rendering image matching the first rendering view angle information can be detected directly among the pre-stored pre-rendering images, and the target rendering image of the current frame can be obtained directly, quickly, and accurately from the first pre-rendering image without any rendering operation. This greatly shortens the interval between rendering the current frame and the previous frame, that is, it reduces the image-rendering delay between adjacent frames, improves image rendering efficiency, and further improves the user's experience of virtual reality scenes.

Description

Image rendering control method and device and server
Technical Field
The present application relates generally to the field of image processing technologies, and in particular, to a method and apparatus for controlling image rendering, and a server.
Background
Nowadays, with the development and popularization of 5G networks (fifth-generation mobile communication networks), AR (Augmented Reality)/VR (Virtual Reality) device applications can use cloud rendering technology to perform image rendering on complex models, so that the device wearer can see the detailed information of a complex model in real time, improving user experience.
In cloud rendering technology, a 3D program is placed on a remote server for rendering: the user initiates a control instruction through a terminal, the server responds to the control instruction by executing the corresponding rendering task, and the resulting rendered picture is fed back to the user terminal for display.
However, in the existing image rendering control process based on cloud rendering technology, the server usually finishes rendering one frame of image with the model data and transmits the obtained rendering image to the user terminal before obtaining the rendering view angle information of the next frame and continuing image rendering. This causes a certain delay in the rendering of each frame of image, so rendering the whole model takes longer, the user's waiting time is too long, and the experience is worse.
Disclosure of Invention
In view of this, in order to reduce the rendering delay between adjacent frame images and improve model rendering efficiency, the present application provides the following technical solutions:
In one aspect, the present application proposes an image rendering control method, including:
acquiring first rendering view angle information of a current frame of an object to be rendered;
detecting that a first pre-rendering image matching the first rendering view angle information exists, wherein the first pre-rendering image is obtained by performing image rendering with the model data of the object to be rendered according to first pre-rendering view angle information, and the first pre-rendering view angle information is obtained according to second rendering view angle information acquired in a first historical frame;
and obtaining a target rendering image of the current frame according to the first pre-rendering image.
Optionally, the method further comprises:
detecting that no first pre-rendering image matching the first rendering view angle information exists, and performing image rendering with the model data according to the first rendering view angle information to obtain a target rendering image of the current frame;
before outputting a target rendering image of the current frame, obtaining second prerendering view angle information of at least one future frame according to the first rendering view angle information;
performing image rendering by using the model data according to the second prerendering view angle information of the at least one future frame to obtain a second prerendering image of the corresponding future frame;
And storing second prerendered view angle information of the at least one future frame in association with the corresponding second prerendered image.
Optionally, the detecting that a first pre-rendering image matching the first rendering view angle information exists includes:
invoking second rendering view angle information acquired in the first historical frame to obtain a plurality of prerendering view angle information;
comparing the first rendering view angle information with the retrieved pre-rendering view angle information;
determining prerendered view angle information of which the comparison result meets the condition as first prerendered view angle information;
and acquiring the first prerendered image stored in association with the first prerendered view angle information.
Optionally, the rendering view angle information is a multi-degree-of-freedom rendering view angle, and the obtaining, according to the first rendering view angle information, second prerendering view angle information of at least one future frame includes:
utilizing the multi-degree-of-freedom rendering view angles of a plurality of historical frames to obtain a multi-degree-of-freedom rendering motion direction between frames;
acquiring a multidimensional spherical space range formed by a first multi-degree-of-freedom rendering view angle of the current frame;
and, based on the multi-degree-of-freedom rendering motion direction, performing data sampling in the multidimensional spherical space range, and determining at least one multi-degree-of-freedom prerendering view angle as a second multi-degree-of-freedom rendering view angle of at least one future frame.
Optionally, the rendering view angle information is a multi-degree-of-freedom rendering view angle, and the obtaining, according to the first rendering view angle information, second prerendering view angle information of at least one future frame includes:
predicting a multidimensional spherical space range of the multi-degree-of-freedom rendering view angle of the next future frame based on the first multi-degree-of-freedom rendering view angle of the current frame;
and performing data sampling in the multidimensional spherical space range to obtain a second multi-degree-of-freedom prerendering view angle of at least one future frame.
Optionally, the performing data sampling in the multi-dimensional spherical space range to obtain a second multi-degree-of-freedom prerendering view angle of at least one future frame includes:
acquiring schedulable resource information of a server;
based on the schedulable resource information, performing data sampling in the multi-dimensional spherical space range to obtain a second multi-degree-of-freedom prerendering view angle of at least one future frame;
wherein the number of second multi-degree-of-freedom prerendering view angles can change as the schedulable resource information changes.
Optionally, the obtaining the target rendering image of the current frame according to the first pre-rendering image includes:
and if the first pre-rendering view angle information differs from the first rendering view angle information, correcting the first pre-rendering image according to the first pre-rendering view angle information to obtain a target rendering image of the current frame.
Optionally:
if the first pre-rendering view angle information differs from the first rendering view angle information, the first pre-rendering view angle information and the target rendering image are sent to the electronic device, so that the electronic device can correct the target rendering image according to the first pre-rendering view angle information.
In still another aspect, the present application further proposes an image rendering control apparatus, including:
the rendering view angle information acquisition module is used for acquiring first rendering view angle information of a current frame of an object to be rendered;
the detection module is used for detecting that a first pre-rendering image matching the first rendering view angle information exists, wherein the first pre-rendering image is obtained by performing image rendering with the model data of the object to be rendered according to first pre-rendering view angle information, and the first pre-rendering view angle information is obtained according to second rendering view angle information acquired in a first historical frame;
and the target rendering image obtaining module is used for obtaining a target rendering image of the current frame according to the first pre-rendering image.
In yet another aspect, the present application further proposes a server, including:
A communication interface;
a memory for storing a program for implementing the image rendering control method as described above;
and the processor is used for calling and loading the program of the memory so as to realize the steps of the image rendering control method.
Therefore, with the image rendering control method, device, and server provided by the present application, the second rendering view angle information of the first historical frame can be used to predict the pre-rendering view angle information of future frames, corresponding pre-rendering images can be generated according to that pre-rendering view angle information, and the pre-rendering images are then stored for later use.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. The drawings described below are merely embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an alternative system for implementing the image rendering control method of the present application;
FIG. 2 illustrates a flow chart of a cloud rendering application scenario;
FIG. 3 is a flow chart illustrating an alternative example of an image rendering control method as proposed herein;
FIG. 4 is a flow chart illustrating yet another alternative example of an image rendering control method as set forth herein;
FIG. 5 shows a flow diagram of yet another alternative example of an image rendering control method proposed herein;
FIG. 6 shows a flow diagram of yet another alternative example of an image rendering control method proposed herein;
FIG. 7 is a flow chart illustrating yet another alternative example of an image rendering control method as set forth herein;
FIG. 8 is a schematic diagram of the multidimensional spherical space of a multi-degree-of-freedom rendering view angle in the image rendering control method of the present application;
fig. 9 is a schematic structural view showing an alternative example of the image rendering control apparatus proposed in the present application;
fig. 10 is a schematic structural view showing still another alternative example of the image rendering control apparatus proposed in the present application;
fig. 11 is a schematic diagram showing a hardware configuration of an alternative example of a server implementing the image rendering control method proposed in the present application.
Detailed Description
The following describes the technical solutions in the embodiments of the present application clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the disclosed embodiments without inventive effort fall within the scope of protection of the present application.
For convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments in this application, and features of embodiments, may be combined with each other without conflict. Moreover, as used in this application, the terms "system," "apparatus," "unit," and/or "module" merely distinguish different components, elements, parts, portions, or assemblies at different levels, and may be replaced by other expressions that achieve the same purpose.
As used in this application and in the claims, the singular forms "a," "an," and "the" may include plural referents unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements. An element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes it.
In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more. The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features.
Additionally, flowcharts are used in this application to describe the operations performed by systems according to embodiments of the present application. It should be appreciated that these operations are not necessarily performed precisely in order; rather, steps may be processed in reverse order or simultaneously, and other operations may be added to or removed from these processes.
Referring to fig. 1, a schematic structural diagram of an alternative system suitable for the image rendering control method proposed in the present application is shown. The system may serve various cloud rendering application scenarios and, as shown in fig. 1, may include at least one electronic device 100 and at least one server 200, where:
the electronic device 100 may be configured to obtain rendering perspective information corresponding to each of consecutive frames of an object to be rendered, when the electronic device is worn by a user.
In this embodiment, the electronic device 100 may include various types of AR (Augmented Reality) devices and VR (Virtual Reality) devices, such as VR smart glasses, helmets, and handles; the specific device type of the electronic device 100 is not limited in this application. Different users may determine the electronic device used for viewing the object to be rendered in the virtual scene according to actual needs, usage habits, and the like, which is not described in detail here.
The server 200 may be a service device supporting cloud computing and implementing cloud rendering, that is, a cloud server deployed in the cloud; it may specifically be one or more servers, and the specific composition of the server 200 is not limited in this application. The server 200 may execute the image rendering control method and apparatus provided in the embodiments of the present application; for the specific implementation process, refer to the descriptions of the corresponding parts of the following embodiments.
In practical applications, to improve the interactive experience and the control of the virtual world in a cloud rendering application scenario (see the flowchart of the cloud rendering application scenario shown in fig. 2), the electronic device 100 generally collects rendering view angle information at different moments and sends it to the server 200. According to this rendering view angle information, the server 200 performs image rendering with the model data to obtain a rendering image of the model of the object to be rendered under that rendering view angle, compresses it, and feeds it back to the electronic device 100; the electronic device 100 then decompresses the compressed rendering image and displays it.
The rendering view angle information collected by the electronic device 100 at different moments may be a multi-degree-of-freedom (degree of freedom, dof) spatial pose, such as a 3dof spatial pose or a 6dof spatial pose. 3dof refers to the 3 rotational degrees of freedom, such as the head of the user wearing the electronic device rotating in different directions; it cannot detect forward/backward or left/right spatial displacement of the head, and is suitable for application scenarios such as watching VR movies. 6dof adds, on top of 3dof, the up/down, forward/backward, and left/right movements caused by the motion of the user's body, so that tracking and positioning of the user can be realized better; for example, in a game application scenario, the user of the electronic device can interact in scenes such as crossing obstacles, dodging monsters, and climbing mountains, obtaining a more real and immersive experience.
Based on this, the electronic device 100 may include various types of sensors for sensing the spatial pose of the electronic device 100 in its various degrees of freedom, so as to form the rendering view angle information of the corresponding frame; the specific detection process of the rendering view angle information is not described in detail.
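Purely for illustration, such multi-degree-of-freedom spatial pose data might be carried in a structure like the following sketch; the field names and the Python representation are assumptions of this description, not prescribed by the patent.

```python
# Illustrative sketch only: a container for the "rendering view angle
# information" (multi-degree-of-freedom spatial pose). Field names are assumed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Pose6Dof:
    yaw: float      # 3dof rotational part (radians): head orientation
    pitch: float
    roll: float
    x: float = 0.0  # additional 3dof translational part: head position
    y: float = 0.0
    z: float = 0.0

    def rotation(self):
        """Return only the 3dof rotational component."""
        return (self.yaw, self.pitch, self.roll)
```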
In the prior art described in the background, the rendering view angle information of the next frame is acquired, and rendering of the next frame of image is completed, only after a rendered frame of image has been transmitted to the electronic device; rendering the entire three-dimensional model of the object to be rendered therefore takes a long time, and the user's experience of the cloud rendering application scenario is poor. To improve model rendering efficiency, the present application uses pre-rendering in the cloud rendering process: while a given frame of image is being rendered, the pre-rendering view angle information of future frames is predicted directly, and image rendering is performed with the model data according to that pre-rendering view angle information to obtain pre-rendering images of the future frames. When a future frame arrives and the real rendering view angle information of the electronic device for that frame is obtained, its pre-rendering image can be used directly to obtain the target rendering image, without spending time rendering it. This greatly shortens the rendering time for obtaining the target rendering image of the future frame, reduces the image-rendering delay between adjacent frames, improves the rendering efficiency of the whole three-dimensional model, and improves the user's experience of cloud rendering application scenarios. For a specific implementation, refer to the description of the corresponding parts of the embodiments below.
Referring to fig. 3, a flowchart of an alternative example of an image rendering control method according to an embodiment of the present application is shown. The method may be applied to the server described above and, as shown in fig. 3, may include, but is not limited to, the following steps:
step S11, obtaining first rendering view angle information of a current frame of an object to be rendered;
In line with the description of the cloud rendering application scenario above, a user interacts with the server through an electronic device (such as the AR or VR device mentioned above) or another terminal to determine the object to be rendered, such as an entire virtual scene or a particular virtual object contained in it, so that the server obtains the model data of the object to be rendered. While the user wears the electronic device and interacts with the virtual scene to be rendered, the sensors in the electronic device can collect the first rendering view angle information, such as multi-degree-of-freedom spatial pose data (a 3dof spatial pose, a 6dof spatial pose, and the like), and upload it to the server in real time. The server thereby obtains the first rendering view angle information of the current frame of the object to be rendered and can determine which rendering view of the object to be rendered needs to be presented to the user.
It can be seen that, in step S11, the server may directly or indirectly receive, through a wireless communication network such as a 5G (fifth-generation mobile communication)/6G (sixth-generation mobile communication) network or a Wi-Fi (wireless local area network) network, the first rendering view angle information of the current frame collected and transmitted by the electronic device while the user wears it.
It should be noted that the collection of the first rendering view angle information is not limited to the various types of sensors configured on the electronic device; other collection devices deployed in the space where the user is located, such as image collection devices like cameras, smart wristbands, and the like, may also be combined to collect the multi-degree-of-freedom spatial pose data of the user wearing the electronic device, so as to determine the first rendering view angle information required for the current frame of the object to be rendered.
To ensure the real-time performance and reliability of the interaction between the user and the virtual scene, and because the viewing angle in the virtual scene keeps changing once the user wears the electronic device (that is, whenever the body or at least one part of it moves, such as the head, torso, eyes, or hands), the rendering view angle information of different frames must be collected continuously, and the image rendering of the corresponding frames completed according to the steps described below.
A frame is usually the smallest unit of a moving picture: one frame is a still image, and consecutive frames form an animation, such as a dynamic game scene or a playing VR movie. The frame rate is the number of frames of picture transmitted in 1 second, which can be understood as the number of times the graphics processor refreshes per second. The number of frames required to complete the rendering of the three-dimensional model of the object to be rendered can therefore be determined according to the actual display requirements of the virtual scene (which may include the object to be rendered); this is not limited in this application.
Step S12, detecting that a first pre-rendering image matching the first rendering view angle information exists;
In line with the description of the inventive concept, the present application pre-renders the image of the current frame during the image rendering of a certain historical frame of the current frame (which may be recorded as the first historical frame). That is, after the second rendering view angle information of the first historical frame is obtained, the first pre-rendering view angle information of the current frame is obtained from it, image rendering is performed with the model data of the object to be rendered according to that first pre-rendering view angle information, and the resulting first pre-rendering image of the current frame is stored.
In this way, after the first rendering view angle information actually collected for the current frame is obtained, it can be detected whether a first pre-rendering image matching it exists among the pre-stored pre-rendering images. If so, the target rendering image of the current frame can be obtained quickly and accurately in the manner described in the following steps, without performing any image rendering processing; if not, image rendering is performed with the model data of the object to be rendered according to the first rendering view angle information to obtain the target rendering image of the current frame.
In practical applications of this embodiment, in line with the inventive concept of this application, if no first pre-rendering image currently exists, it may be that the previous frame of the current frame was the last frame within the virtual reality pre-rendered frame count. In this case, the first rendering view angle information can be used to continue acquiring pre-rendering view angle information of future frames and to render the pre-rendering images of the corresponding future frames; for the specific implementation process, refer to the description of the corresponding embodiment below.
The virtual reality pre-rendered frame count may refer to a maximum number of pre-rendered frames, that is, the greatest number of frames whose rendering images the server keeps stored ahead of display after finishing rendering them; this improves frame-rate stability, and the value of the maximum pre-rendered frame count is not limited here. Pre-rendering keeps the presented picture consistent with the user's head rotation, reducing to a certain extent the dizziness and discomfort of wearing a head-mounted electronic device; adopting pre-rendering for the motion of the user's other body parts can likewise improve the user's sense of immersion in the virtual reality scene.
Step S13, obtaining a target rendering image of the current frame according to the first pre-rendering image.
In general, a certain difference exists between the first pre-rendering view angle information of the current frame, obtained in advance, and the first rendering view angle information actually collected. As a result, the first pre-rendering image obtained according to the first pre-rendering view angle information differs from the rendering image actually required for the current frame, and the first pre-rendering image needs to be calibrated to obtain the target rendering image of the current frame.
It should be understood that directly calling the first pre-rendering image and calibrating it takes far less time than traditionally rendering the target rendering image of the current frame according to the first rendering view angle information. The process of acquiring the target rendering image of the current frame provided by this embodiment therefore improves the efficiency of acquiring rendering images, and, compared with the existing pre-rendering approach, the calibration ensures the reliability and accuracy of the acquired rendering image.
In some embodiments, if the first pre-rendering view angle information of the current frame is identical to the actually collected first rendering view angle information, no calibration of the first pre-rendering image is needed, and the first pre-rendering image can be directly determined as the target rendering image of the current frame.
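The dispatch logic of steps S11 to S13, with the fallback of step S24, could be sketched as follows. find_matching_prerender is sketched further below under steps S33 and S34; render_from_model and correct_with_atw stand in for the server's rendering and correction routines, and all helper names are assumptions, not the patent's terminology.

```python
# Sketch of steps S11-S13 with the step-S24 fallback. Helper names are assumed.
def get_target_image(first_view_info, prerender_store, model_data):
    match = find_matching_prerender(first_view_info, prerender_store)
    if match is None:
        # No stored pre-rendering image matches: fall back to a full render (S24).
        return render_from_model(model_data, first_view_info)
    prerender_view_info, prerender_image = match
    if prerender_view_info == first_view_info:
        # Predicted and actual view angle information coincide: use the image as-is.
        return prerender_image
    # Otherwise correct the pre-rendering image for the small view-angle difference.
    return correct_with_atw(prerender_image, prerender_view_info, first_view_info)
```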
In summary, during the period of obtaining the rendering image of the first historical frame, after obtaining the second rendering view angle information of the first historical frame, the server predicts the pre-rendering view angle information of future frames (which, relative to the first historical frame, may include the current frame) according to that second rendering view angle information, generates the corresponding pre-rendering images, and stores them, so that the target rendering image of the current frame can later be obtained directly from a matching stored pre-rendering image.
Referring to fig. 4, a flowchart of still another alternative example of the image rendering control method according to the embodiment of the present application may be applied to a server, and as shown in fig. 4, the image rendering control method according to the embodiment may include:
Step S21, obtaining first rendering view angle information of a current frame of an object to be rendered;
step S22, detecting whether a first pre-rendered image matched with the first rendering angle information exists, if so, proceeding to step S23, and if not, executing step S24;
step S23, obtaining a target rendering image of the current frame according to the first pre-rendering image;
regarding the process of acquiring the target rendered image of the current frame in the case of pre-storing the first pre-rendered image, reference may be made to the description of the corresponding parts of the above embodiments, which are not repeated.
Step S24, performing image rendering by using model data of an object to be rendered according to the first rendering view angle information to obtain a target rendering image of the current frame;
It should be understood that the data to be rendered for the three-dimensional model of the object to be rendered differs between rendering view angles, the data to be rendered being the model data required for rendering the three-dimensional model picture under the corresponding rendering view angle; the corresponding rendering images obtained after cloud rendering therefore also differ to some extent. The rendering means used to obtain the rendering images under different rendering view angles are nevertheless similar: the three-dimensional model data generally needs to be projected onto a two-dimensional plane to obtain the corresponding two-dimensional projection point data, so as to form the rendering image on that plane.
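As a generic illustration of that projection step (the patent prescribes no particular camera model), a pinhole projection of a model point onto the image plane might look like this:

```python
# Generic pinhole projection of a 3D model point onto the 2D image plane.
# Only an illustration of the projection step; the camera model is assumed.
import numpy as np

def project_point(point_3d, focal_length=1.0):
    x, y, z = point_3d
    if z <= 0:
        return None  # behind the camera, not visible on this plane
    return np.array([focal_length * x / z, focal_length * y / z])
```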
Step S25, before outputting the target rendering image of the current frame, obtaining second prerendering view angle information of at least one future frame according to the first rendering view angle information;
In line with the description of the inventive concept of the present application, if no first pre-rendering image of the current frame is detected, it may be that the pre-rendering images obtained from the pre-rendering view angle information acquired during the image rendering of the historical frame have all already been called; that is, the number of frames pre-rendered in advance in the historical frame is no greater than the frame-number difference between the current frame and that historical frame. This means that no pre-rendering images of future frames currently exist, and pre-rendering needs to continue.
Of course, in practical applications, it is not necessary to wait until every pre-rendering image of the subsequent frames has been called before acquiring new ones; the pre-rendering images of future frames may also be predicted again, directly in the manner described in this embodiment, after a number of pre-rendering images have been called.
Based on the above analysis, after the first rendering view angle information of the current frame is obtained, the second pre-rendering view angle information of at least one future frame can be predicted directly. The prediction of the pre-rendering view angle information may be realized according to a kinematic model, the resources the server can allocate, and so on; the specific prediction method is not limited here.
As described in step S25, this embodiment may begin to predict the second pre-rendering view angle information of the future frame before transmitting the target rendering image of the current frame to the electronic device. Specifically, after the first rendering view angle information of the current frame is acquired, the second pre-rendering view angle information of the future frame may be predicted while image rendering is performed in the manner above, or while the rendering image of the current frame is being compressed, so as to reduce the image-rendering delay between adjacent frames.
Step S26, performing image rendering by using the model data according to the second prerendering view angle information of at least one future frame to obtain a second prerendering image of the corresponding future frame;
Regarding how to perform image rendering with the model data according to the second pre-rendering view angle information to obtain the corresponding pre-rendering image, the process is similar to the implementation of step S24 and is not described in detail in the present application.
Step S27, storing the second prerendered viewing angle information of at least one future frame in association with the corresponding second prerendered image.
The specific manner of storing the predicted second pre-rendering view angle information of the at least one future frame together with the second pre-rendering image of the corresponding frame is not limited; a storage table, for example, may be used.
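As one possible shape for such a storage table, the association of step S27 might simply keep (view angle information, image) pairs; this structure is an assumption for illustration only.

```python
# Minimal sketch of step S27: store each predicted second pre-rendering view
# angle in association with its second pre-rendering image. Structure assumed.
class PrerenderStore:
    def __init__(self):
        self._entries = []  # list of (view_info, image) pairs

    def put(self, view_info, image):
        self._entries.append((view_info, image))

    def entries(self):
        return list(self._entries)
```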
In line with the description of the above embodiment, when a certain future frame is reached, that is, when that future frame becomes the current frame, the rendering view angle information actually collected in that frame can be used, in the manner described above, to call, from the stored second pre-rendering images corresponding to the pieces of second pre-rendering view angle information, the second pre-rendering image whose second pre-rendering view angle information matches it. The target rendering image of that future frame is determined accordingly, and no image rendering operation is required in that frame; for the specific implementation process, refer to the description of the corresponding part of the above embodiment.
Specifically, referring to the flowchart of the image rendering control method shown in fig. 5, and taking the current frame as the k-th frame as an example: in the prior art before improvement (the flow shown above the improvement arrow in fig. 5), every frame needs to perform an image rendering operation, so every frame is delayed for a longer time relative to the previous frame. In the improved scheme, after receiving the rendering view angle information of the k-th frame, the server can predict the pre-rendering images of future frames in the manner described above. Taking the pre-rendering image of the (k+1)-th frame as an example, the rendering view angle information of the k-th frame can be used to predict the pre-rendering view angle information of the (k+1)-th frame, and the pre-rendering image of the (k+1)-th frame is obtained by a rendering operation during the processing of the k-th frame's rendering image, without affecting the processing of the (k+1)-th frame's image. After the corresponding rendering view angle information of the (k+1)-th frame is obtained, the pre-rendering image obtained during the k-th frame is called directly, no time is spent on rendering, and the delay of the (k+1)-th frame's rendering image is shortened.
It should be noted that, during the rendering of the k-th frame image, pre-rendering is not limited to the (k+1)-th frame image; pre-rendering images corresponding to each of the next i frames may be produced as required. The specific implementation is similar to the above process of obtaining the (k+1)-th frame's pre-rendering image and is not described in detail in this application.
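The overlap described above, pre-rendering frame k+1 while frame k is still being processed, could be sketched with a worker thread as below. predict_next_view and render_from_model are hypothetical helpers, and a real server could equally overlap the pre-rendering with compression and transmission of the frame-k image.

```python
# Sketch of pipelining: pre-render frame k+1 in parallel with frame-k work.
import threading

def handle_frame_k(view_info_k, model_data, store):
    predicted_view = predict_next_view(view_info_k)  # pre-rendering view of k+1

    def prerender_future():
        store.put(predicted_view, render_from_model(model_data, predicted_view))

    worker = threading.Thread(target=prerender_future)
    worker.start()                                   # overlaps frame-k processing
    image_k = render_from_model(model_data, view_info_k)
    worker.join()                                    # frame k+1's image is stored
    return image_k
```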
In summary, in this embodiment, after the first rendering view angle information of the current frame of the object to be rendered (that is, the actually collected rendering view angle information) is obtained, whether a first pre-rendering image matching it is currently stored can be detected first. If so, the first pre-rendering image is directly used to quickly obtain the target rendering image of the current frame, saving the time spent on the image rendering operation and improving model rendering efficiency. If not, image rendering is performed with the model data according to the first rendering view angle information to obtain the target rendering image of the current frame, ensuring that no unrendered frame image is missed; moreover, before the target rendering image of the current frame is output, the first rendering view angle information of the current frame is used to predict the second pre-rendering view angle information of at least one future frame, image rendering is further performed with the corresponding model data to obtain the second pre-rendering images of the corresponding future frames, and these are then stored, so that a matching second pre-rendering image can be called when a future frame arrives and its target rendering image obtained. This effectively reduces the overall image-rendering delay and improves the rendering efficiency of the whole three-dimensional model of the object to be rendered.
Referring to fig. 6, a flowchart of still another alternative example of the image rendering control method according to an embodiment of the present application is shown. This embodiment may be an optional refined implementation of the image rendering control method described in the above embodiments, although the method is not limited to the refinement described here. As shown in fig. 6, the method may include:
step S31, obtaining first rendering view angle information of a current frame of an object to be rendered;
step S32, second rendering view angle information acquired in the first history frame is called to obtain a plurality of prerendered view angle information;
regarding the process of predicting and obtaining the pre-rendering view angle information of each of the plurality of future frames and the corresponding pre-rendered image in the first historical frame and storing the pre-rendered image, reference may be made to the description of the corresponding parts of the above embodiment, which is not repeated herein.
Step S33, comparing the first rendering view angle information with the retrieved pre-rendering view angle information;
step S34, determining the prerendered view angle information with the comparison result meeting the condition as first prerendered view angle information;
In practical applications of this embodiment, the predicted pre-rendering view angle information may differ to a certain extent from the rendering view angle information actually collected in the corresponding frame; in that case, the actual target rendering image can be obtained by calibrating the pre-rendering image. To reduce the calibration workload and improve the reliability of the calibrated target rendering image, the pre-rendering view angle information with the smallest difference from the actually collected rendering view angle information is taken as the first pre-rendering view angle information.
Therefore, this embodiment of the application may obtain the difference between the first rendering view angle information and each piece of retrieved pre-rendering view angle information. If the difference is smaller than an angle threshold (a relatively small value, not limited to any specific number), the comparison result between the first rendering view angle information and the corresponding pre-rendering view angle information may be considered to satisfy the condition, and the pre-rendering view angle information satisfying the condition is determined as the first pre-rendering view angle information, i.e., the pre-rendering view angle information matching the first rendering view angle information.
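A sketch of that comparison follows, matching on the rotational components of the Pose6Dof sketch above; the Euclidean distance measure and the threshold value are assumptions, since the patent fixes neither.

```python
# Sketch of steps S33-S34: pick the stored pre-rendering view angle whose
# difference from the collected view angle is below a small angle threshold.
import math

ANGLE_THRESHOLD = 0.02  # radians; illustrative only

def find_matching_prerender(first_view_info, store):
    best, best_diff = None, ANGLE_THRESHOLD
    for view_info, image in store.entries():
        diff = math.dist(view_info.rotation(), first_view_info.rotation())
        if diff < best_diff:
            best, best_diff = (view_info, image), diff
    return best  # None when no comparison result satisfies the condition
```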
Step S35, a first prerendered image stored in association with first prerendered view angle information is acquired;
step S36, obtaining a target rendering image of the current frame according to the first pre-rendering image.
Because the first historical frame has already stored the plurality of pieces of pre-rendering view angle information, the plurality of pre-rendering images, and the association relationships between them, once the first pre-rendering view angle information matching the first rendering view angle information of the current frame is obtained, the pre-rendering image stored in association with it, namely the first pre-rendering image, can be called according to that association relationship, and the target rendering image of the current frame can then be obtained directly and quickly using the first pre-rendering image; the specific implementation process is not described in detail.
In this way, after the first rendering view angle information of the current frame of the object to be rendered is obtained, it can be compared with the plurality of predicted pieces of pre-rendering view angle information, the pre-rendering view angle information whose comparison result satisfies the condition is selected as the first pre-rendering view angle information, and the first pre-rendering image stored in association with it is acquired. The first pre-rendering image is then used directly to obtain the target rendering image of the current frame quickly, without a rendering operation, which reduces the image-rendering delay of the current frame, improves model rendering efficiency, and further improves user experience.
In some embodiments, in line with the above analysis, the rendering view angle information of a frame actually collected often differs from the pre-rendering view angle information of that frame predicted in advance, so the corresponding pre-rendering image differs from the actual target rendering image of the frame and needs to be calibrated. In this case, this embodiment may perform ATW (Asynchronous Timewarp) processing on the pre-rendering image to obtain the corresponding target rendering image.
ATW is a technology for generating intermediate frames, which can be used to correct image frames. Applied in the virtual reality field, it can reduce the picture jitter of virtual reality scenes, the scene-rendering delay caused by overly fast motion, and similar problems; by filling in intermediate frames, it avoids noticeably reducing rendering quality even when the frame rate drops.
In practical applications of this embodiment, when there is a difference between the first pre-rendering view angle information and the first rendering view angle information, or when the difference exceeds an error threshold (i.e., the error value allowed to exist), the server may use ATW technology to correct the first pre-rendering image according to the first pre-rendering view angle information, obtaining the target rendering image of the current frame. For instance, the positions of the degrees of freedom in which the difference lies can be used to correct the two-dimensional pre-rendering image; this generally does not consume too many system resources, and even for complex scenes a new image frame can be generated with little computation.
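Real ATW reprojects the frame with the full orientation delta; the deliberately minimal stand-in below only shifts the pre-rendering image in 2D by an amount proportional to the yaw/pitch difference, to make the idea of a cheap correction concrete. The scale factor and the use of np.roll are assumptions of this sketch.

```python
# Minimal stand-in for the ATW correction: a 2D shift proportional to the
# small rotational difference. Real ATW performs a proper reprojection.
import numpy as np

def correct_with_atw(image, prerender_view, actual_view, pixels_per_radian=500.0):
    dx = int((actual_view.yaw - prerender_view.yaw) * pixels_per_radian)
    dy = int((actual_view.pitch - prerender_view.pitch) * pixels_per_radian)
    # np.roll keeps the sketch short; a production warp would fill edges properly.
    return np.roll(np.roll(image, -dy, axis=0), -dx, axis=1)
```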
Of course, in practical applications of the present application, when there is a difference between the first pre-rendering view angle information and the first rendering view angle information, or when the difference exceeds the error threshold, the server may instead send the first pre-rendering view angle information and the target rendering image (which at this point may be the pre-rendering image) to the electronic device, so that the electronic device adopts ATW technology and corrects the target rendering image according to the first pre-rendering view angle information. Compared with the existing rendering-image return flow, the difference is that not only the rendering image but also the corresponding pre-rendering view angle information needs to be transmitted; in line with the description of the pre-rendering technique above, compression processing may be performed before transmission, and the specific compression process is not described in detail.
Referring to fig. 7, a flowchart of another alternative example of the image rendering control method according to an embodiment of the present application is shown. It may be a further optional refined implementation of the image rendering control method described in the foregoing embodiments, mainly refining how the second pre-rendering view angle information of at least one future frame is obtained from the rendering view angle information actually collected in the current frame, where the rendering view angle information is a multi-degree-of-freedom rendering view angle. As shown in fig. 7, the process may include:
Step S41, utilizing the multi-degree-of-freedom rendering view angles of each of a plurality of historical frames to obtain the multi-degree-of-freedom rendering motion direction between frames;
This embodiment can use the changes in the multi-degree-of-freedom rendering view angle across the plurality of historical frames to determine the motion direction of the user wearing the electronic device, that is, the multi-degree-of-freedom rendering motion direction between frames, which serves as the prediction direction for the subsequent prediction of rendering view angles; the specific implementation process is not described in detail.
Step S42, obtaining a multidimensional spherical space range formed by a first multi-degree-of-freedom rendering view angle of a current frame;
Step S43, based on the multi-degree-of-freedom rendering motion direction, data sampling is performed in the multi-dimensional spherical space range, and at least one multi-degree-of-freedom prerendering view angle is determined as a second multi-degree-of-freedom rendering view angle of at least one future frame.
Referring to the multi-degree-of-freedom spherical space schematic diagram shown in fig. 8, the present application may determine the corresponding multidimensional spherical space range, such as a 3- or 6-dimensional spherical space range, according to the change of each degree-of-freedom spatial pose contained in the multi-degree-of-freedom rendering view angle between two adjacent frames; the specific construction process is not described in detail. Then, considering the limits on multi-degree-of-freedom motion between two adjacent frames, when predicting the multi-degree-of-freedom rendering view angle of a future frame, the second multi-degree-of-freedom rendering view angle of at least one future frame can be obtained by data sampling directly within the multidimensional spherical space range of the current frame.
In the data sampling process, a smaller sampling interval is more conducive to acquiring the rendering image of the future frame quickly and reliably.
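One way steps S41 to S43 could look in code: estimate the inter-frame motion direction from the rotational poses of recent historical frames, then sample candidate view angles inside a spherical range around the current view angle, biased along that direction. The radius, sample count, and jitter are assumptions of this sketch, and at least two historical frames are required.

```python
# Sketch of steps S41-S43: sample candidate multi-degree-of-freedom
# pre-rendering view angles along the inter-frame motion direction.
import numpy as np

def sample_future_views(history, radius=0.05, num_samples=4):
    poses = np.asarray(history, dtype=float)   # rows: (yaw, pitch, roll), >= 2 rows
    direction = poses[-1] - poses[-2]          # multi-dof rendering motion direction
    norm = np.linalg.norm(direction)
    if norm > 0:
        direction = direction / norm
    current = poses[-1]
    samples = []
    for k in range(1, num_samples + 1):
        step = current + direction * radius * k / num_samples  # along the direction
        jitter = np.random.normal(scale=radius * 0.1, size=3)  # small spread
        samples.append(step + jitter)          # stays near the spherical range
    return samples
```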
In still other embodiments, following the inventive concept for obtaining the second pre-rendering view angle information of at least one future frame from the rendering view angle information actually collected in the current frame, as described in the above embodiments, the present application may predict, based on the first multi-degree-of-freedom rendering view angle of the current frame, the multidimensional spherical space range of the multi-degree-of-freedom rendering view angle of the next future frame (i.e., the adjacent next frame), and then perform data sampling within that multidimensional spherical space range to obtain the second multi-degree-of-freedom pre-rendering view angle of the at least one future frame; the specific implementation process is not described in detail in the present application.
In the foregoing embodiments, the step of performing data sampling in the multidimensional spherical space range to obtain the second multi-degree-of-freedom pre-rendering view angle of at least one future frame may be adjusted adaptively according to the cloud computing resources, sampling a plurality of pre-rendering view angles. Specifically, schedulable resource information of the server may be acquired, and data sampling performed in the multidimensional spherical space range based on that schedulable resource information to obtain the second multi-degree-of-freedom pre-rendering view angles of the at least one future frame. It should be understood that the number of second multi-degree-of-freedom pre-rendering view angles can change as the schedulable resource information changes.
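How the sample count might track schedulable resources is sketched below, reusing sample_future_views from the sketch above; using the CPU core count as the resource signal and the halving rule are pure assumptions, and a real server would query its own scheduler instead.

```python
# Sketch: adapt the number of sampled pre-rendering view angles to the
# server's schedulable resources. The resource probe and mapping are assumed.
import os

def sample_budget(max_samples=8):
    cores = os.cpu_count() or 1  # stand-in for schedulable resource information
    return max(1, min(max_samples, cores // 2))

def sample_with_budget(history):
    return sample_future_views(history, num_samples=sample_budget())
```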
In summary, in this embodiment of the present application, when predicting the possible rendering view angles of the next frame, the limits on the variation of the multi-degree-of-freedom rendering view angle between adjacent frames can be taken into account. According to the multi-degree-of-freedom rendering motion direction between frames, data are sampled within the multidimensional space range under the multi-degree-of-freedom rendering view angle of the current frame, or of the predicted next frame, to obtain the second multi-degree-of-freedom rendering view angle of at least one future frame. The images of future frames are thereby pre-rendered in advance so that they can be called when the future frames arrive, achieving the purpose of improving model rendering efficiency.
Referring to fig. 9, a schematic structural diagram of an alternative example of an image rendering control device according to an embodiment of the present application may be applied to the server, and as shown in fig. 9, the image rendering control device may include:
a rendering perspective information obtaining module 210, configured to obtain first rendering perspective information of a current frame for an object to be rendered;
a detection module 220, configured to detect that a first pre-rendered image matching the first rendering perspective information exists;
the first pre-rendering image is obtained by performing image rendering by using the model data of the object to be rendered according to first pre-rendering view angle information, and the first pre-rendering view angle information is obtained according to second rendering view angle information acquired in a first history frame.
In some embodiments, the detection module 220 may include:
the pre-rendering view angle calling unit is used for calling the second rendering view angle information acquired in the first historical frame to acquire a plurality of pre-rendering view angle information;
a viewing angle comparing unit for comparing the first rendering viewing angle information with the retrieved pre-rendering viewing angle information;
a prerendering view angle determining unit configured to determine prerendering view angle information, for which the comparison result satisfies a condition, as first prerendering view angle information;
And the prerendered image acquisition unit is used for acquiring the first prerendered image stored in association with the first prerendered view angle information.
The target rendering image obtaining module 230 is configured to obtain a target rendering image of the current frame according to the first pre-rendering image.
In one possible implementation, the target rendered image obtaining module 230 may include:
the image correction unit is used for correcting the first pre-rendering image according to the first pre-rendering view angle information under the condition that the first pre-rendering view angle information and the first rendering view angle are different, so as to obtain a target rendering image of the current frame.
In yet another possible implementation manner, the apparatus may further include:
the data transmission module is used for transmitting the first pre-rendering view angle and the target rendering image to the electronic equipment under the condition that the first pre-rendering view angle information and the first rendering view angle are different, so that the electronic equipment can correct the target rendering image according to the first pre-rendering view angle information.
In some embodiments, as shown in fig. 10, the image rendering control apparatus may further include, in addition to the modules above:
an image rendering module 240, configured to, when it is detected that no first pre-rendered image matching the first rendering view angle information exists, perform image rendering with the model data according to the first rendering view angle information to obtain the target rendered image of the current frame;
a pre-rendering view angle obtaining module 250, configured to obtain second pre-rendering view angle information of at least one future frame according to the first rendering view angle information before the target rendered image of the current frame is output.
In one possible implementation, if the rendering view angle information is a multi-degree-of-freedom rendering view angle, the pre-rendering view angle obtaining module 250 may include:
a multi-degree-of-freedom rendering motion direction obtaining unit, configured to obtain the multi-degree-of-freedom rendering motion direction between frames by using the multi-degree-of-freedom rendering view angles of a plurality of history frames;

a multi-dimensional spherical space range acquisition unit, configured to acquire the multi-dimensional spherical space range formed by the first multi-degree-of-freedom rendering view angle of the current frame;

and a first data sampling unit, configured to perform data sampling within the multi-dimensional spherical space range based on the multi-degree-of-freedom rendering motion direction, and to determine at least one multi-degree-of-freedom pre-rendering view angle as the second multi-degree-of-freedom rendering view angle of at least one future frame (the sampling sketch given earlier illustrates this strategy).
In yet another possible implementation, if the rendering view angle information is a multi-degree-of-freedom rendering view angle, the pre-rendering view angle obtaining module 250 may instead include the following units (see the sketch after this list):
a multi-dimensional spherical space range prediction unit, configured to predict the multi-dimensional spherical space range of the multi-degree-of-freedom rendering view angle of the next future frame based on the first multi-degree-of-freedom rendering view angle of the current frame;

and a second data sampling unit, configured to perform data sampling within the predicted multi-dimensional spherical space range to obtain the second multi-degree-of-freedom pre-rendering view angle of at least one future frame.
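A minimal sketch of this second sampling strategy, under the same illustrative 6-DOF assumptions as above; here the predicted view angle of the next frame is the center of the sampled range.

    import numpy as np

    def sample_in_predicted_range(predicted_view_angle, radius=0.05, num_samples=8):
        # Uniformly sample candidate multi-DOF view angles inside the
        # multi-dimensional spherical range predicted for the next frame.
        rng = np.random.default_rng(0)
        center = np.asarray(predicted_view_angle, dtype=float)
        samples = []
        for _ in range(num_samples):
            direction = rng.normal(size=center.size)
            direction /= np.linalg.norm(direction)
            samples.append(center + direction * rng.uniform(0.0, radius))
        return samples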
In the foregoing possible implementations, the first data sampling unit and/or the second data sampling unit may include (see the sketch below):
a schedulable resource information acquisition unit, configured to acquire schedulable resource information of the server;

and a multi-degree-of-freedom pre-rendering view angle obtaining unit, configured to perform data sampling within the multi-dimensional spherical space range based on the schedulable resource information, to obtain the second multi-degree-of-freedom pre-rendering view angle of at least one future frame;

wherein the number of second multi-degree-of-freedom pre-rendering view angles may change as the schedulable resource information changes.
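A minimal sketch of how the number of pre-rendering view angles could track schedulable resources; the linear mapping, bounds, and the free-resource fraction input are illustrative assumptions.

    def prerender_sample_count(free_resource_fraction, min_samples=1, max_samples=16):
        # Scale the number of pre-rendering view angles with the server's
        # schedulable resources; the linear mapping is an assumption.
        n = round(min_samples + free_resource_fraction * (max_samples - min_samples))
        return max(min_samples, min(max_samples, int(n)))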
a pre-rendering module 260, configured to perform image rendering with the model data according to the second pre-rendering view angle information of the at least one future frame, to obtain a second pre-rendered image of the corresponding future frame;

and a storage module 270, configured to store the second pre-rendering view angle information of the at least one future frame in association with the corresponding second pre-rendered image (a combined sketch of these two modules follows).
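A minimal sketch of modules 260 and 270 working together, reusing the PrerenderCache sketch above; render_fn is an assumed placeholder for the actual renderer, not an API from the patent.

    def prerender_future_frames(cache, render_fn, model_data, future_view_angles):
        # render_fn(model_data, view_angle) -> image is an assumed callable.
        for view_angle in future_view_angles:
            image = render_fn(model_data, view_angle)   # pre-render (module 260)
            cache.entries.append((view_angle, image))   # store assoc. (module 270)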
It should be noted that the modules and units in the foregoing apparatus embodiments may be stored as program modules in a memory, and the processor executes the program modules stored in the memory to implement the corresponding functions. For the functions implemented by each program module and their combinations, and the technical effects achieved, reference may be made to the corresponding parts of the foregoing method embodiments, which are not repeated here.
The present application also provides a storage medium on which a computer program may be stored; the computer program may be called and loaded by a processor to implement the steps of the image rendering control method described in the above embodiments.
Referring to fig. 11, which shows a hardware architecture diagram of an alternative example of a server implementing the image rendering control method proposed in the present application, the server may include a communication interface 31, a memory 32, and a processor 33, wherein:
the number of each of the communication interface 31, the memory 32, and the processor 33 may be at least one, and the communication interface 31, the memory 32, and the processor 33 may all be connected to a communication bus to exchange data with one another; the specific implementation may be determined by the requirements of the particular application scenario and is not detailed in this application.
The communication interface 31 may include an interface capable of data interaction over a wireless communication network, such as the interface of a communication module containing a Wi-Fi module or a 5G/6G (fifth/sixth generation mobile communication network) module; the communication interface 31 may also include a data interface, such as a USB interface or a serial/parallel interface, for data interaction between components inside the server. The specific contents of the communication interface 31 are not limited here.
In the embodiments of the present application, the memory 32 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device or other non-volatile solid-state storage device. The processor 33 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device.
In practical applications of this embodiment, the memory 32 may store a program implementing the image rendering control method described in any of the method embodiments above, and the processor 33 may load and execute that program to implement each step of the method. For the specific implementation, refer to the description of the corresponding parts of the corresponding embodiments, which is not repeated here.
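Putting the earlier sketches together, the control flow such a stored program could implement might look as follows; all names are carried over from the illustrative sketches above, not from the patent itself.

    import numpy as np

    def handle_frame_request(cache, render_fn, model_data, first_view_angle):
        # Overall control flow: try the pre-render cache first; correct on a
        # view-angle mismatch; otherwise fall back to rendering on demand.
        match = cache.find_match(first_view_angle)
        if match is not None:
            view_angle, image = match
            if not np.allclose(view_angle, first_view_angle):
                image = correct_prerendered_image(image, view_angle, first_view_angle)
            return image  # target rendered image of the current frame
        return render_fn(model_data, first_view_angle)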
It should be understood that the structure of the server shown in fig. 11 does not limit the server in the embodiments of the present application; in practical applications, the server may include more or fewer components than shown in fig. 11, or combine certain components, which are not enumerated here.
In this specification, the embodiments are described in a progressive or parallel manner, each embodiment focusing on its differences from the others; identical or similar parts of the embodiments may be referred to one another. Since the device and the server disclosed in the embodiments correspond to the methods disclosed in the embodiments, their descriptions are relatively brief; for relevant details, refer to the description of the methods.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. An image rendering control method, the method comprising:
acquiring first rendering view angle information of a current frame of an object to be rendered;
detecting that there is a first pre-rendered image matching the first rendering view angle information, wherein the first pre-rendered image is obtained by performing image rendering with model data of the object to be rendered according to first pre-rendering view angle information, and the first pre-rendering view angle information is obtained from second rendering view angle information acquired in a first history frame;

obtaining a target rendered image of the current frame according to the first pre-rendered image;

wherein the detecting that there is a first pre-rendered image matching the first rendering view angle information comprises:

retrieving the second rendering view angle information acquired in the first history frame to obtain a plurality of pieces of pre-rendering view angle information, wherein the plurality of pieces of pre-rendering view angle information, a plurality of pre-rendered images, and the association relationships between them are stored for the first history frame;

comparing the first rendering view angle information with the retrieved pre-rendering view angle information;

determining the pre-rendering view angle information whose comparison result satisfies the condition as the first pre-rendering view angle information;

and acquiring the first pre-rendered image stored in association with the first pre-rendering view angle information.
2. The method of claim 1, further comprising:

detecting that no first pre-rendered image matching the first rendering view angle information exists, and performing image rendering with the model data according to the first rendering view angle information to obtain a target rendered image of the current frame;

before outputting the target rendered image of the current frame, obtaining second pre-rendering view angle information of at least one future frame according to the first rendering view angle information;

performing image rendering with the model data according to the second pre-rendering view angle information of the at least one future frame to obtain a second pre-rendered image of the corresponding future frame;

and storing the second pre-rendering view angle information of the at least one future frame in association with the corresponding second pre-rendered image.
3. The method of claim 2, wherein the rendering view angle information is a multi-degree-of-freedom rendering view angle, and the obtaining second pre-rendering view angle information of at least one future frame according to the first rendering view angle information comprises:

obtaining the multi-degree-of-freedom rendering motion direction between frames by using the multi-degree-of-freedom rendering view angles of a plurality of history frames;

acquiring a multi-dimensional spherical space range formed by a first multi-degree-of-freedom rendering view angle of the current frame;

and performing data sampling within the multi-dimensional spherical space range based on the multi-degree-of-freedom rendering motion direction, and determining at least one multi-degree-of-freedom pre-rendering view angle as a second multi-degree-of-freedom rendering view angle of at least one future frame.
4. The method of claim 2, wherein the rendering view angle information is a multi-degree-of-freedom rendering view angle, and the obtaining second pre-rendering view angle information of at least one future frame according to the first rendering view angle information comprises:

predicting a multi-dimensional spherical space range of the multi-degree-of-freedom rendering view angle of the next future frame based on the first multi-degree-of-freedom rendering view angle of the current frame;

and performing data sampling within the multi-dimensional spherical space range to obtain a second multi-degree-of-freedom pre-rendering view angle of at least one future frame.
5. The method of claim 4, wherein the performing data sampling within the multi-dimensional spherical space range to obtain a second multi-degree-of-freedom pre-rendering view angle of at least one future frame comprises:
acquiring schedulable resource information of a server;
and performing data sampling within the multi-dimensional spherical space range based on the schedulable resource information to obtain the second multi-degree-of-freedom pre-rendering view angle of at least one future frame;

wherein the number of the second multi-degree-of-freedom pre-rendering view angles may change as the schedulable resource information changes.
6. The method of claim 1, wherein the obtaining the target rendered image of the current frame according to the first pre-rendered image comprises:

if the first pre-rendering view angle information is different from the first rendering view angle information, correcting the first pre-rendered image according to the first pre-rendering view angle information to obtain the target rendered image of the current frame.
7. The method of claim 1, further comprising:

if the first pre-rendering view angle information is different from the first rendering view angle information, sending the first pre-rendering view angle information and the target rendered image to the electronic device, so that the electronic device corrects the target rendered image according to the first pre-rendering view angle information.
8. An image rendering control apparatus, the apparatus comprising:
a rendering view angle information acquisition module, configured to acquire first rendering view angle information of a current frame of an object to be rendered;

a detection module, configured to detect that there is a first pre-rendered image matching the first rendering view angle information, wherein the first pre-rendered image is obtained by performing image rendering with the model data of the object to be rendered according to first pre-rendering view angle information, and the first pre-rendering view angle information is obtained from second rendering view angle information acquired in a first history frame;

and a target rendered image obtaining module, configured to obtain a target rendered image of the current frame according to the first pre-rendered image;

wherein the detection module is specifically configured to: retrieve the second rendering view angle information acquired in the first history frame to obtain a plurality of pieces of pre-rendering view angle information, wherein the plurality of pieces of pre-rendering view angle information, a plurality of pre-rendered images, and the association relationships between them are stored for the first history frame; compare the first rendering view angle information with the retrieved pre-rendering view angle information; determine the pre-rendering view angle information whose comparison result satisfies the condition as the first pre-rendering view angle information; and acquire the first pre-rendered image stored in association with the first pre-rendering view angle information.
9. A server, the server comprising:
a communication interface;
a memory for storing a program for implementing the image rendering control method according to any one of claims 1 to 7;
a processor, configured to call and load the program in the memory to implement the steps of the image rendering control method according to any one of claims 1 to 7.
CN202010473505.5A 2020-05-29 2020-05-29 Image rendering control method and device and server Active CN111627116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010473505.5A CN111627116B (en) 2020-05-29 2020-05-29 Image rendering control method and device and server

Publications (2)

Publication Number Publication Date
CN111627116A CN111627116A (en) 2020-09-04
CN111627116B (en) 2024-02-27

Family

ID=72259202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010473505.5A Active CN111627116B (en) 2020-05-29 2020-05-29 Image rendering control method and device and server

Country Status (1)

Country Link
CN (1) CN111627116B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10939038B2 (en) * 2017-04-24 2021-03-02 Intel Corporation Object pre-encoding for 360-degree view for optimal quality and latency
CN114078092A (en) * 2020-08-11 2022-02-22 中兴通讯股份有限公司 Image processing method and device, electronic equipment and storage medium
CN114255315A (en) * 2020-09-25 2022-03-29 华为云计算技术有限公司 Rendering method, device and equipment
CN113316020B (en) * 2021-05-28 2023-09-15 上海曼恒数字技术股份有限公司 Rendering method, device, medium and equipment
CN113721874A (en) * 2021-07-29 2021-11-30 阿里巴巴(中国)有限公司 Virtual reality picture display method and electronic equipment
CN113485776B (en) * 2021-08-02 2024-04-05 竞技世界(北京)网络技术有限公司 Method and device for processing entity in multithreading rendering
CN114489538A (en) * 2021-12-27 2022-05-13 炫彩互动网络科技有限公司 Terminal display method of cloud game VR
CN114077508B (en) * 2022-01-19 2022-10-11 维塔科技(北京)有限公司 Remote image rendering method and device, electronic equipment and medium
CN115942049A (en) * 2022-08-26 2023-04-07 北京博雅睿视科技有限公司 VR video-oriented visual angle switching method, device, equipment and medium
CN115423920B (en) * 2022-09-16 2024-01-30 如你所视(北京)科技有限公司 VR scene processing method, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106502427A (en) * 2016-12-15 2017-03-15 北京国承万通信息科技有限公司 Virtual reality system and its scene rendering method
WO2017131977A1 (en) * 2016-01-25 2017-08-03 Microsoft Technology Licensing, Llc Frame projection for augmented reality environments
CN107274472A (en) * 2017-06-16 2017-10-20 福州瑞芯微电子股份有限公司 A kind of method and apparatus of raising VR play frame rate
CN108171783A (en) * 2018-03-20 2018-06-15 联想(北京)有限公司 Image rendering method, system and electronic equipment
CN110136082A (en) * 2019-05-10 2019-08-16 腾讯科技(深圳)有限公司 Occlusion culling method, apparatus and computer equipment
CN110351480A (en) * 2019-06-13 2019-10-18 歌尔科技有限公司 Image processing method, device and electronic equipment for electronic equipment
CN111051959A (en) * 2017-09-01 2020-04-21 奇跃公司 Generating new frames using rendered and non-rendered content from previous perspectives

Also Published As

Publication number Publication date
CN111627116A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN111627116B (en) Image rendering control method and device and server
US20220174252A1 (en) Selective culling of multi-dimensional data sets
CN108139204B (en) Information processing apparatus, method for estimating position and/or orientation, and recording medium
US10306180B2 (en) Predictive virtual reality content streaming techniques
CN107911737B (en) Media content display method and device, computing equipment and storage medium
JP6131950B2 (en) Information processing apparatus, information processing method, and program
CN108292489B (en) Information processing apparatus and image generating method
US20170155885A1 (en) Methods for reduced-bandwidth wireless 3d video transmission
EP3522542B1 (en) Switching between multidirectional and limited viewport video content
US20170188058A1 (en) Video content distribution system and content management server
CN109743626B (en) Image display method, image processing method and related equipment
CN109845275B (en) Method and apparatus for session control support for visual field virtual reality streaming
US20200120380A1 (en) Video transmission method, server and vr playback terminal
JP6620079B2 (en) Image processing system, image processing method, and computer program
CN110996097B (en) VR multimedia experience quality determination method and device
CN111583350A (en) Image processing method, device and system and server
CN108363486A (en) Image display device and method, image processing apparatus and method and storage medium
KR102503337B1 (en) Image display method, apparatus and system
CN114175630A (en) Methods, systems, and media for rendering immersive video content using a point of gaze grid
CN109978945B (en) Augmented reality information processing method and device
JP6949475B2 (en) Image processing equipment, image processing methods and programs
US20180364800A1 (en) System for Picking an Object Base on View-Direction and Method Thereof
US20220343583A1 (en) Information processing apparatus, 3d data generation method, and program
CN114690894A (en) Method and device for realizing display processing, computer storage medium and terminal
US8619124B2 (en) Video data processing systems and methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant