CN111627116A - Image rendering control method and device and server

Info

Publication number
CN111627116A
CN111627116A (application CN202010473505.5A; granted publication CN111627116B)
Authority
CN
China
Prior art keywords
rendering
image
visual angle
angle information
information
Prior art date
Legal status
Granted
Application number
CN202010473505.5A
Other languages
Chinese (zh)
Other versions
CN111627116B (en)
Inventor
毛世杰
盛兴东
刘云辉
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202010473505.5A
Publication of CN111627116A
Application granted
Publication of CN111627116B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The application provides an image rendering control method, device, and server. While the rendered image of a first historical frame is being obtained, the second rendering view angle information of that frame can be used to predict pre-rendering view angle information for future frames, and corresponding pre-rendered images are generated and stored from that information. Consequently, after the first rendering view angle information of the current frame of the object to be rendered is acquired, a first pre-rendered image matching it can be detected directly among the previously stored pre-rendered images, and the target rendered image of the current frame can then be obtained quickly and accurately from the first pre-rendered image without a rendering operation. This greatly shortens the interval between rendering the target image and the previous frame, reduces the rendering delay between adjacent frames, improves image rendering efficiency, and further improves the user's experience of the virtual reality scene.

Description

Image rendering control method and device and server
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image rendering control method, an image rendering control device, and a server.
Background
Nowadays, with the development and popularization of 5G networks (fifth-generation mobile communication networks), cloud rendering can be adopted in AR (Augmented Reality) and VR (Virtual Reality) device applications to render complex models, so that the device wearer can see detailed information of a complex model in real time, improving the user experience.
In cloud rendering, a 3D program runs on a remote server: the user issues a control instruction through a terminal, the server responds by executing the corresponding rendering task, and the resulting rendered pictures are fed back to the user terminal for display.
However, in image rendering control based on existing cloud rendering technology, the server usually finishes rendering one frame from the model data and transmits the rendered image to the user terminal before acquiring the rendering view angle information of the next frame and continuing. Each frame therefore incurs a certain delay, so rendering the whole model takes a long time, the user waits a long time, and the experience is poor.
Disclosure of Invention
In view of this, in order to reduce the rendering delay between adjacent frames and improve model rendering efficiency, the present application provides the following technical solutions:
In one aspect, the present application provides an image rendering control method, including:
acquiring first rendering view angle information of a current frame of an object to be rendered;
detecting that a first pre-rendered image matching the first rendering view angle information exists, wherein the first pre-rendered image is obtained by rendering the model data of the object to be rendered according to first pre-rendering view angle information, and the first pre-rendering view angle information is obtained from second rendering view angle information acquired in a first historical frame;
and obtaining a target rendered image of the current frame from the first pre-rendered image.
Optionally, the method further includes:
detecting that no first pre-rendered image matching the first rendering view angle information exists, and rendering with the model data according to the first rendering view angle information to obtain the target rendered image of the current frame;
before outputting the target rendered image of the current frame, obtaining second pre-rendering view angle information of at least one future frame from the first rendering view angle information;
rendering with the model data according to the second pre-rendering view angle information of the at least one future frame to obtain a second pre-rendered image of the corresponding future frame;
and storing the second pre-rendering view angle information of the at least one future frame in association with the corresponding second pre-rendered image.
Optionally, the detecting that a first pre-rendered image matching the first rendering view angle information exists includes:
retrieving the pieces of pre-rendering view angle information obtained from the second rendering view angle information acquired in the first historical frame;
comparing the first rendering view angle information with each retrieved piece of pre-rendering view angle information;
determining the pre-rendering view angle information whose comparison result satisfies a condition as the first pre-rendering view angle information;
and acquiring the first pre-rendered image stored in association with the first pre-rendering view angle information.
Optionally, the rendering view angle information is a multi-degree-of-freedom rendering view angle, and obtaining the second pre-rendering view angle information of at least one future frame from the first rendering view angle information includes:
obtaining the inter-frame multi-degree-of-freedom rendering motion direction from the multi-degree-of-freedom rendering view angles of a plurality of historical frames;
acquiring the multi-dimensional spherical space range formed by the first multi-degree-of-freedom rendering view angle of the current frame;
and sampling data within the multi-dimensional spherical space range based on the multi-degree-of-freedom rendering motion direction, and determining at least one multi-degree-of-freedom pre-rendering view angle as the second multi-degree-of-freedom rendering view angle of at least one future frame.
Optionally, the rendering view angle information is a multi-degree-of-freedom rendering view angle, and obtaining the second pre-rendering view angle information of at least one future frame from the first rendering view angle information includes:
predicting the multi-dimensional spherical space range of the multi-degree-of-freedom rendering view angle of the next future frame based on the first multi-degree-of-freedom rendering view angle of the current frame;
and sampling data within that multi-dimensional spherical space range to obtain the second multi-degree-of-freedom pre-rendering view angle of at least one future frame.
Optionally, sampling data within the multi-dimensional spherical space range to obtain the second multi-degree-of-freedom pre-rendering view angle of at least one future frame includes:
acquiring schedulable resource information of the server;
sampling data within the multi-dimensional spherical space range based on the schedulable resource information to obtain the second multi-degree-of-freedom pre-rendering view angle of at least one future frame;
wherein the number of second multi-degree-of-freedom pre-rendering view angles varies with the schedulable resource information.
Optionally, obtaining the target rendered image of the current frame from the first pre-rendered image includes:
if the first pre-rendering view angle information differs from the first rendering view angle, correcting the first pre-rendered image according to the first pre-rendering view angle information to obtain the target rendered image of the current frame.
Optionally:
if the first pre-rendering view angle information differs from the first rendering view angle, sending the first pre-rendering view angle and the target rendered image to the electronic device, so that the electronic device corrects the target rendered image according to the first pre-rendering view angle information.
In another aspect, the present application further provides an image rendering control apparatus, including:
a rendering view angle information acquisition module, configured to acquire first rendering view angle information of a current frame of an object to be rendered;
a detection module, configured to detect that a first pre-rendered image matching the first rendering view angle information exists, wherein the first pre-rendered image is obtained by rendering the model data of the object to be rendered according to first pre-rendering view angle information, and the first pre-rendering view angle information is obtained from second rendering view angle information acquired in a first historical frame;
and a target rendered image obtaining module, configured to obtain a target rendered image of the current frame from the first pre-rendered image.
In another aspect, the present application further provides a server, including:
a communication interface;
a memory for storing a program implementing the image rendering control method described above;
and a processor for calling and loading the program in the memory to implement the steps of the image rendering control method.
In summary, the application provides an image rendering control method, device, and server. While the rendered image of the first historical frame is being obtained, the second rendering view angle information of that frame can be used to predict pre-rendering view angle information for future frames, and corresponding pre-rendered images are generated and stored from it. Thus, after the first rendering view angle information of the current frame of the object to be rendered is acquired, a matching first pre-rendered image can be detected directly among the previously stored pre-rendered images, and the target rendered image of the current frame obtained quickly and accurately from it without a rendering operation. This greatly shortens the interval between rendering the target image and the previous frame, reduces the rendering delay between adjacent frames, improves image rendering efficiency, and further improves the user's experience of the virtual reality scene.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of an optional system for implementing the image rendering control method proposed in the present application;
FIG. 2 is a flow diagram of a cloud rendering application scene;
FIG. 3 is a flow chart of an optional example of the image rendering control method proposed in the present application;
FIG. 4 is a flow chart of yet another optional example of the image rendering control method proposed in the present application;
FIG. 5 is a flow chart of yet another optional example of the image rendering control method proposed in the present application;
FIG. 6 is a flow chart of yet another optional example of the image rendering control method proposed in the present application;
FIG. 7 is a flow chart of still another optional example of the image rendering control method proposed in the present application;
FIG. 8 is a diagram of a multi-dimensional spherical space of multiple rendering view angles in the image rendering control method proposed in the present application;
FIG. 9 is a schematic structural diagram of an optional example of the image rendering control apparatus proposed in the present application;
FIG. 10 is a schematic structural diagram of still another optional example of the image rendering control apparatus proposed in the present application;
FIG. 11 is a hardware configuration diagram of an optional example of a server implementing the image rendering control method proposed in the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
It should be noted that, for convenience of description, only the portions related to the invention are shown in the drawings. The embodiments and the features of the embodiments in the present application may be combined with each other without conflict. Moreover, as used in this application, "system", "apparatus", "unit" and/or "module" merely distinguish components, elements, parts, or assemblies at different levels, and may be replaced by other expressions that serve the same purpose.
As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements. An element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more. The terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of technical features indicated; thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features.
Additionally, flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that these operations are not necessarily performed in the exact order shown; the steps may instead be processed in reverse order or simultaneously, other operations may be added to the processes, and one or more steps may be removed from them.
Referring to FIG. 1, a schematic structural diagram of an optional system to which the image rendering control method provided in the present application is applicable, the system can serve various cloud rendering application scenarios. As shown in FIG. 1, the system may include at least one electronic device 100 and at least one server 200, where:
the electronic device 100 may be configured to acquire the rendering view angle information corresponding to each of multiple consecutive frames of an object to be rendered while a user wears the device.
In this embodiment, the electronic device 100 may include various types of AR (Augmented Reality) and VR (Virtual Reality) devices, such as VR smart glasses, a helmet, or a handle. The specific device type is not limited in this application; different users may choose the electronic device used to view an object to be rendered in a virtual scene according to their actual needs and usage habits, which are not described in detail herein.
The server 200 may be a service device that supports cloud computing and implements cloud rendering, that is, a cloud server deployed in the cloud, and may consist of one or more servers; its specific composition is not limited in the present application. It may implement the image rendering control method and apparatus provided in the following embodiments, whose specific implementation is described in the corresponding parts below.
In practical applications, to improve the interactive experience and control over the virtual world in a cloud rendering application scene (see the flow chart of FIG. 2), the electronic device 100 generally collects rendering view angle information at different times and sends it to the server 200. The server 200 renders with the model data according to that information, obtains the rendered image of the model of the object to be rendered at that view angle, compresses it, and feeds it back to the electronic device 100, which decompresses and displays it.
The rendering view angle information acquired by the electronic device 100 at different times may be a multiple-degree-of-freedom (dof) spatial pose, such as a 3dof or 6dof spatial pose. 3dof refers to the 3 rotational degrees of freedom, e.g., rotation of the wearer's head in different directions, but cannot capture forward/backward or left/right spatial displacement of the head; it suits application scenarios such as watching a VR movie. 6dof adds, on top of 3dof, the up/down, forward/backward, and left/right changes brought about by movement of the wearer's body, enabling better tracking and positioning; in a game scenario, for example, it lets the wearer cross obstacles, dodge monsters, or climb interactively, for a more realistic and immersive experience.
Accordingly, the electronic device 100 may include various types of sensors for sensing spatial poses of the device in its different degrees of freedom to form the rendering view angle information of the corresponding frames; the detailed detection process is not described here. A minimal illustration of such a pose appears below.
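Purely for illustration, the multi-degree-of-freedom spatial pose that makes up a piece of rendering view angle information might be represented as follows; the field names and units are assumptions of this sketch, not part of the disclosure. Python is used for all sketches in this document.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pose6Dof:
    """Hypothetical 6dof rendering view angle: 3 rotations plus 3 translations.

    A 3dof pose would carry only the three rotation fields.
    """
    yaw: float      # rotation about the vertical axis, radians
    pitch: float    # rotation about the lateral axis, radians
    roll: float     # rotation about the forward axis, radians
    x: float = 0.0  # left/right displacement, absent in a 3dof pose
    y: float = 0.0  # up/down displacement
    z: float = 0.0  # forward/back displacement
```

Freezing the dataclass makes poses hashable, so they could key the pre-render store assumed in the later sketches.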
In the prior art described in the Background section, the rendering view angle information of the next frame is acquired only after a rendered frame has been transmitted to the electronic device, so rendering the entire three-dimensional model of the object to be rendered takes a long time and the user experience in a cloud rendering application scene is poor. To improve model rendering efficiency, the present application introduces pre-rendering into the cloud rendering process: during the rendering of a given frame, the pre-rendering view angle information of future frames is predicted directly, and images are rendered from the model data according to that information to obtain the pre-rendered images of the future frames. The specific implementation is described in the corresponding parts of the following embodiments.
Referring to FIG. 3, a flowchart of an optional example of the image rendering control method provided in an embodiment of the present application, the method may be applied to the server described above. As shown in FIG. 3, the image rendering control method may include, but is not limited to, the following steps:
Step S11: acquiring first rendering view angle information of a current frame of an object to be rendered;
In connection with the above description of the cloud rendering application scenario, the user may, via an electronic device (such as the AR or VR device mentioned above; not enumerated again below) or another terminal interacting with the server, determine the object to be rendered that currently needs to be rendered and presented, such as an entire virtual scene or a particular virtual object within it. After the server obtains the model data of the object to be rendered, while the user wears the electronic device and interacts with the presented virtual scene, the sensors in the device collect the first rendering view angle information, such as 3dof, 6dof, or other multi-degree-of-freedom spatial pose data, and upload it to the server in real time. The server thus obtains the first rendering view angle information of the current frame and can determine which rendering view of the object to be rendered must be presented to the user.
It can be seen that, in step S11, the server may directly or indirectly receive, through a wireless communication network, for example a 5G (fifth-generation mobile communication) or 6G (sixth-generation mobile communication) network or a WiFi (wireless local area network) connection, the first rendering view angle information of the current frame collected and sent by the electronic device while the user wears it.
It should be noted that the collection of the first rendering view angle information is not limited to the sensors configured in the electronic device; other collection devices in the space where the user is located, such as cameras or a smart bracelet, may also observe the wearer to determine the first rendering view angle information of the current frame for the object to be rendered.
To ensure the real-time performance and reliability of the interaction between the user and the virtual scene: after the user puts on the electronic device, the view angle into the virtual scene usually changes continuously, i.e., at least one part of the user's body moves, such as the head, torso, eyes, or hands. The rendering view angle information of successive frames therefore needs to be acquired continuously, and the image rendering of the corresponding frames completed according to the steps described below.
A frame here is the smallest single picture in an image animation: one frame is a static picture, and consecutive frames form an animation, such as a dynamic game scene or a VR movie. The frame rate is the number of pictures transmitted per second, which can be understood as the number of times the graphics processor can refresh per second. The number of frames required to finish rendering the three-dimensional model of the object to be rendered can therefore be determined according to the display requirements of the actual virtual scene (which may include the object to be rendered); this is not limited in the present application.
Step S12: detecting that a first pre-rendered image matching the first rendering view angle information exists;
In line with the inventive concept described above, the present application pre-renders the image of the current frame during the image rendering of one of its historical frames (referred to as the first historical frame): after the second rendering view angle information of the first historical frame is obtained, the first pre-rendering view angle information of the current frame is derived from it, the model data of the object to be rendered is rendered according to that first pre-rendering view angle information to obtain the first pre-rendered image of the current frame, and the image is stored.
In this way, once the first rendering view angle information actually acquired for the current frame is available, the stored pre-rendered images can be searched for a first pre-rendered image matching it. If one exists, the image rendering step can be skipped and the target rendered image of the current frame obtained quickly and accurately as described in the following steps. If none exists, the model data of the object to be rendered is rendered according to the first rendering view angle information to obtain the target rendered image of the current frame.
In practice, if no first pre-rendered image currently exists, the previous frame may have been the last of the virtual reality pre-rendered frames. In that case the pre-rendering view angle information of further future frames can be derived from the first rendering view angle information and the corresponding pre-rendered images rendered; the specific implementation is described in the corresponding embodiment below.
The virtual reality pre-rendered frame count refers to the maximum number of frames the server renders ahead without displaying them, storing several frames of rendered images in advance to improve frame rate stability; the value of this maximum is not limited here. Pre-rendering keeps the presented picture consistent with head rotation, reducing the dizziness and discomfort of wearing a head-mounted electronic device, and pre-rendering for the movement of other body parts likewise improves the user's immersion in the virtual reality scene.
Step S13: obtaining the target rendered image of the current frame from the first pre-rendered image.
In general, the first pre-rendering view angle information of the current frame obtained in advance differs somewhat from the first rendering view angle information actually acquired, so the first pre-rendered image obtained from the former may differ from the rendered image the current frame actually requires; the first pre-rendered image then needs to be corrected to obtain the target rendered image of the current frame.
It should be understood that, compared with the time taken in the prior art to render the target image of the current frame from the first rendering view angle information, directly retrieving and correcting the first pre-rendered image takes much less time. The process provided in this embodiment therefore improves the efficiency of obtaining the rendered image and, compared with existing pre-rendering approaches, ensures the reliability and accuracy of the obtained image.
In some embodiments, if the first pre-rendering view angle information obtained in advance is identical to the actually acquired first rendering view angle information, no correction is needed and the first pre-rendered image can be determined directly as the target rendered image of the current frame. A minimal sketch of this control flow follows.
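As a minimal sketch of the flow of steps S11 to S13, assume the pre-rendered images are held in a dictionary keyed by pre-rendering view angle; `render` and the exact-match lookup are illustrative stand-ins, not the patented implementation (the threshold-based comparison of FIG. 6 is sketched further below).

```python
def handle_current_frame(view_angle, prerendered, render):
    """Steps S11-S13: serve the current frame from the pre-render store when possible.

    view_angle: the first rendering view angle information actually acquired.
    prerendered: dict mapping pre-rendering view angles to pre-rendered images.
    render: callable standing in for a full rendering pass from the model data.
    """
    # Step S12: detect whether a first pre-rendered image matching the
    # acquired view angle exists (exact match here, for brevity).
    image = prerendered.get(view_angle)
    if image is not None:
        # Step S13: the matching pre-rendered image directly yields the
        # target rendered image; no rendering operation is needed.
        return image
    # No match: fall back to a full rendering pass (step S24 in FIG. 4).
    return render(view_angle)
```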
In summary, after obtaining the second rendering view angle information of the first historical frame while its rendered image is being produced, the server predicts from it the pre-rendering view angle information of future frames (future relative to the first historical frame, possibly including the current frame) and generates and stores the corresponding pre-rendered images. Thus, after the first rendering view angle information of the current frame of the object to be rendered is acquired, no model rendering from that information is needed: the first pre-rendered image matching it can be detected directly among the previously stored pre-rendered images, and the target rendered image of the current frame obtained accurately from it. This greatly shortens the rendering interval from the previous frame, reduces the rendering delay between adjacent frames, improves image rendering efficiency, and further improves the user's experience of the virtual reality scene.
Referring to FIG. 4, a schematic flowchart of another optional example of the image rendering control method provided in an embodiment of the present application, the method may still be applied to a server. As shown in FIG. 4, the image rendering control method provided in this embodiment may include:
Step S21: acquiring first rendering view angle information of a current frame of an object to be rendered;
Step S22: detecting whether a first pre-rendered image matching the first rendering view angle information exists; if yes, proceeding to step S23, and if not, executing step S24;
Step S23: obtaining the target rendered image of the current frame from the first pre-rendered image;
For the process of obtaining the target rendered image of the current frame when the first pre-rendered image has been stored in advance, reference may be made to the corresponding parts of the above embodiments; it is not repeated here.
Step S24: rendering with the model data of the object to be rendered according to the first rendering view angle information to obtain the target rendered image of the current frame;
It should be understood that the data to be rendered of the three-dimensional model differ between rendering view angles, being the model data required to render the model picture at the corresponding angle, so the rendered images obtained after cloud rendering differ accordingly. The rendering means used at different view angles are nevertheless similar: the three-dimensional model data are generally projected onto a two-dimensional plane to obtain the corresponding two-dimensional projection point data and thereby form the rendered image on that plane.
Step S25: before outputting the target rendered image of the current frame, obtaining second pre-rendering view angle information of at least one future frame from the first rendering view angle information;
In line with the inventive concept described above, if no first pre-rendered image is detected for the current frame, the pre-rendering view angle information obtained for future frames during the historical frame's rendering, together with the corresponding pre-rendered images, may already have been used up; that is, the number of frames pre-rendered in the historical frame is not greater than the frame distance between the current frame and that historical frame. This also means that no pre-rendered image for a future frame currently exists and pre-rendering must continue.
Of course, in practical applications it is not necessary to pre-render the next frames immediately after each pre-rendered image is consumed; the pre-rendered images of future frames may instead be predicted again, in the manner described in this embodiment, after several stored pre-rendered images have been consumed.
Based on the above analysis, after the first rendering view angle information of the current frame is obtained, the second pre-rendering view angle information of at least one future frame may be predicted directly; the prediction may be implemented, for example, according to a kinematic model or the resources the server can allocate.
As described in step S25, this embodiment may start predicting the second pre-rendering view angle information of future frames before transmitting the target rendered image of the current frame to the electronic device. Specifically, the prediction may run while the image is rendered in the manner above after the first rendering view angle information is acquired, or while the rendered image of the current frame is compressed, thereby reducing the rendering delay between adjacent frames.
Step S26: rendering with the model data according to the second pre-rendering view angle information of the at least one future frame to obtain the second pre-rendered image of the corresponding future frame;
How to render with the model data according to the second pre-rendering view angle information to obtain the corresponding pre-rendered image is similar to step S24 above and is not detailed here.
Step S27: storing the second pre-rendering view angle information of the at least one future frame in association with the corresponding second pre-rendered image.
The specific storage form of the predicted second pre-rendering view angle information of the at least one future frame and the second pre-rendered image of the corresponding frame is not limited; a storage table, for example, may be used.
As described in the above embodiment, when a future frame arrives, i.e., becomes the current frame, the rendering view angle information actually collected in that frame is used, in the manner above, to retrieve from the stored second pre-rendered images the one whose second pre-rendering view angle information matches, and the target rendered image of that frame is determined from it without performing any image rendering operation for that frame; for details see the corresponding parts of the above embodiment. A sketch of this predict-render-store loop follows.
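A sketch of steps S25 to S27 under assumed interfaces: `predict_future_angles` stands in for the prediction refined in FIG. 7, and the association of step S27 is modeled as a plain dictionary shared with the lookup sketched earlier.

```python
def prerender_future_frames(view_angle, predict_future_angles, render, store):
    """Steps S25-S27: predict future view angles, render them, store the pairs.

    store: dict associating each second pre-rendering view angle with its
    second pre-rendered image (the association of step S27).
    """
    # Step S25: predict second pre-rendering view angle information for at
    # least one future frame, before the current frame's image is output.
    for angle in predict_future_angles(view_angle):
        # Step S26: render the corresponding future frame from the model data.
        store[angle] = render(angle)
    return store
```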
Specifically, referring to the flowchart of the image rendering control method shown in FIG. 5 and taking the current frame as the k-th frame as an example: in the prior art before improvement (the flow above the improvement arrow in FIG. 5), every frame must perform an image rendering operation, giving each frame a longer delay relative to the previous one. In the improved scheme of the present application, after the server receives the rendering view angle information of the k-th frame, it can predict the pre-rendered image of a future frame as described above. Taking the (k+1)-th frame as an example, its pre-rendering view angle information is predicted from the rendering view angle information of the k-th frame, and its pre-rendered image is then obtained through a rendering operation.
It should be noted that the rendering of the k-th frame is not limited to pre-rendering only the (k+1)-th frame image; the pre-rendered images of the next i frames may also be produced as needed. The implementation is similar to the process above for the (k+1)-th frame and is not detailed in this application.
In summary, in this embodiment, after the first rendering view angle information of the current frame of the object to be rendered, i.e., the actually collected rendering view angle information, is obtained, it can be detected whether a matching first pre-rendered image is currently stored. If so, the first pre-rendered image is used directly to obtain the target rendered image of the current frame quickly, saving the time of the image rendering operation and improving model rendering efficiency. If not, the model data are rendered according to the first rendering view angle information to obtain the target rendered image, ensuring that no frame goes unrendered. Moreover, before the rendered target image of the current frame is output, this embodiment predicts the second pre-rendering view angle information of at least one future frame from the first rendering view angle information, renders the corresponding model data to obtain the second pre-rendered image of the corresponding future frame, and stores it, so that when the future frame arrives the matching second pre-rendered image can again be retrieved to obtain its target rendered image. This effectively reduces the overall rendering delay and improves the rendering efficiency of the entire three-dimensional model of the object to be rendered.
Referring to FIG. 6, a schematic flowchart of yet another optional example of the image rendering control method provided in an embodiment of the present application, this embodiment may be an optional detailed implementation of the method described in the above embodiments, though not the only one. As shown in FIG. 6, the method may include:
Step S31: acquiring first rendering view angle information of a current frame of an object to be rendered;
Step S32: retrieving the pieces of pre-rendering view angle information obtained from the second rendering view angle information acquired in the first historical frame;
For the process of obtaining and storing the pre-rendering view angle information and corresponding pre-rendered images of the multiple future frames predicted from the first historical frame, see the corresponding parts of the above embodiments; it is not repeated here.
Step S33: comparing the first rendering view angle information with each retrieved piece of pre-rendering view angle information;
Step S34: determining the pre-rendering view angle information whose comparison result satisfies the condition as the first pre-rendering view angle information;
in practical application of this embodiment, due to the fact that the predicted pre-rendering visual angle information may have a certain difference from the rendering visual angle information actually acquired by the corresponding frame, in this case, an actual target rendering image may be obtained by calibrating the pre-rendering image, and in order to reduce the calibration workload and improve the reliability of the calibrated target rendering image, the pre-rendering visual angle information having a smaller difference from the actually acquired rendering visual angle information may be rendered as the first pre-rendering visual angle information.
Therefore, in the embodiment of the present application, a difference between the first rendering perspective information and each of the retrieved pre-rendering perspective information may be obtained, and if the difference is smaller than a perspective threshold (which is a relatively small numerical value, but the specific numerical value is not limited), it may be considered that a comparison result between the first rendering perspective information and the corresponding pre-rendering perspective information satisfies a condition, and the pre-rendering perspective information satisfying the condition is determined as the first pre-rendering perspective information, that is, the first pre-rendering perspective information matched with the first rendering perspective information.
Step S35: acquiring the first pre-rendered image stored in association with the first pre-rendering view angle information;
Step S36: obtaining the target rendered image of the current frame from the first pre-rendered image.
Since the pieces of pre-rendering view angle information, the pre-rendered images, and the associations between them were stored during the first historical frame, once the first pre-rendering view angle information matching the first rendering view angle information of the current frame is determined, the pre-rendered image stored in association with it, i.e., the first pre-rendered image, can be retrieved through that association, and the target rendered image of the current frame then obtained from it directly and quickly; the details are not repeated.
Thus, by acquiring the first rendering view angle information of the current frame of the object to be rendered, comparing it with the predicted pieces of pre-rendering view angle information, selecting the first pre-rendering view angle information whose comparison result satisfies the condition, and retrieving the first pre-rendered image stored in association with it, the target rendered image of the current frame is obtained directly and quickly without a rendering operation, reducing the rendering delay of the current frame, improving model rendering efficiency, and further improving the user experience. A sketch of this threshold-based matching follows.
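The comparison of steps S32 to S35 could look like the sketch below; the Euclidean view-angle difference and the threshold value are assumptions, since the disclosure only requires a comparison result that satisfies a condition.

```python
import math

VIEW_ANGLE_THRESHOLD = 0.01  # assumed tolerance; the disclosure leaves it open

def view_angle_difference(a, b):
    """Illustrative difference between two equal-length view-angle tuples."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def find_first_prerendered(view_angle, prerendered):
    """Steps S32-S35: return the stored (pre-rendering view angle, image) pair
    whose comparison with the acquired view angle satisfies the condition,
    preferring the smallest difference, or None when nothing matches."""
    best_angle = None
    best_diff = None
    for pre_angle in prerendered:                            # step S32
        diff = view_angle_difference(view_angle, pre_angle)  # step S33
        if diff < VIEW_ANGLE_THRESHOLD and (best_diff is None or diff < best_diff):
            best_angle, best_diff = pre_angle, diff          # step S34
    if best_angle is None:
        return None
    return best_angle, prerendered[best_angle]               # step S35
```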
In some embodiments, following the above analysis, the rendering view angle information actually acquired for a frame often differs somewhat from the pre-rendering view angle information predicted for it, so the corresponding pre-rendered image differs from the frame's actual target rendered image and needs to be corrected, for example by asynchronous time warp (ATW).
ATW is a technique for generating intermediate frames. It can correct image frames and is applied in the virtual reality field to reduce problems such as image judder in a virtual reality scene and scene rendering delay caused by overly fast motion.
In practice, when the first pre-rendering view angle information differs from the first rendering view angle, or the difference exceeds an error threshold (i.e., the allowable error), the server may use ATW to correct the first pre-rendered image according to the first pre-rendering view angle information and obtain the target rendered image of the current frame. Correcting the two-dimensional pre-rendered image with the per-degree-of-freedom pose changes contained in the difference does not consume excessive system resources, avoids re-rendering the complex scene, and can generate a new image frame with little computation.
Of course, in practice, when the first pre-rendering view angle information differs from the first rendering view angle, or the difference exceeds the error threshold, the server may instead send the first pre-rendering view angle and the target rendered image (at this point a pre-rendered image) to the electronic device, so that the device itself corrects the image according to the first pre-rendering view angle information using ATW. Compared with the existing rendered-image return flow, the difference is that not only the rendered image but also the corresponding pre-rendering view angle information must be transmitted; as with the pre-rendering technique described above, compression may be applied before transmission, the details of which are not repeated. The disclosure leaves the correction math open; an illustrative approximation follows.
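Purely as an illustration of the idea, the sketch below approximates a time-warp correction by shifting the pre-rendered image according to the yaw/pitch difference; a real ATW implementation reprojects with the full rotation on the GPU, and the field-of-view defaults here are assumptions of this sketch.

```python
import numpy as np

def approximate_timewarp(image, pre_angle, actual_angle, fov_x=1.57, fov_y=1.57):
    """Crude ATW stand-in: translate the image by the yaw/pitch delta.

    image: H x W x C array rendered at pre_angle; angles are (yaw, pitch, roll)
    tuples in radians. Valid only for small differences; roll is ignored here.
    """
    h, w = image.shape[:2]
    d_yaw = actual_angle[0] - pre_angle[0]
    d_pitch = actual_angle[1] - pre_angle[1]
    # Small-angle approximation: radians to pixels via the field of view.
    dx = int(round(d_yaw * w / fov_x))
    dy = int(round(d_pitch * h / fov_y))
    warped = np.zeros_like(image)
    src = image[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    warped[max(0, dy):max(0, dy) + src.shape[0],
           max(0, dx):max(0, dx) + src.shape[1]] = src
    return warped
```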
Referring to FIG. 7, a schematic flow diagram of another optional example of the image rendering control method provided in an embodiment of the present application, this embodiment may be a further optional detailed implementation of the method described above. It mainly refines how the second pre-rendering view angle information of at least one future frame is obtained from the rendering view angle information actually acquired for the current frame, taking that information to be a multi-degree-of-freedom rendering view angle, such as one corresponding to the 3dof/6dof spatial poses above. As shown in FIG. 7, the implementation may include:
Step S41: obtaining the inter-frame multi-degree-of-freedom rendering motion direction from the multi-degree-of-freedom rendering view angles of a plurality of historical frames;
In this embodiment, the motion direction of the electronic device worn by the user, i.e., the inter-frame multi-degree-of-freedom rendering motion direction, is determined from the changes in the multi-degree-of-freedom rendering view angles of the historical frames and serves as the prediction direction for the subsequently predicted rendering view angles; the details are not repeated.
Step S42: acquiring the multi-dimensional spherical space range formed by the first multi-degree-of-freedom rendering view angle of the current frame;
Step S43: sampling data within the multi-dimensional spherical space range based on the multi-degree-of-freedom rendering motion direction, and determining at least one multi-degree-of-freedom pre-rendering view angle as the second multi-degree-of-freedom rendering view angle of at least one future frame.
Referring to the multi-degree-of-freedom spherical space diagram shown in FIG. 8, a corresponding multi-dimensional spherical space range, such as a 3- or 6-dimensional one, can be determined from the change of each degree-of-freedom spatial pose contained in the multi-degree-of-freedom rendering view angle between two adjacent frames; the construction is not detailed here. Then, given the limits of multi-degree-of-freedom motion between two adjacent frames, the second multi-degree-of-freedom rendering view angle of at least one future frame can be obtained directly by sampling data within the multi-dimensional spherical space range of the current frame.
The smaller the sampling interval, the easier it is to obtain the rendered image of a future frame quickly and reliably; the sampling interval is not limited by this application and can be adjusted to the actual situation. A sketch of this sampling appears below.
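A sketch of steps S41 to S43 under simplifying assumptions: view angles are numeric vectors, the inter-frame motion direction is estimated from the last two historical frames only, and candidates are sampled along that direction within an assumed spherical radius.

```python
import numpy as np

def predict_future_angles(history, radius=0.05, num_samples=8):
    """Steps S41-S43: sample candidate multi-degree-of-freedom pre-rendering
    view angles within the spherical range around the current view angle.

    history: at least two past view angles as equal-length sequences, oldest
    first; radius and num_samples are illustrative assumptions.
    """
    current = np.asarray(history[-1], dtype=float)
    # Step S41: inter-frame multi-degree-of-freedom rendering motion direction.
    direction = current - np.asarray(history[-2], dtype=float)
    norm = np.linalg.norm(direction)
    direction = direction / norm if norm > 0 else np.zeros_like(current)
    # Steps S42-S43: sample within the spherical range centered on the current
    # view angle, biased along the estimated motion direction.
    steps = np.linspace(radius / num_samples, radius, num_samples)
    return [tuple(current + s * direction) for s in steps]
```

Returning tuples keeps the sampled view angles hashable, so they can key the pre-render store used in the earlier sketches.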
In still other embodiments, following the same inventive concept of obtaining the second pre-rendering view angle information of at least one future frame from the rendering view angle information actually collected for the current frame, the multi-dimensional spherical space range of the multi-degree-of-freedom rendering view angle of the next future frame (i.e., the next adjacent frame) may first be predicted from the first multi-degree-of-freedom rendering view angle of the current frame according to a kinematic model (e.g., the inter-frame multi-degree-of-freedom rendering motion direction obtained above), and data then sampled within that range to obtain the second multi-degree-of-freedom pre-rendering view angle of at least one future frame; the details are not repeated in this application.
In the above embodiments, the step of sampling data within the multi-dimensional spherical space range to obtain the second multi-degree-of-freedom pre-rendering view angle of at least one future frame may be executed adaptively according to the cloud computing resources, with multiple pre-rendering view angles obtained by sampling. Specifically, the schedulable resource information of the server may be acquired, and data sampled within the multi-dimensional spherical space range based on it to obtain the second multi-degree-of-freedom pre-rendering view angle of at least one future frame, as in the small sketch below.
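The resource-adaptive sampling could be as simple as scaling the sample count by the fraction of schedulable capacity; the metric and the bounds below are assumptions, not values from the disclosure.

```python
def adaptive_sample_count(free_capacity, max_samples=16, min_samples=1):
    """Scale the number of second pre-rendering view angles to spare capacity.

    free_capacity: assumed 0.0-1.0 measure of the server's schedulable
    resources; the disclosure does not prescribe a specific metric.
    """
    return max(min_samples, min(max_samples, round(free_capacity * max_samples)))
```

For example, `predict_future_angles(history, num_samples=adaptive_sample_count(0.5))` would sample fewer candidates when the server is half loaded, matching the idea that the number of pre-rendering view angles varies with the schedulable resource information.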
In summary, in predicting the possible rendering view angle of the next frame, the embodiments of the present application may take into account the limits on changes of the multi-degree-of-freedom rendering view angle between adjacent frames, sampling the multi-dimensional space range under the multi-degree-of-freedom rendering view angle of the current frame or the predicted next frame along the inter-frame motion direction, to obtain the second multi-degree-of-freedom rendering view angle of at least one future frame. The images of future frames are thus pre-rendered in advance for retrieval when those frames arrive, improving model rendering efficiency.
Referring to fig. 9, a schematic structural diagram of an optional example of an image rendering control apparatus provided in the embodiment of the present application, the apparatus may be applied to the server, and as shown in fig. 9, the image rendering control apparatus may include:
a rendering perspective information obtaining module 210, configured to obtain first rendering perspective information of a current frame of an object to be rendered;
a detection module 220, configured to detect that there is a first pre-rendered image matching the first rendering perspective information;
the first pre-rendering image is obtained by utilizing the model data of the object to be rendered to perform image rendering according to first pre-rendering visual angle information, and the first pre-rendering visual angle information is obtained according to second rendering visual angle information acquired in a first historical frame.
In some embodiments, the detection module 220 may include:
a pre-rendering perspective calling unit, configured to call the pre-rendering perspective information obtained according to the second rendering perspective information collected in the first historical frame, so as to obtain a plurality of pieces of pre-rendering perspective information;
a perspective comparison unit, configured to compare the first rendering perspective information with each piece of the called pre-rendering perspective information;
a pre-rendering perspective determining unit, configured to determine the pre-rendering perspective information whose comparison result satisfies a condition as the first pre-rendering perspective information;
and a pre-rendered image obtaining unit, configured to obtain the first pre-rendered image stored in association with the first pre-rendering perspective information.
and a target rendering image obtaining module 230, configured to obtain the target rendering image of the current frame according to the first pre-rendered image.
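Taken together, the detection module's units implement a match-then-fetch flow before the target rendering image is assembled. The sketch below is a non-authoritative illustration, assuming perspectives are compared by Euclidean distance against a threshold; the function name, the threshold value, the matching condition, and the dictionary-based store are all assumptions made for illustration:

```python
import numpy as np

def find_matching_prerender(first_view, prerender_store, threshold=0.02):
    """Compare the first rendering perspective against every stored
    pre-rendering perspective, determine the one whose comparison result
    satisfies the condition (closest within the threshold), and fetch the
    pre-rendered image stored in association with it.

    prerender_store : dict mapping a 6-DoF perspective tuple -> rendered image
    """
    best_view, best_dist = None, threshold
    for view in prerender_store:
        dist = float(np.linalg.norm(np.asarray(first_view) - np.asarray(view)))
        if dist < best_dist:
            best_view, best_dist = view, dist
    if best_view is None:
        return None, None  # no match: fall back to rendering from scratch
    return best_view, prerender_store[best_view]
```

If the lookup returns no match, the flow falls through to the image rendering module 240 described below.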
In one possible implementation, the target rendering image obtaining module 230 may include:
an image correction unit, configured to, in a case where the first pre-rendering perspective information is different from the first rendering perspective information, correct the first pre-rendered image according to the first pre-rendering perspective information to obtain the target rendering image of the current frame.
In another possible implementation manner, the apparatus may further include:
a data transmission module, configured to, in a case where the first pre-rendering perspective information is different from the first rendering perspective information, send the first pre-rendering perspective information and the target rendering image to the electronic device, so that the electronic device corrects the target rendering image according to the first pre-rendering perspective information.
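Whether performed by the server-side image correction unit or by the electronic device after transmission, the correction compensates for the difference between the pre-rendering perspective and the actually collected perspective. A deliberately crude sketch follows, assuming a rotation-only horizontal correction in the spirit of time-warp; the fixed field of view, the yaw-only handling, and the pixel-shift approximation are all simplifying assumptions, not the disclosed correction:

```python
import numpy as np

def correct_prerendered_image(image, prerender_yaw_deg, actual_yaw_deg, h_fov_deg=90.0):
    """Shift a pre-rendered frame horizontally so its content lines up with
    the actually collected viewing direction; only yaw is handled here, and
    a real corrector would also re-project pitch, roll, and translation.

    image : (H, W, C) uint8 array rendered at prerender_yaw_deg
    """
    h, w = image.shape[:2]
    px_per_degree = w / h_fov_deg
    shift = int(round((actual_yaw_deg - prerender_yaw_deg) * px_per_degree))
    return np.roll(image, -shift, axis=1)  # crude: content wraps at the border
```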
In some embodiments, as shown in fig. 10, on the basis of the above embodiments, the image rendering control apparatus may further include:
an image rendering module 240, configured to, upon detecting that there is no first pre-rendered image matching the first rendering perspective information, perform image rendering with the model data according to the first rendering perspective information to obtain the target rendering image of the current frame;
a pre-rendering perspective obtaining module 250, configured to obtain second pre-rendering perspective information of at least one future frame according to the first rendering perspective information before the target rendering image of the current frame is output.
In a possible implementation manner, if the rendering perspective information is a multi-degree-of-freedom rendering perspective, the pre-rendering perspective obtaining module 250 may include:
a multi-degree-of-freedom rendering motion direction obtaining unit, configured to obtain the inter-frame multi-degree-of-freedom rendering motion direction by using the respective multi-degree-of-freedom rendering perspectives of a plurality of historical frames;
a multi-dimensional spherical spatial range obtaining unit, configured to obtain the multi-dimensional spherical spatial range formed by the first multi-degree-of-freedom rendering perspective of the current frame;
and a first data sampling unit, configured to perform data sampling within the multi-dimensional spherical spatial range based on the multi-degree-of-freedom rendering motion direction, and to determine at least one multi-degree-of-freedom pre-rendering perspective as the second multi-degree-of-freedom rendering perspective of at least one future frame.
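The history-based motion direction used by this first variant could, as a purely illustrative assumption, be estimated by averaging frame-to-frame 6-DoF deltas over a window of historical frames; the function name and the averaging scheme are hypothetical:

```python
import numpy as np

def estimate_motion_direction(history_poses):
    """Estimate the inter-frame multi-degree-of-freedom rendering motion
    direction as the mean of successive 6-DoF perspective deltas over a
    window of historical frames."""
    poses = np.asarray(history_poses, dtype=float)  # shape (N, 6), N >= 2
    deltas = np.diff(poses, axis=0)                 # frame-to-frame changes
    return deltas.mean(axis=0)                      # average motion per frame
```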
In another possible implementation manner, if the rendering perspective information is a multi-degree-of-freedom rendering perspective, the pre-rendering perspective obtaining module 250 may instead include:
a multi-dimensional spherical spatial range prediction unit, configured to predict the multi-dimensional spherical spatial range of the multi-degree-of-freedom rendering perspective of the next future frame based on the first multi-degree-of-freedom rendering perspective of the current frame;
and a second data sampling unit, configured to perform data sampling within the multi-dimensional spherical spatial range to obtain the second multi-degree-of-freedom pre-rendering perspective of at least one future frame.
For either of these possible embodiments, the first data sampling unit and/or the second data sampling unit may include:
a schedulable resource information obtaining unit, configured to obtain the schedulable resource information of the server;
and a multi-degree-of-freedom pre-rendering perspective obtaining unit, configured to perform data sampling within the multi-dimensional spherical spatial range based on the schedulable resource information, to obtain the second multi-degree-of-freedom pre-rendering perspective of at least one future frame;
wherein the number of second multi-degree-of-freedom pre-rendering perspectives may change as the schedulable resource information changes.
a pre-rendering module 260, configured to perform image rendering with the model data according to the second pre-rendering perspective information of the at least one future frame, to obtain the second pre-rendered image of the corresponding future frame;
and a storage module 270, configured to store the second pre-rendering perspective information of the at least one future frame in association with the corresponding second pre-rendered image.
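For the storage module, one simple association scheme, assumed purely for illustration, keys the stored image by a rounded 6-DoF tuple so that the matching step sketched earlier can look it up; the function name and the rounding precision are hypothetical:

```python
def store_prerender(prerender_store, view, image, precision=3):
    """Store a second pre-rendered image in association with its second
    pre-rendering perspective, using a rounded 6-DoF tuple as the key."""
    key = tuple(round(float(v), precision) for v in view)
    prerender_store[key] = image
    return key
```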
It should be noted that the various modules and units in the above apparatus embodiments may be stored in the memory as program modules, and the processor executes the program modules stored in the memory to implement the corresponding functions. For the functions implemented by the program modules and their combinations, and for the technical effects achieved, reference may be made to the description of the corresponding parts of the above method embodiments, which is not repeated in this embodiment.
The present application also provides a storage medium on which a computer program may be stored, where the computer program may be called and loaded by a processor to implement the steps of the image rendering control method described in the above embodiments.
Referring to fig. 11, which illustrates the hardware structure of an optional example of a server for implementing the image rendering control method provided in the present application, the server may include a communication interface 31, a memory 32, and a processor 33, where:
the number of each of the communication interface 31, the memory 32, and the processor 33 may be at least one, and the communication interface 31, the memory 32, and the processor 33 may be connected to a communication bus so as to exchange data with one another through the communication bus; the specific implementation process may be determined according to the requirements of the specific application scenario and is not detailed herein.
The communication interface 31 may include an interface capable of data interaction over a wireless communication network, such as an interface of a communication module like a WIFI module or a 5G/6G (fifth-generation/sixth-generation mobile communication network) module. The communication interface 31 may also include a data interface, such as a USB interface or a serial/parallel interface, for data interaction between internal components of the server. The specific content included in the communication interface 31 is not limited in the present application.
In the present embodiment, the memory 32 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device or other non-volatile solid-state storage device. The processor 33 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device.
In practical applications of this embodiment, the memory 32 may be used to store a program implementing the image rendering control method described in any of the above method embodiments, and the processor 33 may load and execute the program stored in the memory 32 to implement the steps of the image rendering control method provided in any of the above method embodiments of the present application. For the specific implementation process, reference may be made to the description of the corresponding parts of the corresponding embodiments above, which is not repeated here.
It should be understood that the structure of the server shown in fig. 11 does not limit the server in the embodiments of the present application; in practical applications, the server may include more or fewer components than shown in fig. 11, or some components may be combined, which is not specifically described herein.
The embodiments in the present description are described in a progressive or parallel manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may be referred to one another. Since the apparatus and the server disclosed in the embodiments correspond to the methods disclosed in the embodiments, their description is relatively brief, and for relevant details reference may be made to the description of the method parts.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image rendering control method, the method comprising:
acquiring first rendering perspective information of a current frame of an object to be rendered;
detecting that there is a first pre-rendered image matching the first rendering perspective information, wherein the first pre-rendered image is obtained by performing image rendering with model data of the object to be rendered according to first pre-rendering perspective information, and the first pre-rendering perspective information is obtained according to second rendering perspective information collected in a first historical frame; and
obtaining a target rendering image of the current frame according to the first pre-rendered image.
2. The method of claim 1, further comprising:
detecting that there is no first pre-rendered image matching the first rendering perspective information, and performing image rendering with the model data according to the first rendering perspective information to obtain the target rendering image of the current frame;
before outputting the target rendering image of the current frame, obtaining second pre-rendering perspective information of at least one future frame according to the first rendering perspective information;
performing image rendering with the model data according to the second pre-rendering perspective information of the at least one future frame to obtain a second pre-rendered image of the corresponding future frame; and
storing the second pre-rendering perspective information of the at least one future frame in association with the corresponding second pre-rendered image.
3. The method of claim 1 or 2, wherein the detecting that there is a first pre-rendered image matching the first rendering perspective information comprises:
calling the pre-rendering perspective information obtained according to the second rendering perspective information collected in the first historical frame, so as to obtain a plurality of pieces of pre-rendering perspective information;
comparing the first rendering perspective information with each piece of the called pre-rendering perspective information;
determining the pre-rendering perspective information whose comparison result satisfies a condition as the first pre-rendering perspective information; and
acquiring the first pre-rendered image stored in association with the first pre-rendering perspective information.
4. The method of claim 2, wherein the rendering perspective information is a multi-degree-of-freedom rendering perspective, and the obtaining second pre-rendering perspective information of at least one future frame according to the first rendering perspective information comprises:
obtaining an inter-frame multi-degree-of-freedom rendering motion direction by using the respective multi-degree-of-freedom rendering perspectives of a plurality of historical frames;
acquiring a multi-dimensional spherical spatial range formed by a first multi-degree-of-freedom rendering perspective of the current frame; and
performing data sampling within the multi-dimensional spherical spatial range based on the multi-degree-of-freedom rendering motion direction, and determining at least one multi-degree-of-freedom pre-rendering perspective as a second multi-degree-of-freedom rendering perspective of at least one future frame.
5. The method of claim 2, wherein the rendering perspective information is a multi-degree-of-freedom rendering perspective, and the obtaining second pre-rendering perspective information of at least one future frame according to the first rendering perspective information comprises:
predicting a multi-dimensional spherical spatial range of a multi-degree-of-freedom rendering perspective of a next future frame based on a first multi-degree-of-freedom rendering perspective of the current frame; and
performing data sampling within the multi-dimensional spherical spatial range to obtain a second multi-degree-of-freedom pre-rendering perspective of at least one future frame.
6. The method of claim 5, wherein the performing data sampling within the multi-dimensional spherical spatial range to obtain a second multi-degree-of-freedom pre-rendering perspective of at least one future frame comprises:
acquiring schedulable resource information of a server; and
performing data sampling within the multi-dimensional spherical spatial range based on the schedulable resource information to obtain the second multi-degree-of-freedom pre-rendering perspective of at least one future frame,
wherein the number of second multi-degree-of-freedom pre-rendering perspectives changes as the schedulable resource information changes.
7. The method of claim 3, wherein the obtaining a target rendering image of the current frame according to the first pre-rendered image comprises:
if the first pre-rendering perspective information is different from the first rendering perspective information, correcting the first pre-rendered image according to the first pre-rendering perspective information to obtain the target rendering image of the current frame.
8. The method of claim 3, further comprising:
if the first pre-rendering perspective information is different from the first rendering perspective information, sending the first pre-rendering perspective information and the target rendering image to an electronic device, so that the electronic device corrects the target rendering image according to the first pre-rendering perspective information.
9. An image rendering control apparatus, the apparatus comprising:
a rendering perspective information obtaining module, configured to obtain first rendering perspective information of a current frame of an object to be rendered;
a detection module, configured to detect that there is a first pre-rendered image matching the first rendering perspective information, wherein the first pre-rendered image is obtained by performing image rendering with model data of the object to be rendered according to first pre-rendering perspective information, and the first pre-rendering perspective information is obtained according to second rendering perspective information collected in a first historical frame; and
a target rendering image obtaining module, configured to obtain a target rendering image of the current frame according to the first pre-rendered image.
10. A server, the server comprising:
a communication interface;
a memory for storing a program for implementing the image rendering control method according to any one of claims 1 to 8; and
a processor for calling and loading the program in the memory to implement the steps of the image rendering control method according to any one of claims 1 to 8.
CN202010473505.5A 2020-05-29 2020-05-29 Image rendering control method and device and server Active CN111627116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010473505.5A CN111627116B (en) 2020-05-29 2020-05-29 Image rendering control method and device and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010473505.5A CN111627116B (en) 2020-05-29 2020-05-29 Image rendering control method and device and server

Publications (2)

Publication Number Publication Date
CN111627116A true CN111627116A (en) 2020-09-04
CN111627116B CN111627116B (en) 2024-02-27

Family

ID=72259202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010473505.5A Active CN111627116B (en) 2020-05-29 2020-05-29 Image rendering control method and device and server

Country Status (1)

Country Link
CN (1) CN111627116B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113316020A (en) * 2021-05-28 2021-08-27 上海曼恒数字技术股份有限公司 Rendering method, device, medium and equipment
CN113485776A (en) * 2021-08-02 2021-10-08 竞技世界(北京)网络技术有限公司 Entity processing method and device in multi-thread rendering
US20210360155A1 (en) * 2017-04-24 2021-11-18 Intel Corporation Object pre-encoding for 360-degree view for optimal quality and latency
CN113721874A (en) * 2021-07-29 2021-11-30 阿里巴巴(中国)有限公司 Virtual reality picture display method and electronic equipment
WO2022033389A1 (en) * 2020-08-11 2022-02-17 中兴通讯股份有限公司 Image processing method and apparatus, and electronic device and storage medium
CN114077508A (en) * 2022-01-19 2022-02-22 维塔科技(北京)有限公司 Remote image rendering method and device, electronic equipment and medium
WO2022063260A1 (en) * 2020-09-25 2022-03-31 华为云计算技术有限公司 Rendering method and apparatus, and device
CN114489538A (en) * 2021-12-27 2022-05-13 炫彩互动网络科技有限公司 Terminal display method of cloud game VR
CN115942049A (en) * 2022-08-26 2023-04-07 北京博雅睿视科技有限公司 VR video-oriented visual angle switching method, device, equipment and medium
WO2024055462A1 (en) * 2022-09-16 2024-03-21 如你所视(北京)科技有限公司 Vr scene processing method and apparatus, electronic device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106502427A (en) * 2016-12-15 2017-03-15 北京国承万通信息科技有限公司 Virtual reality system and its scene rendering method
WO2017131977A1 (en) * 2016-01-25 2017-08-03 Microsoft Technology Licensing, Llc Frame projection for augmented reality environments
CN107274472A (en) * 2017-06-16 2017-10-20 福州瑞芯微电子股份有限公司 A kind of method and apparatus of raising VR play frame rate
CN108171783A (en) * 2018-03-20 2018-06-15 联想(北京)有限公司 Image rendering method, system and electronic equipment
CN110136082A (en) * 2019-05-10 2019-08-16 腾讯科技(深圳)有限公司 Occlusion culling method, apparatus and computer equipment
CN110351480A (en) * 2019-06-13 2019-10-18 歌尔科技有限公司 Image processing method, device and electronic equipment for electronic equipment
CN111051959A (en) * 2017-09-01 2020-04-21 奇跃公司 Generating new frames using rendered and non-rendered content from previous perspectives

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017131977A1 (en) * 2016-01-25 2017-08-03 Microsoft Technology Licensing, Llc Frame projection for augmented reality environments
CN106502427A (en) * 2016-12-15 2017-03-15 北京国承万通信息科技有限公司 Virtual reality system and its scene rendering method
CN107274472A (en) * 2017-06-16 2017-10-20 福州瑞芯微电子股份有限公司 A kind of method and apparatus of raising VR play frame rate
CN111051959A (en) * 2017-09-01 2020-04-21 奇跃公司 Generating new frames using rendered and non-rendered content from previous perspectives
CN108171783A (en) * 2018-03-20 2018-06-15 联想(北京)有限公司 Image rendering method, system and electronic equipment
CN110136082A (en) * 2019-05-10 2019-08-16 腾讯科技(深圳)有限公司 Occlusion culling method, apparatus and computer equipment
CN110351480A (en) * 2019-06-13 2019-10-18 歌尔科技有限公司 Image processing method, device and electronic equipment for electronic equipment

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11800232B2 (en) * 2017-04-24 2023-10-24 Intel Corporation Object pre-encoding for 360-degree view for optimal quality and latency
US20210360155A1 (en) * 2017-04-24 2021-11-18 Intel Corporation Object pre-encoding for 360-degree view for optimal quality and latency
WO2022033389A1 (en) * 2020-08-11 2022-02-17 中兴通讯股份有限公司 Image processing method and apparatus, and electronic device and storage medium
WO2022063260A1 (en) * 2020-09-25 2022-03-31 华为云计算技术有限公司 Rendering method and apparatus, and device
CN113316020B (en) * 2021-05-28 2023-09-15 上海曼恒数字技术股份有限公司 Rendering method, device, medium and equipment
CN113316020A (en) * 2021-05-28 2021-08-27 上海曼恒数字技术股份有限公司 Rendering method, device, medium and equipment
CN113721874A (en) * 2021-07-29 2021-11-30 阿里巴巴(中国)有限公司 Virtual reality picture display method and electronic equipment
CN113485776A (en) * 2021-08-02 2021-10-08 竞技世界(北京)网络技术有限公司 Entity processing method and device in multi-thread rendering
CN113485776B (en) * 2021-08-02 2024-04-05 竞技世界(北京)网络技术有限公司 Method and device for processing entity in multithreading rendering
CN114489538A (en) * 2021-12-27 2022-05-13 炫彩互动网络科技有限公司 Terminal display method of cloud game VR
CN114077508A (en) * 2022-01-19 2022-02-22 维塔科技(北京)有限公司 Remote image rendering method and device, electronic equipment and medium
CN115942049A (en) * 2022-08-26 2023-04-07 北京博雅睿视科技有限公司 VR video-oriented visual angle switching method, device, equipment and medium
WO2024055462A1 (en) * 2022-09-16 2024-03-21 如你所视(北京)科技有限公司 Vr scene processing method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN111627116B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN111627116B (en) Image rendering control method and device and server
US10586395B2 (en) Remote object detection and local tracking using visual odometry
US20200112625A1 (en) Adaptive streaming of virtual reality data
US10659759B2 (en) Selective culling of multi-dimensional data sets
CN109743626B (en) Image display method, image processing method and related equipment
JP6131950B2 (en) Information processing apparatus, information processing method, and program
CN107911737B (en) Media content display method and device, computing equipment and storage medium
US20170155885A1 (en) Methods for reduced-bandwidth wireless 3d video transmission
EP3691280B1 (en) Video transmission method, server, vr playback terminal and computer-readable storage medium
CN115409940A (en) Terminal, receiving method, distributing device and distributing method
CN110996097B (en) VR multimedia experience quality determination method and device
CN111583350A (en) Image processing method, device and system and server
AU2018416431B2 (en) Head-mounted display and method to reduce visually induced motion sickness in a connected remote display
CN106375682B (en) Image processing method and device, movable equipment, unmanned aerial vehicle remote controller and system
CN109040525B (en) Image processing method, image processing device, computer readable medium and electronic equipment
KR102503337B1 (en) Image display method, apparatus and system
CN109978945B (en) Augmented reality information processing method and device
CN109766006B (en) Virtual reality scene display method, device and equipment
CN112073632A (en) Image processing method, apparatus and storage medium
CN115131528A (en) Virtual reality scene determination method, device and system
KR20210055381A (en) Apparatus, method and computer program for providing augmented reality contentes through smart display
US8619124B2 (en) Video data processing systems and methods
CN113515193A (en) Model data transmission method and device
JP2017183816A (en) Moving image distribution server, moving image output device, moving image distribution system, and moving image distribution method
WO2023079987A1 (en) Distribution device, distribution method, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant