CN109712224B - Virtual scene rendering method and device and intelligent device - Google Patents

Virtual scene rendering method and device and intelligent device

Info

Publication number
CN109712224B
Authority
CN
China
Prior art keywords
head
motion
angular velocity
displacement
determining
Prior art date
Legal status
Active
Application number
CN201811639195.9A
Other languages
Chinese (zh)
Other versions
CN109712224A (en)
Inventor
刘帅
杨宇
Current Assignee
Beihang University
Hisense Visual Technology Co Ltd
Original Assignee
Beihang University
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beihang University, Hisense Visual Technology Co Ltd filed Critical Beihang University
Priority to CN201811639195.9A
Publication of CN109712224A
Application granted
Publication of CN109712224B
Legal status: Active

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a virtual scene rendering method, a virtual scene rendering device and an intelligent device, belonging to the field of computer graphics. The method acquires the motion data collected at the current moment, determines the head pose based on the motion data, and determines whether head motion and/or body motion occurs. When it is determined based on the motion data that head motion and/or body motion occurs, a first scaling factor is obtained based on the motion data, and the first virtual scene to be rendered is rendered based on the head pose and a texture image scaled by the first scaling factor. In other words, when head motion or body motion is detected, a low-resolution texture can be used for rendering, which effectively reduces the amount of computation and improves rendering efficiency.

Description

Virtual scene rendering method and device and intelligent device
Technical Field
The present disclosure relates to the field of computer graphics, and in particular, to a virtual scene rendering method and apparatus and an intelligent device.
Background
The development of computer graphics has greatly facilitated the update iterations of industries such as games, movies, animation, computer-aided design and manufacturing, virtual reality, etc. In the field of computer graphics technology, simulation of the real world and rendering of virtual scenes have long been research hotspots. Specifically, rendering a virtual scene refers to constructing a scene model of the virtual scene and then drawing textures on the scene model, thereby obtaining a realistic three-dimensional virtual scene.
In the related art, when performing virtual scene rendering, the intelligent device can acquire inertial data collected by an IMU located at the head, perform pose calculation on the inertial data to obtain the head pose, determine a scene model of the virtual scene to be rendered according to the head pose, load a texture image matched with the scene model, and render the scene model of the virtual scene according to the loaded texture image.
When the virtual scene is rendered in this way and the texture of the scene model is complex, the amount of rendering is extremely large, so the computing power consumption of the GPU (Graphics Processing Unit) of the intelligent device is excessive and the rendering efficiency is low.
Disclosure of Invention
The embodiment of the application provides a virtual scene rendering method, device and intelligent equipment, which can be used for solving the problems of large GPU (graphics processing unit) calculation power consumption and low rendering efficiency during virtual scene rendering. The technical scheme is as follows:
in a first aspect, a method for rendering a virtual scene is provided, the method comprising:
acquiring motion data acquired at the current moment;
determining a head pose, and whether a head movement and/or a body movement occurs, based on the motion data;
when it is determined that head motion and/or body motion occurs based on the motion data, obtaining a first scaling factor based on the motion data;
and rendering the first virtual scene to be rendered based on the head pose and a texture image scaled by the first scaling factor.
Optionally, the motion data comprises angular velocity acquired by an inertial measurement unit IMU located at the head and position information acquired by a position tracking device;
the determining of the head pose, and whether a head movement and/or a body movement occurs, based on the movement data, comprises:
performing attitude calculation on the angular velocity to obtain the head pose;
determining body displacement based on the position information and historical position information, wherein the historical position information refers to position information acquired by the position tracking device at a first target time that precedes the current time by a preset duration;
judging whether the angular velocity is greater than an angular velocity threshold and judging whether the body displacement is greater than a displacement threshold;
if the angular velocity is greater than the angular velocity threshold, determining that the head movement occurs, and if the body displacement is greater than the displacement threshold, determining that the body movement occurs.
Optionally, the motion data includes angular velocity acquired by an IMU located at the head and position information acquired by a position tracking device;
the obtaining a first scaling factor based on the motion data includes:
determining a second scaling factor corresponding to the angular velocity based on a first corresponding relation between the stored angular velocity and the scaling factor, and determining a third scaling factor corresponding to the body displacement based on a second corresponding relation between the stored displacement and the scaling factor, wherein the body displacement is determined according to the position information and the historical position information;
and determining whichever of the second scaling factor and the third scaling factor has the larger value as the first scaling factor.
Optionally, the motion data includes angular velocity acquired by an IMU located at the head and position information acquired by a position tracking device;
the obtaining a first scaling factor based on the motion data includes:
determining a target head motion coefficient corresponding to the angular velocity based on a third corresponding relation between the stored angular velocity and the head motion coefficient, and determining a target body motion coefficient corresponding to a body displacement based on a fourth corresponding relation between the stored displacement and the body motion coefficient, wherein the body displacement is determined according to the position information and the historical position information;
Wherein each head motion coefficient in the third correspondence is greater than 1, and the head motion coefficients in the third correspondence are positively correlated with angular velocity, each body motion coefficient in the fourth correspondence is greater than 1, and the body motion coefficients and displacement in the fourth correspondence are positively correlated;
the first scaling factor is determined based on the target head motion coefficient and the target body motion coefficient.
Optionally, the rendering the first virtual scene to be rendered based on the head pose and the texture image of the first scaling factor includes:
determining a scene model of a first virtual scene to be rendered based on the head pose;
obtaining a texture image which is matched with the scene model and scaled by the first scaling factor;
and rendering the scene model based on the acquired texture image.
Optionally, after the rendering of the first virtual scene to be rendered based on the head pose and the texture image scaled by the first scaling factor, the method further includes:
predicting motion data at a second target moment based on the motion data to obtain predicted motion data, wherein the second target moment is after the current moment and is separated from the current moment by a preset time length;
Determining a predicted head pose at the second target instant based on the predicted motion data, and whether head motion and/or body motion occurs;
obtaining a fourth scaling factor based on the predicted motion data when it is determined that head motion and/or body motion occurs based on the predicted motion data;
and at the second target moment, rendering the second virtual scene to be rendered based on the predicted head pose and a texture image scaled by the fourth scaling factor.
In a second aspect, there is provided a rendering apparatus of a virtual scene, the apparatus comprising:
the first acquisition module is used for acquiring the motion data acquired at the current moment;
a first determination module for determining a head pose, and whether a head movement and/or a body movement occurs, based on the movement data;
a second acquisition module for acquiring a first scaling factor based on the motion data when it is determined that head motion and/or body motion occurs based on the motion data;
and the first rendering module is used for rendering the first virtual scene to be rendered based on the head pose and a texture image scaled by the first scaling factor.
Optionally, the motion data includes angular velocity acquired by an IMU located at the head and position information acquired by a position tracking device;
The first determining module is specifically configured to:
performing attitude calculation on the angular velocity to obtain the head pose;
determining body displacement according to the position information and historical position information, wherein the historical position information refers to position information acquired by the position tracking device at a first target time that precedes the current time by a preset duration;
judging whether the angular velocity is greater than an angular velocity threshold and judging whether the body displacement is greater than a displacement threshold;
if the angular velocity is greater than the angular velocity threshold, determining that the head movement occurs, and if the body displacement is greater than the displacement threshold, determining that the body movement occurs.
Optionally, the motion data includes angular velocity acquired by an IMU located at the head and position information acquired by a position tracking device;
the second obtaining module is specifically configured to:
determining a second scaling factor corresponding to the angular velocity based on a first corresponding relation between the stored angular velocity and the scaling factor, and determining a third scaling factor corresponding to the body displacement based on a second corresponding relation between the stored displacement and the scaling factor, wherein the body displacement is determined according to the position information and the historical position information;
and determining whichever of the second scaling factor and the third scaling factor has the larger value as the first scaling factor.
Optionally, the motion data includes angular velocity acquired by an IMU located at the head and position information acquired by a position tracking device;
the second obtaining module is specifically configured to:
determining a target head motion coefficient corresponding to the angular velocity based on a third corresponding relation between the stored angular velocity and the head motion coefficient, and determining a target body motion coefficient corresponding to a body displacement based on a fourth corresponding relation between the stored displacement and the body motion coefficient, wherein the body displacement is determined according to the position information and the historical position information;
wherein each head motion coefficient in the third correspondence is greater than 1, and the head motion coefficients in the third correspondence are positively correlated with angular velocity, each body motion coefficient in the fourth correspondence is greater than 1, and the body motion coefficients and displacement in the fourth correspondence are positively correlated;
the first scaling factor is determined based on the target head motion coefficient and the target body motion coefficient.
Optionally, the first rendering module is specifically configured to:
determining a scene model of the first virtual scene to be rendered based on the head pose;
obtaining a texture image which is matched with the scene model and scaled by the first scaling factor;
and rendering the scene model based on the acquired texture image.
Optionally, the apparatus further comprises:
the prediction module is used for predicting motion data at a second target moment based on the motion data to obtain predicted motion data, wherein the second target moment is after the current moment and is separated from the current moment by a preset time length;
a second determining module for determining, based on the predicted motion data, a predicted head pose at the second target moment, and whether a head motion and/or a body motion occurs;
a third acquisition module for acquiring a fourth scaling factor based on the predicted motion data when it is determined that head motion and/or body motion occurs based on the predicted motion data;
and the second rendering module is used for rendering a second virtual scene to be rendered based on the predicted head pose and a texture image scaled by the fourth scaling factor at the second target moment.
In a third aspect, there is provided a virtual scene rendering apparatus, the apparatus comprising:
a processor comprising a graphics processing unit (GPU);
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the methods of the first aspect above.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon instructions which, when executed by a processor, implement the steps of any of the methods of the first aspect described above.
The beneficial effects of the technical solutions provided by the embodiments of the present application include at least the following:
In the embodiments of the present application, the motion data collected at the current moment can be acquired, the head pose is determined based on the motion data, and it is determined whether head motion and/or body motion occurs; when it is determined that head motion and/or body motion occurs, a first scaling factor is obtained based on the motion data, and the first virtual scene to be rendered is rendered based on the head pose and a texture image scaled by the first scaling factor. That is, in the embodiments of the present application, when head motion or body motion is detected, the virtual scene may be rendered based on the scaled texture image, i.e., rendered with a low-resolution texture. Since human eyes do not require a high-resolution image while head motion or body motion occurs, rendering with the low-resolution texture image can effectively reduce the amount of computation and improve the rendering efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a virtual scene rendering method provided in an embodiment of the present application;
FIG. 2 is a flowchart of another method for rendering a virtual scene according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of a texture image at various zoom factors provided by an embodiment of the present application;
fig. 4 is a block diagram of a virtual scene rendering apparatus 400 according to an embodiment of the present application;
fig. 5 is a block diagram of a smart device 500 according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the embodiments of the present application in detail, application scenarios related to the embodiments of the present application are described.
Currently, in VR (Virtual Reality) or AR (Augmented Reality) technologies, rendering a virtual scene for a user at the high resolution required for strong immersion places extremely high demands on the processing capability of the GPU of the smart device. For users, low delay, a high frame rate and high image quality during rendering on the smart device are prerequisites for a good virtual reality experience. For example, for VR head-mounted display devices, a low resolution limits the field of view and results in a poor user experience. If the resolution of the VR head-mounted display device is increased, the GPU of the VR head-mounted display device is correspondingly required to have higher processing capability. At present, even a high-end GPU still cannot bring the optimal VR or AR experience to the user, so how to use the processing capability of the GPU effectively and thereby provide the user with high-quality VR or AR content that better matches human vision is a key problem. The rendering method provided by the embodiments of the present application can be applied to these scenarios: on the basis of a conventional rendering method, a judgment of the motion state is added, the rendering parameters are adjusted according to the judgment result, and then scene rendering and display are performed, so that the amount of computation of the GPU of the intelligent device is reduced while the user's requirement on image resolution is met, and the rendering efficiency is thereby improved.
Next, a specific implementation manner of the virtual scene rendering method provided in the embodiments of the present application will be described.
Fig. 1 is a flowchart of a virtual scene rendering method provided in an embodiment of the present application. The method may be used in a smart device, where the smart device may be a VR head-mounted display device with integrated image processing and display functions, and the smart device may include an IMU (Inertial Measurement Unit), or an IMU and a position tracking device. Alternatively, the smart device may be a terminal such as a cell phone, tablet, laptop, or desktop computer, and a VR or AR head-mounted display device may be connected to the smart device, wherein the connected VR or AR head-mounted display device includes an IMU, or an IMU and a position tracking device. As shown in fig. 1, the method comprises the following steps:
step 101: and acquiring the motion data acquired at the current moment.
The motion data collected at the current moment may include an angular velocity collected by the IMU, or include an angular velocity collected by the IMU and position information collected by the position tracking device. Wherein the IMU is located on the user's head and the position-tracking device is located on or outside the user's body, such as a laser position-tracking device or the like outside the user's body.
Step 102: based on the motion data, a head pose is determined, as well as whether head motion and/or body motion occurs.
The head pose may include pose data such as the center of gravity of the head and the tilt angle of the head; for example, the head pose may be head-down, head-up, tilted-left, tilted-right, or the like.
Step 103: when it is determined that head movement and/or body movement occurs based on the movement data, a first scaling factor is acquired based on the movement data.
Step 104: and rendering the first virtual scene to be rendered based on the head pose and a texture image scaled by the first scaling factor.
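As a rough illustration of how steps 101 to 104 fit together within one frame, the following C++ skeleton is a minimal sketch; all type and function names (MotionData, solveHeadPose, selectScalingFactor, and so on) are hypothetical and are not taken from the patent.

// Minimal per-frame sketch of steps 101-104; every name here is illustrative.
struct MotionData { float angularVelocity; float position[3]; };
struct HeadPose   { float pitch = 0, yaw = 0, roll = 0; };

MotionData readMotionData();                               // step 101: data from the IMU / position tracker
HeadPose   solveHeadPose(const MotionData& data);          // step 102: attitude solution
bool       detectMotion(const MotionData& data);           // step 102: did head and/or body motion occur?
int        selectScalingFactor(const MotionData& data);    // step 103: first scaling factor
void       renderScene(const HeadPose& pose, int scale);   // step 104: render with the scaled texture

void renderFrame() {
    MotionData data = readMotionData();      // step 101
    HeadPose pose = solveHeadPose(data);     // step 102
    int scalingFactor = 1;                   // 1 means the unscaled, full-resolution texture
    if (detectMotion(data)) {                // steps 102-103
        scalingFactor = selectScalingFactor(data);
    }
    renderScene(pose, scalingFactor);        // step 104
}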
In the embodiments of the present application, the motion data collected at the current moment can be acquired, the head pose is determined based on the motion data, and it is determined whether head motion and/or body motion occurs; when it is determined that head motion and/or body motion occurs, a first scaling factor is obtained based on the motion data, and the first virtual scene to be rendered is rendered based on the head pose and a texture image scaled by the first scaling factor. That is, in the embodiments of the present application, when head motion or body motion is detected, the virtual scene may be rendered based on the scaled texture image, i.e., scene rendering is performed with a low-resolution texture. Since human eyes do not require a high-resolution image while head motion or body motion occurs, rendering with the low-resolution texture image can effectively reduce the amount of computation and improve the rendering efficiency.
Fig. 2 is a flowchart of another virtual scene rendering method provided in an embodiment of the present application, where the method may be used in a smart device, where the smart device may be a VR head-mounted display device with integrated image processing functions and display functions, and the smart device may include an IMU, or an IMU and a position tracking device. Alternatively, the smart device may be a terminal such as a cell phone, tablet, laptop, desktop, etc., and the smart device may have a VR or AR head mounted display device connected thereto, wherein the connected VR or AR head mounted display device includes an IMU, or an IMU and a position tracking device. As shown in fig. 2, the method comprises the steps of:
step 201: and acquiring the motion data acquired at the current moment.
Wherein the motion data comprises inertial data acquired by the IMU or comprises inertial data acquired by the IMU and position information acquired by the position-tracking device. The IMU is located on the head of the user, and the position tracking device is located on or outside the body of the user and used for collecting position information of the body of the user.
The inertial data collected by the IMU includes at least angular velocity, but may also include acceleration, yaw angle, and the like. Wherein the angular velocity is integrated to obtain the device pose. In addition, the attitude deviation with respect to the direction of gravity can be corrected using the acceleration, i.e., the acceleration can be used to correct the angular deviation in the attitude. The yaw angle may be used to further correct the attitude deviation.
For example, the IMU may include a gyroscope for acquiring angular velocity. Further, the IMU may also include an accelerometer for acquiring acceleration and a magnetometer for acquiring yaw angle.
By way of example, the position-tracking device may be an optically position-tracking device, such as a laser position-tracking device, a Light House tracking kit, an Oculus's "constellation" tracking kit, a VR head-up tracking kit, and the like.
In practical application, if the intelligent device is integrated with the VR device, the intelligent device may collect the angular velocity through the IMU in the VR device. If the intelligent device does not comprise the VR device, the intelligent device can communicate with the VR device and acquire the angular velocity acquired by the IMU in the VR device at the current moment. It should be noted that, since the IMU is located at the head of the user, the angular velocity acquired by the IMU is actually the rotational velocity of the head of the user. In addition, if the intelligent device and the position tracking device are integrated together, the intelligent device can acquire the position information of the current moment through the position tracking device. If the intelligent device does not comprise the position tracking device, the intelligent device can communicate with the position tracking device and acquire the position information acquired by the position tracking device at the current moment. It should be noted that the position tracking device is used for tracking the position of the user, and the position information of the body of the user is collected.
In the embodiment of the application, the motion data acquired at the current moment can be acquired, the motion state of the head of the user is determined according to the acquired motion data, and then the rendering parameters are adjusted according to the motion state of the head, so that virtual scene rendering is performed for the user. Or determining the motion state of the head of the user and the motion state of the body of the user according to the acquired motion data, and further adjusting rendering parameters according to the motion states of the head and the body so as to render the virtual scene for the user.
Step 202: based on the motion data, a head pose is determined, as well as whether head motion and/or body motion occurs.
In this embodiment of the present application, after the motion data is obtained, pose calculation may be performed on the motion data to obtain the head pose of the user, and whether head motion and/or body motion occurs may be determined according to the motion data.
Specifically, the inertial data acquired by the IMU may be subjected to a pose solution to obtain the head pose. For example, the angular velocity acquired by the IMU is subjected to pose calculation to obtain the head pose, where performing the pose calculation on the angular velocity includes integrating the angular velocity.
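For illustration only, the following C++ sketch shows a naive way to integrate gyroscope angular velocity into a head orientation expressed as Euler angles; a practical attitude solution would typically use quaternions and fuse accelerometer and magnetometer data as described above. All names are assumptions.

#include <array>

struct EulerPose { double pitch = 0, yaw = 0, roll = 0; }; // radians

// Accumulate one gyroscope sample (rad/s) over the sampling interval dt (seconds).
// This simple per-axis integration is only an approximation of true attitude propagation.
void integrateAngularVelocity(EulerPose& pose,
                              const std::array<double, 3>& omega,
                              double dt)
{
    pose.pitch += omega[0] * dt;
    pose.yaw   += omega[1] * dt;
    pose.roll  += omega[2] * dt;
}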
In one embodiment, if the motion data includes an angular velocity acquired by the IMU, after the angular velocity is acquired, the smart device may perform a pose calculation on the angular velocity to obtain a head pose, and determine whether the angular velocity is greater than an angular velocity threshold, and if the angular velocity is greater than the angular velocity threshold, determine that head motion occurs.
In another embodiment, if the motion data includes the angular velocity acquired by the IMU and the position information acquired by the position tracking device, after the motion data is acquired, the intelligent device may perform pose calculation on the angular velocity to obtain the head pose, determine the body displacement based on the position information and the historical position information, determine whether the angular velocity is greater than an angular velocity threshold, and determine whether the body displacement is greater than a displacement threshold; if the angular velocity is greater than the angular velocity threshold, it is determined that head motion occurs, and if the body displacement is greater than the displacement threshold, it is determined that body motion occurs.
The historical position information refers to position information acquired by the position tracking device at a first target time that precedes the current time by a preset duration. That is, the position tracking device may send the position information collected at different moments to the smart device, where it is stored. When the smart device acquires the position information at the current moment, the displacement of the current moment relative to the first target moment can be determined according to this position information and the previously acquired historical position information of the first target moment, thereby obtaining the body displacement.
The angular velocity threshold may be a preset angular velocity at which the head starts to rotate. The displacement threshold may be a preset displacement of the body to begin movement.
If the angular velocity is greater than the angular velocity threshold value, but the body displacement is not greater than the displacement threshold value, it may be determined that the head movement has occurred, but that the body movement has not occurred. If the angular velocity is not greater than the angular velocity threshold, but the body displacement is greater than the displacement threshold, it may be determined that body movement has occurred, but that head movement has not occurred. If the angular velocity is greater than the angular velocity threshold and the body displacement is greater than the displacement threshold, it may be determined that head and body movement occurred. If the angular velocity is not greater than the angular velocity threshold and the body displacement is not greater than the displacement threshold, it may be determined that neither head movement nor body movement has occurred.
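A minimal sketch of the threshold comparison just described might look like the following C++ snippet; the Vec3 type, the function names, and the way body displacement is computed from two position samples are illustrative assumptions.

#include <cmath>

struct Vec3 { double x, y, z; };

// Head motion is declared when the measured angular velocity exceeds the threshold.
bool headMotionOccurred(double angularVelocity, double angularVelocityThreshold) {
    return std::fabs(angularVelocity) > angularVelocityThreshold;
}

// Body motion is declared when the displacement between the current position and the
// historical position (first target time) exceeds the displacement threshold.
bool bodyMotionOccurred(const Vec3& current, const Vec3& historical, double displacementThreshold) {
    const double dx = current.x - historical.x;
    const double dy = current.y - historical.y;
    const double dz = current.z - historical.z;
    const double bodyDisplacement = std::sqrt(dx * dx + dy * dy + dz * dz);
    return bodyDisplacement > displacementThreshold;
}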
Step 203: when it is determined that head movement and/or body movement occurs based on the movement data, a first scaling factor is acquired based on the movement data.
In the related art, whether head movement or body movement occurs or not, the intelligent device loads a non-scaled texture image matched with a virtual model of a virtual scene to be rendered, and renders the virtual scene based on the loaded texture image.
In the embodiment of the present application, when it is determined that head motion and/or body motion occurs, a first scaling factor corresponding to the motion data may be obtained, and the virtual scene is then rendered based on a texture image scaled by the first scaling factor, that is, scene rendering is performed at a lower texture resolution than in a relatively stationary state. In this way, while the user is moving, the rendering quality is reduced, the rendering efficiency is improved, and the delay is thereby reduced. Moreover, since human eyes are relatively insensitive to images when head motion and/or body motion occurs, the image of the viewed object may sweep across the retinal surface; in this case, scene rendering with a low texture resolution does not affect the user's visual experience.
The first scaling factor is a factor for reducing the original texture image and corresponds to the motion speed indicated by the motion data: the greater the motion speed of the user, the greater the first scaling factor, and the smaller the motion speed, the smaller the first scaling factor. Further, scaling the original texture image by the first scaling factor generally refers to scaling the length and the width of the original texture image by the first scaling factor, respectively. In addition, the first scaling factor is typically 2^n, where n is a positive integer. For example, the scaling factor may be 2^1, 2^2, 2^3, 2^4, 2^5, 2^6, or 2^7, etc.
It should be noted that, as shown in fig. 3, the original texture image is typically 128×128 in size; scaled by 2^1 the texture image is 64×64, scaled by 2^2 it is 32×32, scaled by 2^3 it is 16×16, scaled by 2^4 it is 8×8, scaled by 2^5 it is 4×4, scaled by 2^6 it is 2×2, and scaled by 2^7 it is 1×1.
Specifically, this step can be classified into the following three cases according to the detected movement.
First case: when the occurrence of head movement is determined, a scaling factor corresponding to the angular velocity is determined based on a first correspondence relation between the stored angular velocity and the scaling factor, and the scaling factor corresponding to the angular velocity is determined as the first scaling factor.
The angular velocity in the first correspondence is in direct proportion to the scaling factor: the larger the angular velocity, the larger the scaling factor. The intelligent terminal can set different scaling factors for different angular velocities in advance, establish the first correspondence between angular velocity and scaling factor, and store it; when head motion is determined, the scaling factor corresponding to the current angular velocity is directly determined based on the stored first correspondence.
For example, the first correspondence between the angular velocity and the scaling factor may be shown in table 1 below, where the angular velocity in table 1 is gradually increased, and the corresponding scaling factor is also gradually increased.
TABLE 1
Angular velocity    Scaling factor
W1                  λ1
W2                  λ2
W3                  λ3
W4                  λ4
...                 ...
Second case:when the body movement is determined to occur, determining a scaling factor corresponding to the body displacement based on a second corresponding relation between the stored displacement and the scaling factor, wherein the body displacement is determined according to the position information and the historical position information; determining a scaling factor corresponding to the body displacement asThe first scaling factor.
The displacement in the second corresponding relation is in direct proportion to the scaling multiple, and the larger the displacement is, the larger the scaling multiple is. The intelligent terminal can set different scaling factors for different displacements in advance, establish a second corresponding relation between the displacement and the scaling factors, store the second corresponding relation, and directly determine the scaling factors corresponding to the current body displacement based on the stored second corresponding relation when body movement is determined.
Third case:when the head movement and the body movement are determined, a second scaling factor corresponding to the angular velocity is determined based on the first corresponding relation between the stored angular velocity and the scaling factor, a third scaling factor corresponding to the body displacement is determined based on the second corresponding relation between the stored displacement and the scaling factor, the body displacement is determined according to the position information and the historical position information, and then the scaling factor with the largest value in the second scaling factor and the third scaling factor is determined as the first scaling factor.
For example, if the second scaling factor corresponding to the angular velocity is greater than the third scaling factor corresponding to the body displacement, determining the second scaling factor as the first scaling factor; and if the second scaling multiple corresponding to the angular velocity is smaller than the third scaling multiple corresponding to the body displacement, determining the third scaling multiple as the first scaling multiple.
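Assuming the first and second correspondences are stored as ordered tables keyed by angular velocity and displacement respectively, the third case could be sketched as follows in C++; the table representation and the lookup rule (largest key not exceeding the measured value) are assumptions made for illustration.

#include <algorithm>
#include <iterator>
#include <map>

// Find the scaling factor whose table key is the largest one not exceeding the measured value.
int lookUpFactor(const std::map<double, int>& correspondence, double value) {
    auto it = correspondence.upper_bound(value);
    if (it == correspondence.begin()) return 1;   // below the smallest threshold: no scaling
    return std::prev(it)->second;
}

int firstScalingFactor(double angularVelocity, double bodyDisplacement,
                       const std::map<double, int>& angularVelocityToFactor,  // first correspondence
                       const std::map<double, int>& displacementToFactor)     // second correspondence
{
    const int second = lookUpFactor(angularVelocityToFactor, angularVelocity); // second scaling factor
    const int third  = lookUpFactor(displacementToFactor, bodyDisplacement);   // third scaling factor
    return std::max(second, third);                                            // take the larger value
}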
Fourth case: when it is determined that head movement and body movement occur, a target head movement coefficient corresponding to the angular velocity is determined based on a third correspondence between the stored angular velocity and the head movement coefficient, and a target body movement coefficient corresponding to the body displacement is determined based on a fourth correspondence between the stored displacement and the body movement coefficient, and then a first scaling factor is determined based on the target head movement coefficient and the target body movement coefficient.
The body displacement is determined according to the position information and the historical position information, each head movement coefficient in the third corresponding relation is larger than 1, the head movement coefficient in the third corresponding relation is positively correlated with the angular velocity, each body movement coefficient in the fourth corresponding relation is larger than 1, and the body movement coefficient and the displacement in the fourth corresponding relation are positively correlated.
Specifically, the product between the target head motion coefficient and the target body motion coefficient may be determined as the first scaling factor, or the product between the target head motion coefficient, the target body motion coefficient, and a preset constant may be determined as the first scaling factor. The preset constant is a preset parameter for converting the target head motion coefficient and the target body motion coefficient into scaling factors.
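A hedged C++ sketch of the fourth case is given below; the rounding of the product to the nearest power of two and the default value of the preset constant k are illustrative assumptions rather than details specified by the patent.

#include <algorithm>
#include <cmath>

// Combine the target head motion coefficient and the target body motion coefficient
// (both greater than 1) into the first scaling factor, optionally multiplied by a
// preset conversion constant k.
int firstScalingFactorFromCoefficients(double headCoefficient,
                                       double bodyCoefficient,
                                       double k = 1.0)
{
    const double raw = headCoefficient * bodyCoefficient * k;
    // Texture scaling factors are typically powers of two, so round to the nearest one.
    const int exponent = static_cast<int>(std::lround(std::log2(raw)));
    return 1 << std::max(exponent, 0);
}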
Step 204: based on the head pose, a scene model of a first virtual scene to be rendered is determined.
When the user is in different head poses, the user's field of view differs accordingly, and so does the virtual scene to be rendered for the user. For example, assuming that the user is in a virtual grassland, when the user's head pose is head-up, the first virtual scene to be rendered should be a sky scene, and when the user's head pose is head-down, the first virtual scene to be rendered should be a grassland scene. Alternatively, assuming that the user is in a city, when the user's head pose is head-up, the first virtual scene to be rendered should be a building rooftop scene, and when the user's head pose is head-down, the first virtual scene to be rendered should be a road scene.
In the embodiment of the present application, a first virtual scene to be rendered may be determined according to the head pose of the user, and a scene model of the first virtual scene to be rendered may be created. The scene model may be a three-dimensional model.
Step 205: and obtaining a texture image which is matched with the scene model and scaled by the first scaling factor.
After determining a scene model of a first virtual scene to be rendered, a texture image matched with the scene model can be acquired, so that the scene model is rendered based on the acquired texture image, and the textured virtual scene is drawn.
Specifically, texture data may be loaded before rendering, and when rendering is required, a texture image that matches the scene model and is scaled by the first scaling factor is selected from the loaded texture data, so that the scene model is rendered based on the acquired texture image. For example, texture images of different scaling factors may be preloaded, and after the first scaling factor is determined, the texture image corresponding to the first scaling factor may be selected from the loaded texture images for rendering.
In practical applications, the Mipmap technique (a computer graphics image technique) may be used to perform hierarchical processing on the texture data in the rendered scene: when a texture is loaded, not just one texture but a series of textures from large to small is loaded, and the most appropriate texture is then selected through OpenGL (Open Graphics Library) according to a given state. For example, with the Mipmap technique, the original texture image is repeatedly scaled down by a factor of 2 until a texture image of size 1×1 is obtained, yielding a series of texture images as shown in fig. 3, which are then stored.
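As an illustration of how a mipmap chain might be built and a coarser level selected with standard OpenGL calls, consider the C++ sketch below; the mapping from a first scaling factor of 2^n to mip level n, and the use of GL_TEXTURE_BASE_LEVEL to force that level, are assumptions chosen for this example rather than the patent's prescribed implementation.

#include <cmath>
#include <GL/glew.h>   // or any OpenGL loader that provides the GL symbols used here

// Upload the full-resolution texture (e.g. 128x128) and build its mipmap chain down to 1x1.
void uploadTextureWithMipmaps(GLuint texture, GLsizei width, GLsizei height, const void* pixels) {
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D);   // builds the 64x64, 32x32, ..., 1x1 levels
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}

// Force sampling to start at mip level n, i.e. the texture scaled by 2^n.
void selectScaledTexture(GLuint texture, int firstScalingFactor) {
    const int level = static_cast<int>(std::log2(static_cast<double>(firstScalingFactor)));
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, level);
}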
Step 206: rendering the scene model based on the acquired texture image.
That is, the scene model may be texture-drawn according to the acquired texture image, so as to draw a more realistic first virtual scene.
In the embodiment of the application, the characteristic that human eyes do not need a high-resolution view during rapid head rotation or rapid body movement is utilized: the motion state of the user is judged from the acquired motion data, namely the position and pose, and the rendering parameters are adaptively adjusted in real time according to the motion state. This greatly reduces the occupation of rendering resources and the computational overhead, saves operation time, reduces rendering delay, and improves the VR/AR experience.
Further, after the first virtual scene at the current time is rendered based on the head pose and the texture image scaled by the first scaling factor, the intelligent device may predict the motion data at a second target time based on the motion data at the current time to obtain predicted motion data, where the second target time is a time after the current time and separated from the current time by a preset time length. The predicted motion data comprises a predicted angular velocity, or a predicted angular velocity and predicted position information.
Specifically, the motion data at the current time may include the motion speed, acceleration and position information of the body, and the position information at the second target time may be predicted based on the motion data at the current time by the following formula (1) or (2), to obtain predicted position information:
s_0 + v_0 × t = s_t    (1)
where s_0 is the position information at the current moment, v_0 is the motion speed at the current moment, t is the preset duration, and s_t is the predicted position information.
s_0 + v_0 × t + (1/2) × a × t² = s_t    (2)
where s_0 is the position information at the current moment, v_0 is the motion speed at the current moment, a is the acceleration at the current moment, t is the preset duration, and s_t is the predicted position information.
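For clarity, formulas (1) and (2) can be written as two small C++ helpers; scalar positions are used for brevity and the function names are illustrative.

// Formula (1): constant-velocity prediction over the preset duration t.
double predictPositionConstantVelocity(double s0, double v0, double t) {
    return s0 + v0 * t;
}

// Formula (2): prediction that also accounts for the current acceleration a.
double predictPositionWithAcceleration(double s0, double v0, double a, double t) {
    return s0 + v0 * t + 0.5 * a * t * t;
}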
Furthermore, the historical position information can be optimized, for example, the historical position information is smoothed or jitter is removed, and the position information at the current moment is obtained. For example, the latest 3 pieces of historical position information may be averaged as the position information at the current time and used for the position prediction. The optimization may be varied and is not limited.
Further, after obtaining the predicted motion data, the smart device may further determine, based on the predicted motion data, a predicted head pose at the second target time, and whether a head motion and/or a body motion occurs; when it is determined that head movement and/or body movement occurs based on the predicted movement data, a fourth scaling factor is acquired based on the predicted movement data, and then, at a second target time, a second virtual scene at the second target time is rendered based on the predicted head pose and the texture image of the fourth scaling factor.
The implementation manner of rendering the second virtual scene at the second target moment based on the predicted head pose and the texture image with the fourth scaling factor is the same as the manner of rendering the first virtual scene based on the head pose and the texture image with the first scaling factor, and the embodiments of the present application are not described herein again.
In the embodiments of the present application, the motion data collected at the current moment can be acquired, the head pose is determined based on the motion data, and it is determined whether head motion and/or body motion occurs; when it is determined that head motion and/or body motion occurs, a first scaling factor is obtained based on the motion data, and the first virtual scene to be rendered is rendered based on the head pose and a texture image scaled by the first scaling factor. That is, when head motion or body motion is detected, the virtual scene may be rendered based on the scaled texture image, i.e., scene rendering is performed with a low-resolution texture. Since human eyes do not require a high-resolution image while head motion or body motion occurs, rendering with the low-resolution texture image can effectively reduce the amount of computation and improve the rendering efficiency. Therefore, when the user is in a stationary state, scene rendering is performed with a high-resolution texture; when the user is in a motion state, scene rendering is performed with a low-resolution texture; and because different texture scaling levels are set for different motion speeds, scene rendering can be performed with the texture scaling level corresponding to the user's motion speed.
In addition, when a user looks directly at the outside world with both eyes and the line of sight shifts rapidly, the image of the viewed object sweeps across the retinal surface and the image of the outside world is effectively blurred. Based on this, if, during rapid rotation of the user's head or rapid movement of the body, the resolution of scene rendering is the same as when no head rotation or body movement occurs, the user's eyes may feel uncomfortable, and processing the excessive information may cause visual fatigue or even dizziness. In the embodiment of the application, when head rotation and/or body movement is detected, scene rendering can be performed with a low-resolution texture, so that the definition of the rendered virtual scene is lower than that of a virtual scene rendered when no head rotation or body movement occurs. In this way, human vision can be simulated more realistically, and physiological discomfort of the user such as visual fatigue and dizziness can be effectively relieved.
Next, a description will be given of the virtual scene rendering apparatus provided in the embodiments of the present application.
Fig. 4 is a block diagram of a virtual scene rendering apparatus 400 according to an embodiment of the present application. The apparatus 400 may be integrated into the smart device in the foregoing embodiments. Referring to fig. 4, the apparatus 400 includes:
A first obtaining module 401, configured to obtain motion data collected at a current moment;
a first determination module 402 for determining a head pose, and whether a head motion and/or a body motion occurs, based on the motion data;
a second acquisition module 403 for acquiring a first scaling factor based on the motion data when it is determined that a head motion and/or a body motion occurs based on the motion data;
the first rendering module 404 is configured to render a first virtual scene to be rendered based on the head pose and the texture image of the first scaling factor.
Optionally, the motion data includes angular velocity acquired by an IMU located at the head and position information acquired by a position tracking device;
the first determining module 402 is specifically configured to:
performing attitude calculation on the angular velocity to obtain the head pose;
determining body displacement according to the position information and historical position information, wherein the historical position information refers to position information acquired by the position tracking device at a first target time that precedes the current time by a preset duration;
judging whether the angular velocity is greater than an angular velocity threshold value, and judging whether the body displacement is greater than a displacement threshold value;
If the angular velocity is greater than the angular velocity threshold, the head movement is determined to occur, and if the body displacement is greater than the displacement threshold, the body movement is determined to occur.
Optionally, the motion data includes angular velocity acquired by an IMU located at the head and position information acquired by a position tracking device;
the second obtaining module 403 is specifically configured to:
determining a second scaling factor corresponding to the angular velocity based on a first corresponding relation between the stored angular velocity and the scaling factor, and determining a third scaling factor corresponding to the body displacement based on a second corresponding relation between the stored displacement and the scaling factor, wherein the body displacement is determined according to the position information and the historical position information;
and determining whichever of the second scaling factor and the third scaling factor has the larger value as the first scaling factor.
Optionally, the motion data includes angular velocity acquired by an IMU located at the head and position information acquired by a position tracking device;
the second obtaining module 403 is specifically configured to:
determining a target head motion coefficient corresponding to the angular velocity based on a third correspondence between the stored angular velocity and the head motion coefficient, and determining a target body motion coefficient corresponding to a body displacement based on a fourth correspondence between the stored displacement and the body motion coefficient, the body displacement being determined from the position information and the historical position information;
Wherein, each head motion coefficient in the third corresponding relation is greater than 1, and the head motion coefficient in the third corresponding relation is positively correlated with the angular velocity, each body motion coefficient in the fourth corresponding relation is greater than 1, and the body motion coefficient and the displacement in the fourth corresponding relation are positively correlated;
the first scaling factor is determined based on the target head motion coefficient and the target body motion coefficient.
Optionally, the first rendering module 404 is specifically configured to:
determining a scene model of the first virtual scene to be rendered based on the head pose;
obtaining a texture image which is matched with the scene model and scaled by the first scaling factor;
rendering the scene model based on the acquired texture image.
Optionally, the apparatus further comprises:
the prediction module is used for predicting motion data at the second target moment based on the motion data to obtain predicted motion data, where the second target moment is after the current moment and is separated from the current moment by a preset time length;
a second determination module for determining, based on the predicted motion data, a predicted head pose at the second target instant, and whether a head motion and/or a body motion occurs;
A third acquisition module for acquiring a fourth scaling factor based on the predicted motion data when it is determined that head motion and/or body motion occurs based on the predicted motion data;
and the second rendering module is used for rendering the second virtual scene to be rendered based on the predicted head pose and a texture image scaled by the fourth scaling factor at the second target moment.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
In the embodiments of the present application, the motion data collected at the current moment can be acquired, the head pose is determined based on the motion data, and it is determined whether head motion and/or body motion occurs; when it is determined that head motion and/or body motion occurs, a first scaling factor is obtained based on the motion data, and the first virtual scene to be rendered is rendered based on the head pose and a texture image scaled by the first scaling factor. That is, when head motion or body motion is detected, the virtual scene may be rendered based on the scaled texture image, i.e., rendered with a low-resolution texture. Since human eyes do not require a high-resolution image while head motion or body motion occurs, rendering with the low-resolution texture image can effectively reduce the amount of computation and improve the rendering efficiency.
It should be noted that: in the virtual scene rendering apparatus provided in the above embodiments, the division into the above functional modules is used only for illustration; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the virtual scene rendering apparatus provided in the foregoing embodiments and the virtual scene rendering method belong to the same concept; the detailed implementation process of the apparatus is described in the method embodiments and is not repeated here.
Fig. 5 is a block diagram of a smart device 500 according to an embodiment of the present application. The smart device 500 may be: notebook computers, desktop computers, smart phones or tablet computers, etc. The smart device 500 may also be referred to as a user device, portable terminal, laptop terminal, desktop terminal, etc.
In general, the smart device 500 includes: a processor 501 and a memory 502.
Processor 501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 501 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 501 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content that needs to be displayed on the display screen. In some embodiments, the processor 501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 502 is used to store at least one instruction for execution by processor 501 to implement the virtual scene rendering methods provided by the method embodiments herein.
In some embodiments, the smart device 500 may further optionally include: a peripheral interface 503 and at least one peripheral. The processor 501, memory 502, and peripheral interface 503 may be connected by buses or signal lines. The individual peripheral devices may be connected to the peripheral device interface 503 by buses, signal lines or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 504, touch display 505, camera 506, audio circuitry 507, positioning component 508, and power supply 509.
Peripheral interface 503 may be used to connect at least one Input/Output (I/O) related peripheral to processor 501 and memory 502. In some embodiments, processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 501, memory 502, and peripheral interface 503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 504 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 504 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 504 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuitry 504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity ) networks. In some embodiments, the radio frequency circuitry 504 may also include NFC (Near Field Communication ) related circuitry, which is not limited in this application.
The display 505 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 505 is a touch display, it can also collect touch signals at or above its surface; such a touch signal may be input to the processor 501 as a control signal for processing. In that case, the display 505 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 505, disposed on the front panel of the smart device 500; in other embodiments, there may be at least two displays 505, disposed on different surfaces of the smart device 500 or in a folded design; in still other embodiments, the display 505 may be a flexible display disposed on a curved or folded surface of the smart device 500. The display 505 may even be arranged in a non-rectangular, irregular pattern, i.e., a shaped screen. The display 505 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 506 is used to capture images or video. Optionally, the camera assembly 506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera on its rear surface. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, virtual reality (VR) shooting, or other fused shooting functions. In some embodiments, the camera assembly 506 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash with a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuitry 507 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 501 for processing or to the radio frequency circuit 504 for voice communication. For stereo acquisition or noise reduction, there may be multiple microphones disposed at different locations of the smart device 500; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves, and may be a conventional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for ranging and other purposes. In some embodiments, the audio circuitry 507 may also include a headphone jack.
The positioning component 508 is used to locate the current geographic location of the smart device 500 to enable navigation or LBS (Location Based Services). The positioning component 508 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 509 powers the various components in the smart device 500 and may use alternating current, direct current, disposable batteries, or rechargeable batteries. When the power supply 509 includes a rechargeable battery, the battery may be a wired rechargeable battery charged through a wired line or a wireless rechargeable battery charged through a wireless coil, and may also support fast-charging technology.
In some embodiments, the smart device 500 also includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: an acceleration sensor 511, a gyro sensor 512, a pressure sensor 513, a fingerprint sensor 514, an optical sensor 515, and a proximity sensor 516.
The acceleration sensor 511 may detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the smart device 500; for example, it may detect the components of gravitational acceleration on the three axes. The processor 501 may control the touch display 505 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 511. The acceleration sensor 511 may also be used to collect game or user motion data.
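As an illustration of this gravity-based orientation logic, the following minimal Python sketch picks a landscape or portrait layout from the gravity components reported on the device's x and y axes; the function name, axis convention, and example values are assumptions rather than part of the embodiment:

```python
def choose_orientation(gravity_x, gravity_y):
    """Pick a UI orientation from the gravity components (m/s^2) measured on
    the device's x and y axes: gravity mostly along y -> portrait,
    mostly along x -> landscape."""
    return "portrait" if abs(gravity_y) >= abs(gravity_x) else "landscape"

# Example: device held upright, gravity almost entirely on the y axis
print(choose_orientation(0.4, 9.7))  # -> "portrait"
```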
The gyro sensor 512 may detect the body direction and rotation angle of the smart device 500, and may cooperate with the acceleration sensor 511 to collect the user's 3D actions on the smart device 500. Based on the data collected by the gyro sensor 512, the processor 501 may implement motion sensing (such as changing the UI in response to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
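The attitude calculation that turns such angular velocity samples into an orientation can be sketched as a simple quaternion integration. The Python fragment below is illustrative only, under assumed conventions (quaternion stored as (w, x, y, z), one fixed small time step per IMU sample); it is not the filter actually used by the embodiment:

```python
import numpy as np

def integrate_gyro(q, omega_rad_s, dt):
    """Integrate one 3-axis angular velocity sample (rad/s) into a unit
    quaternion q = (w, x, y, z) over a small time step dt (seconds)."""
    w, x, y, z = q
    wx, wy, wz = omega_rad_s
    # Quaternion derivative: dq/dt = 0.5 * q (x) (0, wx, wy, wz)
    dq = 0.5 * np.array([
        -x * wx - y * wy - z * wz,
         w * wx + y * wz - z * wy,
         w * wy - x * wz + z * wx,
         w * wz + x * wy - y * wx,
    ])
    q = np.asarray(q, dtype=float) + dq * dt
    return q / np.linalg.norm(q)  # renormalize to stay a unit quaternion

# Example: a 10 ms IMU sample of a 90 deg/s yaw rotation
q = integrate_gyro((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, np.radians(90.0)), 0.01)
```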
The pressure sensor 513 may be disposed on a side frame of the smart device 500 and/or beneath the touch display 505. When disposed on a side frame, it can detect the user's grip on the smart device 500, and the processor 501 performs left/right-hand recognition or shortcut operations according to the collected grip signal. When disposed beneath the touch display 505, the processor 501 controls the operability controls on the UI according to the pressure the user applies to the touch display 505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 514 collects the user's fingerprint, and either the processor 501 or the fingerprint sensor 514 itself identifies the user from the collected fingerprint. When the user's identity is recognized as trusted, the processor 501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 514 may be disposed on the front, back, or side of the smart device 500; when a physical key or vendor logo is provided on the smart device 500, the fingerprint sensor 514 may be integrated with it.
The optical sensor 515 collects the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the touch display 505 based on the ambient light intensity collected by the optical sensor 515: when the ambient light intensity is high, the display brightness is turned up; when it is low, the brightness is turned down. In another embodiment, the processor 501 may also dynamically adjust the shooting parameters of the camera assembly 506 based on the collected ambient light intensity.
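A minimal sketch of such a light-driven brightness policy is given below; the linear mapping and all numeric values are assumed for illustration only:

```python
def display_brightness(ambient_lux, min_level=0.2, max_level=1.0, full_lux=1000.0):
    """Map measured ambient light (lux) to a display brightness level in
    [min_level, max_level]: brighter surroundings give a brighter screen."""
    ratio = max(0.0, min(ambient_lux / full_lux, 1.0))
    return min_level + ratio * (max_level - min_level)

print(display_brightness(50.0))    # dim room -> low brightness
print(display_brightness(1200.0))  # outdoors -> full brightness
```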
The proximity sensor 516, also referred to as a distance sensor, is typically disposed on the front panel of the smart device 500 and collects the distance between the user and the front of the device. In one embodiment, when the proximity sensor 516 detects that this distance is gradually decreasing, the processor 501 controls the touch display 505 to switch from the screen-on state to the screen-off state; when it detects that the distance is gradually increasing, the processor 501 controls the touch display 505 to switch from the screen-off state back to the screen-on state.
That is, the embodiments of the present application provide not only a virtual scene rendering apparatus that may be applied to the above smart device 500, including a processor and a memory for storing instructions executable by the processor, wherein the processor is configured to perform the virtual scene rendering method of the embodiments shown in fig. 1 and fig. 2, but also a computer-readable storage medium storing a computer program that, when executed by a processor, implements the virtual scene rendering method of the embodiments shown in fig. 1 and fig. 2.
Those skilled in the art will understand that all or part of the steps for implementing the above embodiments may be completed by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (7)

1. A method of rendering a virtual scene, the method comprising:
acquiring motion data acquired at the current moment, wherein the motion data comprises an angular velocity acquired through an Inertial Measurement Unit (IMU) positioned at the head and position information acquired through a position tracking device;
determining, based on the motion data, a head pose and whether head motion and/or body motion occurs;
when it is determined, based on the motion data, that head motion and/or body motion occurs, obtaining a first scaling factor based on the motion data;
rendering a first virtual scene to be rendered based on the head pose and the texture image of the first scaling factor;
wherein the obtaining a first scaling factor based on the motion data comprises:
determining a second scaling factor corresponding to the angular velocity based on a stored first correspondence between angular velocity and scaling factor, and determining a third scaling factor corresponding to a body displacement based on a stored second correspondence between displacement and scaling factor, wherein the body displacement is determined according to the position information and historical position information; and determining the larger of the second scaling factor and the third scaling factor as the first scaling factor;
or,
determining a target head motion coefficient corresponding to the angular velocity based on a stored third correspondence between angular velocity and head motion coefficient, and determining a target body motion coefficient corresponding to a body displacement based on a stored fourth correspondence between displacement and body motion coefficient, wherein the body displacement is determined according to the position information and historical position information, each head motion coefficient in the third correspondence is greater than 1 and positively correlated with angular velocity, and each body motion coefficient in the fourth correspondence is greater than 1 and positively correlated with displacement; and determining the first scaling factor based on the target head motion coefficient and the target body motion coefficient.
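For illustration, the scaling-factor selection of claim 1 can be sketched in Python as follows; the correspondence tables, their boundary values, and the use of a product to combine the two motion coefficients in the second alternative are all assumptions made for the sketch, not values fixed by the disclosure:

```python
# Hypothetical correspondence tables: (lower bound of a motion range, factor).
ANGULAR_VELOCITY_TO_SCALE = [(0.5, 1.5), (1.0, 2.0), (2.0, 4.0)]    # rad/s  -> scaling factor
DISPLACEMENT_TO_SCALE     = [(0.05, 1.5), (0.10, 2.0), (0.20, 4.0)]  # metres -> scaling factor

def lookup(table, value, default=1.0):
    """Return the factor of the highest lower bound that 'value' meets or
    exceeds; 'default' is returned when the value is below every bound."""
    result = default
    for lower_bound, factor in table:
        if value >= lower_bound:
            result = factor
    return result

def first_scaling_factor_max(angular_velocity, body_displacement):
    """First alternative: keep the larger of the two table lookups."""
    second = lookup(ANGULAR_VELOCITY_TO_SCALE, angular_velocity)
    third = lookup(DISPLACEMENT_TO_SCALE, body_displacement)
    return max(second, third)

def first_scaling_factor_combined(angular_velocity, body_displacement):
    """Second alternative: combine a head motion coefficient and a body motion
    coefficient (each > 1, growing with the motion magnitude). The product is
    an assumed combination rule used only for this sketch."""
    head_coeff = lookup(ANGULAR_VELOCITY_TO_SCALE, angular_velocity)
    body_coeff = lookup(DISPLACEMENT_TO_SCALE, body_displacement)
    return head_coeff * body_coeff

print(first_scaling_factor_max(1.2, 0.03))       # fast head turn, body nearly still -> 2.0
print(first_scaling_factor_combined(1.2, 0.12))  # both moving -> 2.0 * 2.0 = 4.0
```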
2. The method of claim 1, wherein the determining, based on the motion data, a head pose and whether head motion and/or body motion occurs comprises:
performing attitude calculation on the angular velocity to obtain the head pose;
determining the body displacement based on the position information and historical position information, wherein the historical position information refers to position information acquired by the position tracking device at a first target time that is before the current time and separated from the current time by a preset duration;
judging whether the angular velocity is greater than an angular velocity threshold, and judging whether the body displacement is greater than a displacement threshold; and
if the angular velocity is greater than the angular velocity threshold, determining that head motion occurs, and if the body displacement is greater than the displacement threshold, determining that body motion occurs.
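A hedged Python sketch of the motion test in claim 2 follows; the threshold values and the use of the angular velocity magnitude are assumptions for illustration:

```python
import math

ANGULAR_VELOCITY_THRESHOLD = 0.3   # rad/s, assumed value
DISPLACEMENT_THRESHOLD = 0.02      # metres, assumed value

def detect_motion(angular_velocity_xyz, position, historical_position):
    """Return (head_motion, body_motion) by comparing the angular velocity
    magnitude and the body displacement against their thresholds."""
    angular_speed = math.sqrt(sum(c * c for c in angular_velocity_xyz))
    displacement = math.sqrt(sum((p - h) ** 2
                                 for p, h in zip(position, historical_position)))
    return angular_speed > ANGULAR_VELOCITY_THRESHOLD, displacement > DISPLACEMENT_THRESHOLD

head_moving, body_moving = detect_motion((0.0, 0.5, 0.1),
                                         (1.00, 0.0, 0.2), (0.99, 0.0, 0.2))
print(head_moving, body_moving)  # -> True False
```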
3. The method of claim 1, wherein the rendering a first virtual scene to be rendered based on the head pose and the texture image of the first scaling factor comprises:
determining a scene model of the first virtual scene to be rendered based on the head pose;
obtaining a texture image that matches the scene model and is scaled according to the first scaling factor; and
rendering the scene model based on the obtained texture image.
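To illustrate claim 3, the sketch below shrinks a texture by the first scaling factor using simple block averaging before it would be applied to the scene model; a real implementation might instead select a pre-built mipmap level, and the array shapes and the renderer hand-off are assumptions:

```python
import numpy as np

def downscale_texture(texture, scaling_factor):
    """Shrink an (H, W, C) texture by an integer scaling factor using block
    averaging, so a factor of 2 keeps a quarter of the original texels."""
    f = max(1, int(scaling_factor))
    h, w, c = texture.shape
    h, w = (h // f) * f, (w // f) * f                 # crop to a multiple of f
    blocks = texture[:h, :w].reshape(h // f, f, w // f, f, c)
    return blocks.mean(axis=(1, 3)).astype(texture.dtype)

texture = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
low_res = downscale_texture(texture, 2)
print(low_res.shape)  # -> (256, 256, 3)
# The scene model would then be rendered with 'low_res' instead of 'texture'.
```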
4. The method of claim 1, wherein, after the rendering a first virtual scene to be rendered based on the head pose and the texture image of the first scaling factor, the method further comprises:
predicting motion data at a second target moment based on the motion data to obtain predicted motion data, wherein the second target moment is after the current moment and separated from the current moment by a preset duration;
determining, based on the predicted motion data, a predicted head pose at the second target moment and whether head motion and/or body motion occurs;
when it is determined, based on the predicted motion data, that head motion and/or body motion occurs, obtaining a fourth scaling factor based on the predicted motion data; and
at the second target moment, rendering a second virtual scene to be rendered based on the predicted head pose and the texture image of the fourth scaling factor.
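Claim 4's prediction step can be sketched as a constant-rate extrapolation over the preset duration; this is a deliberately crude assumption, since the disclosure does not fix a particular predictor:

```python
def predict_motion_data(angular_velocity, position, previous_position, sample_dt, preset_dt):
    """Extrapolate motion data to the second target moment: keep the angular
    velocity constant and extend the recent linear velocity over preset_dt."""
    velocity = [(p - q) / sample_dt for p, q in zip(position, previous_position)]
    predicted_position = [p + v * preset_dt for p, v in zip(position, velocity)]
    return angular_velocity, predicted_position

omega, pos = predict_motion_data((0.0, 0.4, 0.0),
                                 (1.00, 0.0, 0.2), (0.98, 0.0, 0.2),
                                 sample_dt=0.01, preset_dt=0.02)
print(pos)  # approximately [1.04, 0.0, 0.2]
```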
5. A virtual scene rendering apparatus, the apparatus comprising:
a first acquisition module for acquiring motion data acquired at the current moment, wherein the motion data comprises an angular velocity acquired through an Inertial Measurement Unit (IMU) positioned at the head and position information acquired through a position tracking device;
a first determination module for determining, based on the motion data, a head pose and whether head motion and/or body motion occurs;
a second acquisition module for obtaining a first scaling factor based on the motion data when it is determined, based on the motion data, that head motion and/or body motion occurs; and
a first rendering module for rendering a first virtual scene to be rendered based on the head pose and the texture image of the first scaling factor;
wherein the second acquisition module is specifically configured to:
determine a second scaling factor corresponding to the angular velocity based on a stored first correspondence between angular velocity and scaling factor, and determine a third scaling factor corresponding to a body displacement based on a stored second correspondence between displacement and scaling factor, wherein the body displacement is determined according to the position information and historical position information, and determine the larger of the second scaling factor and the third scaling factor as the first scaling factor;
or,
determine a target head motion coefficient corresponding to the angular velocity based on a stored third correspondence between angular velocity and head motion coefficient, and determine a target body motion coefficient corresponding to a body displacement based on a stored fourth correspondence between displacement and body motion coefficient, wherein the body displacement is determined according to the position information and historical position information, each head motion coefficient in the third correspondence is greater than 1 and positively correlated with angular velocity, and each body motion coefficient in the fourth correspondence is greater than 1 and positively correlated with displacement, and determine the first scaling factor based on the target head motion coefficient and the target body motion coefficient.
6. The apparatus of claim 5, wherein the first determination module is specifically configured to:
perform attitude calculation on the angular velocity to obtain the head pose;
determine the body displacement according to the position information and historical position information, wherein the historical position information refers to position information acquired by the position tracking device at a first target time that is before the current time and separated from the current time by a preset duration;
judge whether the angular velocity is greater than an angular velocity threshold, and judge whether the body displacement is greater than a displacement threshold; and
if the angular velocity is greater than the angular velocity threshold, determine that head motion occurs, and if the body displacement is greater than the displacement threshold, determine that body motion occurs.
7. An intelligent device, the intelligent device comprising:
a processor comprising a graphics processing unit (GPU); and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the steps of the method of any of claims 1-4.
CN201811639195.9A 2018-12-29 2018-12-29 Virtual scene rendering method and device and intelligent device Active CN109712224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811639195.9A CN109712224B (en) 2018-12-29 2018-12-29 Virtual scene rendering method and device and intelligent device

Publications (2)

Publication Number Publication Date
CN109712224A CN109712224A (en) 2019-05-03
CN109712224B true CN109712224B (en) 2023-05-16

Family

ID=66260163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811639195.9A Active CN109712224B (en) 2018-12-29 2018-12-29 Virtual scene rendering method and device and intelligent device

Country Status (1)

Country Link
CN (1) CN109712224B (en)




Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218
Applicant after: Hisense Video Technology Co.,Ltd.
Applicant after: BEIHANG University
Address before: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218
Applicant before: QINGDAO HISENSE ELECTRONICS Co.,Ltd.
Applicant before: BEIHANG University
GR01 Patent grant