CN109712224A - Virtual scene rendering method and apparatus, and smart device - Google Patents
Virtual scene rendering method and apparatus, and smart device
- Publication number: CN109712224A (application CN201811639195.9A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
This application discloses a virtual scene rendering method and apparatus, and a smart device, belonging to the field of computer graphics. The application obtains motion data collected at the current moment and, based on that motion data, determines the head pose and whether head motion and/or body motion is occurring. When head motion and/or body motion is determined from the motion data, a first scaling factor is obtained based on the motion data, and the first virtual scene to be rendered is rendered based on the head pose and a texture image at the first scaling factor. In other words, when head motion or body motion is detected, rendering can use a lower-resolution texture, which effectively reduces the amount of computation and improves rendering efficiency.
Description
Technical field
This application relates to the field of computer graphics, and in particular to a virtual scene rendering method and apparatus, and a smart device.
Background
The development of computer graphics has greatly advanced industries such as games, film, animation, computer-aided design and manufacturing, and virtual reality. Within computer graphics, simulating the real world and rendering virtual scenes have always been research hotspots. Specifically, rendering a virtual scene means building the scene model of the virtual scene and then drawing textures on that model, to obtain a three-dimensional virtual scene with a sense of presence.
In the related art, when rendering a virtual scene, a smart device obtains inertial data collected by an IMU located on the head, performs attitude calculation on the inertial data to obtain the head pose, determines the scene model of the virtual scene to be rendered according to the head pose, loads the texture image matched to the scene model, and renders the scene model according to the loaded texture image.
When a virtual scene is rendered this way and the texture of the scene model is complex, the rendering workload becomes extremely large. This not only makes the computing power consumption of the smart device's GPU (Graphics Processing Unit) excessive, it also lowers rendering efficiency.
Summary of the invention
Embodiments of the present application provide a virtual scene rendering method and apparatus, and a smart device, which can be used to address the high GPU power consumption and low rendering efficiency of virtual scene rendering. The technical solution is as follows:
In a first aspect, a virtual scene rendering method is provided. The method comprises:
obtaining motion data collected at the current moment;
based on the motion data, determining a head pose and whether head motion and/or body motion is occurring;
when head motion and/or body motion is determined from the motion data, obtaining a first scaling factor based on the motion data;
rendering a first virtual scene to be rendered based on the head pose and a texture image at the first scaling factor.
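Concretely, these steps can be read as a per-frame loop. A minimal runnable sketch follows; the 1-D pose, function names, threshold, and scale values are all illustrative assumptions, not values taken from the patent:

```python
def solve_pose(prev_pose, angular_velocity, dt):
    # Pose part of step 2: integrate angular velocity into a 1-D head angle.
    return prev_pose + angular_velocity * dt

def first_scaling_factor(angular_velocity, threshold=0.5):
    # Steps 2-3: motion is detected when angular velocity exceeds a
    # threshold; motion maps to a larger texture downscale factor.
    return 2 if abs(angular_velocity) > threshold else 1

def render_frame(prev_pose, angular_velocity, dt=0.011):  # ~90 Hz frame
    pose = solve_pose(prev_pose, angular_velocity, dt)
    scale = first_scaling_factor(angular_velocity)
    # Step 4 would draw the scene model with a texture shrunk by `scale`.
    return pose, scale
```

A fast head turn (angular velocity 2.0 here) yields a downscaled texture, while near-stillness keeps full resolution.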
Optionally, the motion data includes an angular velocity collected by an inertial measurement unit (IMU) located on the head and position information collected by a position tracking device.
Determining the head pose, and whether head motion and/or body motion is occurring, based on the motion data comprises:
performing attitude calculation on the angular velocity to obtain the head pose;
determining a body displacement based on the position information and historical position information, where the historical position information is the position information collected by the position tracking device at a first target moment that precedes the current moment by a preset duration;
judging whether the angular velocity exceeds an angular velocity threshold, and whether the body displacement exceeds a displacement threshold;
if the angular velocity exceeds the angular velocity threshold, determining that head motion is occurring; if the body displacement exceeds the displacement threshold, determining that body motion is occurring.
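This optional scheme can be sketched as a small detector. The threshold values, units, and the use of Euclidean distance for the displacement are illustrative assumptions; the patent leaves them unspecified:

```python
def detect_motion(angular_velocity, position, historical_position,
                  angular_threshold=0.5, displacement_threshold=0.05):
    """Return (head_moving, body_moving) for one sample.

    historical_position is the position recorded at the first target
    moment, a preset duration before the current moment.
    """
    body_displacement = sum(
        (c - h) ** 2 for c, h in zip(position, historical_position)
    ) ** 0.5
    head_moving = angular_velocity > angular_threshold
    body_moving = body_displacement > displacement_threshold
    return head_moving, body_moving
```

Note that the two judgments are independent, so all four head/body combinations are possible.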
Optionally, the motion data includes the angular velocity collected by the IMU located on the head and the position information collected by the position tracking device.
Obtaining the first scaling factor based on the motion data comprises:
determining a second scaling factor corresponding to the angular velocity based on a stored first correspondence between angular velocity and scaling factor, and determining a third scaling factor corresponding to the body displacement based on a stored second correspondence between displacement and scaling factor, the body displacement being determined from the position information and the historical position information;
taking the numerically larger of the second scaling factor and the third scaling factor as the first scaling factor.
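The two stored correspondences can be sketched as step tables, with the larger result taken as the first scaling factor. The bucket boundaries and factor values below are illustrative assumptions:

```python
# Stored correspondences as (upper bound, scaling factor) steps.
ANGULAR_VELOCITY_TO_SCALE = [(0.5, 1), (1.0, 2), (2.0, 4), (float("inf"), 8)]
DISPLACEMENT_TO_SCALE = [(0.05, 1), (0.2, 2), (0.5, 4), (float("inf"), 8)]

def lookup(table, value):
    # Return the scaling factor of the first bucket the value falls into.
    for upper_bound, scale in table:
        if value <= upper_bound:
            return scale

def first_scaling_factor(angular_velocity, body_displacement):
    second = lookup(ANGULAR_VELOCITY_TO_SCALE, angular_velocity)  # second scaling factor
    third = lookup(DISPLACEMENT_TO_SCALE, body_displacement)      # third scaling factor
    return max(second, third)  # the numerically larger factor wins
```

Taking the maximum means the faster of the two motions (head turn or body translation) decides how aggressively the texture is downscaled.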
Optionally, the motion data includes the angular velocity collected by the IMU located on the head and the position information collected by the position tracking device.
Obtaining the first scaling factor based on the motion data comprises:
determining a target head motion coefficient corresponding to the angular velocity based on a stored third correspondence between angular velocity and head motion coefficient, and determining a target body motion coefficient corresponding to the body displacement based on a stored fourth correspondence between displacement and body motion coefficient, the body displacement being determined from the position information and the historical position information;
wherein every head motion coefficient in the third correspondence is greater than 1 and positively correlated with angular velocity, and every body motion coefficient in the fourth correspondence is greater than 1 and positively correlated with displacement;
determining the first scaling factor based on the target head motion coefficient and the target body motion coefficient.
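The coefficient variant can be sketched as below. The linear forms keep both coefficients above 1 and increasing for the positive inputs that arise once motion is detected, as the scheme requires; combining the two coefficients by multiplication is an assumption, since the patent does not spell out how they yield the first scaling factor:

```python
def head_motion_coefficient(angular_velocity):
    # Above 1 for any positive angular velocity, and increasing with it
    # (illustrative linear form).
    return 1.0 + 0.5 * angular_velocity

def body_motion_coefficient(displacement):
    # Above 1 for any positive displacement, and increasing with it.
    return 1.0 + 4.0 * displacement

def first_scaling_factor(angular_velocity, displacement):
    # Assumed combination: the product of the two coefficients.
    return head_motion_coefficient(angular_velocity) * body_motion_coefficient(displacement)
```

Unlike the table variant, this produces a continuous scaling factor, so the texture resolution degrades smoothly as motion speeds up.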
Optionally, rendering the first virtual scene to be rendered based on the head pose and the texture image at the first scaling factor comprises:
determining the scene model of the first virtual scene to be rendered based on the head pose;
obtaining the texture image that matches the scene model and has been scaled according to the first scaling factor;
rendering the scene model based on the obtained texture image.
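In terms of texture dimensions, obtaining the scaled texture can be sketched as below; treating the factor as a per-axis divisor is an assumed convention (akin to stepping down mipmap levels when the factor is a power of two), not one fixed by the patent:

```python
def scaled_texture_size(base_size, scaling_factor):
    # Size of the texture matched to the scene model after downscaling;
    # a factor of n shrinks each axis by n (assumed convention).
    width, height = base_size
    return max(1, width // scaling_factor), max(1, height // scaling_factor)
```

A first scaling factor of 4 would thus turn a 2048x2048 texture into 512x512, cutting the texel count by a factor of 16.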
Optionally, after rendering the first virtual scene to be rendered based on the head pose and the texture image at the first scaling factor, the method further comprises:
predicting, based on the motion data, the motion data of a second target moment to obtain predicted motion data, the second target moment following the current moment by a preset duration;
based on the predicted motion data, determining a predicted head pose for the second target moment and whether head motion and/or body motion will occur;
when head motion and/or body motion is determined from the predicted motion data, obtaining a fourth scaling factor based on the predicted motion data;
at the second target moment, rendering a second virtual scene to be rendered based on the predicted head pose and a texture image at the fourth scaling factor.
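The prediction step could use any short-horizon predictor; a linear extrapolation from the last two samples is one simple assumption, since the patent does not mandate a particular prediction algorithm:

```python
def predict_motion(previous, current, sample_dt, horizon):
    """Extrapolate a scalar motion value (e.g. angular velocity) to the
    second target moment, `horizon` seconds after the current moment."""
    rate_of_change = (current - previous) / sample_dt
    return current + rate_of_change * horizon
```

Predicting ahead lets the fourth scaling factor, and hence the scaled texture, be prepared before the second target moment arrives.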
In a second aspect, a virtual scene rendering apparatus is provided. The apparatus comprises:
a first obtaining module, configured to obtain motion data collected at the current moment;
a first determining module, configured to determine, based on the motion data, a head pose and whether head motion and/or body motion is occurring;
a second obtaining module, configured to obtain a first scaling factor based on the motion data when head motion and/or body motion is determined from the motion data;
a first rendering module, configured to render a first virtual scene to be rendered based on the head pose and a texture image at the first scaling factor.
Optionally, the motion data includes the angular velocity collected by the IMU located on the head and the position information collected by the position tracking device.
The first determining module is specifically configured to:
perform attitude calculation on the angular velocity to obtain the head pose;
determine a body displacement from the position information and historical position information, where the historical position information is the position information collected by the position tracking device at a first target moment that precedes the current moment by a preset duration;
judge whether the angular velocity exceeds an angular velocity threshold, and whether the body displacement exceeds a displacement threshold;
if the angular velocity exceeds the angular velocity threshold, determine that head motion is occurring; if the body displacement exceeds the displacement threshold, determine that body motion is occurring.
Optionally, the motion data includes the angular velocity collected by the IMU located on the head and the position information collected by the position tracking device.
The second obtaining module is specifically configured to:
determine a second scaling factor corresponding to the angular velocity based on a stored first correspondence between angular velocity and scaling factor, and determine a third scaling factor corresponding to the body displacement based on a stored second correspondence between displacement and scaling factor, the body displacement being determined from the position information and the historical position information;
take the numerically larger of the second scaling factor and the third scaling factor as the first scaling factor.
Optionally, the motion data includes the angular velocity collected by the IMU located on the head and the position information collected by the position tracking device.
The second obtaining module is specifically configured to:
determine a target head motion coefficient corresponding to the angular velocity based on a stored third correspondence between angular velocity and head motion coefficient, and determine a target body motion coefficient corresponding to the body displacement based on a stored fourth correspondence between displacement and body motion coefficient, the body displacement being determined from the position information and the historical position information;
wherein every head motion coefficient in the third correspondence is greater than 1 and positively correlated with angular velocity, and every body motion coefficient in the fourth correspondence is greater than 1 and positively correlated with displacement;
determine the first scaling factor based on the target head motion coefficient and the target body motion coefficient.
Optionally, the first rendering module is specifically configured to:
determine the scene model of the first virtual scene to be rendered based on the head pose;
obtain the texture image that matches the scene model and has been scaled according to the first scaling factor;
render the scene model based on the obtained texture image.
Optionally, the apparatus further comprises:
a prediction module, configured to predict, based on the motion data, the motion data of a second target moment to obtain predicted motion data, the second target moment following the current moment by a preset duration;
a second determining module, configured to determine, based on the predicted motion data, a predicted head pose for the second target moment and whether head motion and/or body motion will occur;
a third obtaining module, configured to obtain a fourth scaling factor based on the predicted motion data when head motion and/or body motion is determined from the predicted motion data;
a second rendering module, configured to render, at the second target moment, a second virtual scene to be rendered based on the predicted head pose and a texture image at the fourth scaling factor.
In a third aspect, a virtual scene rendering apparatus is provided. The apparatus comprises:
a processor, the processor including a graphics processor (GPU); and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any method of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which instructions are stored; when the instructions are executed by a processor, the steps of any method of the first aspect are implemented.
The technical solutions provided by the embodiments of the present application bring at least the following benefits:
Embodiments of the present application can obtain the motion data collected at the current moment and, based on that motion data, determine the head pose and whether head motion and/or body motion is occurring. When head motion and/or body motion is determined from the motion data, a first scaling factor is obtained based on the motion data, and the first virtual scene at the current moment is then rendered based on the head pose and a texture image at the first scaling factor. That is, in the embodiments of the present application, when head motion or body motion is detected, the virtual scene can be rendered from the scaled texture image, i.e., a lower-resolution texture. Since the human eye does not need a high-resolution image during head motion or body motion, rendering with a lower-resolution texture image effectively reduces the amount of computation and improves rendering efficiency.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed to describe the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a virtual scene rendering method provided by an embodiment of the present application;
Fig. 2 is a flowchart of another virtual scene rendering method provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of texture images at several scaling factors provided by an embodiment of the present application;
Fig. 4 is a structural block diagram of a virtual scene rendering apparatus 400 provided by an embodiment of the present application;
Fig. 5 is a structural block diagram of a smart device 500 provided by an embodiment of the present application.
Detailed description of embodiments
To make the purposes, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the drawings.
Before the embodiments of the present application are explained in detail, the application scenarios they involve are introduced.
Currently, in VR (Virtual Reality) and AR (Augmented Reality) technology, rendering virtual scenes at the high resolutions required for a highly immersive experience places high demands on the processing capability of a smart device's GPU. For the user, low latency, high frame rate, and high image quality during rendering are necessary conditions for a good virtual reality experience. For example, for a VR head-mounted display, low resolution limits the field of view and degrades the user experience; yet raising the resolution of a VR head-mounted display correspondingly requires its GPU to have greater processing capability. At present, even high-end GPUs cannot deliver an optimal VR or AR experience for the user. How to use the GPU's processing capability effectively, so as to provide users with high-quality VR or AR content that better matches human visual perception, is therefore a key issue. The rendering method provided by the embodiments of this application can be applied in the above scenario: on top of a conventional rendering method, it adds a judgment of the motion state, adjusts rendering parameters according to the result, and then renders and displays the scene. This reduces the computation of the smart device's GPU while meeting the user's requirements on image resolution, thereby improving rendering efficiency.
Next, specific implementations of the virtual scene rendering method provided by the embodiments of the present application are introduced.
Fig. 1 is a flowchart of a virtual scene rendering method provided by an embodiment of the present application. The method can be used in a smart device. The smart device may be a VR head-mounted display that integrates both image processing and display functions, and may include an IMU (inertial measurement unit), or an IMU and a position tracking device. Alternatively, the smart device may be a terminal such as a mobile phone, tablet computer, laptop, or desktop computer that connects to a VR or AR head-mounted display, where the connected head-mounted display includes an IMU, or an IMU and a position tracking device. As shown in Fig. 1, the method includes the following steps:
Step 101: Obtain the motion data collected at the current moment.
The motion data collected at the current moment may include an angular velocity collected by the IMU, or the angular velocity collected by the IMU together with position information collected by a position tracking device. The IMU is located on the user's head; the position tracking device is located on the user's body or outside it, for example a laser position tracking device placed outside the user's body.
Step 102: Based on the motion data, determine the head pose and whether head motion and/or body motion is occurring.
The head pose may include attitude data such as the head's center of gravity and head tilt angle; for example, the head pose may be bowed, raised, tilted left, or tilted right.
Step 103: When head motion and/or body motion is determined from the motion data, obtain a first scaling factor based on the motion data.
Step 104: Render the first virtual scene to be rendered based on the head pose and the texture image at the first scaling factor.
In this embodiment of the present application, the motion data collected at the current moment can be obtained; based on that motion data, the head pose is determined, along with whether head motion and/or body motion is occurring. When head motion and/or body motion is determined from the motion data, a first scaling factor is obtained based on the motion data, and the first virtual scene at the current moment is rendered based on the head pose and a texture image at the first scaling factor. That is, when head motion or body motion is detected, the virtual scene can be rendered from the scaled texture image, i.e., the scene is rendered with a lower-resolution texture. Since the human eye does not need a high-resolution image during head motion or body motion, rendering with a lower-resolution texture image effectively reduces the amount of computation and improves rendering efficiency.
Fig. 2 is a flowchart of another virtual scene rendering method provided by an embodiment of the present application. The method can be used in a smart device. The smart device may be a VR head-mounted display that integrates image processing and display functions, and may include an IMU, or an IMU and a position tracking device. Alternatively, the smart device may be a terminal such as a mobile phone, tablet computer, laptop, or desktop computer that connects to a VR or AR head-mounted display, where the connected head-mounted display includes an IMU, or an IMU and a position tracking device. As shown in Fig. 2, the method includes the following steps:
Step 201: Obtain the motion data collected at the current moment.
The motion data includes inertial data collected by the IMU, or the inertial data collected by the IMU together with position information collected by a position tracking device. The IMU is located on the user's head; the position tracking device is located on or outside the user's body and collects the position information of the user's body.
The inertial data collected by the IMU includes at least an angular velocity, and may of course also include an acceleration, a yaw angle, and the like. Integrating the angular velocity yields the device attitude. In addition, the acceleration can be used to correct attitude errors related to the direction of gravity, i.e., to correct angular deviations in the attitude, and the yaw angle can be used to correct the attitude further.
For example, the IMU may include a gyroscope for collecting the angular velocity; further, the IMU may also include an accelerometer for collecting the acceleration and a magnetometer for collecting the yaw angle.
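The gyroscope-integration and gravity-based correction described above is commonly realized as a complementary filter. A single-axis (pitch) sketch follows; the blend gain and axis conventions are illustrative assumptions:

```python
import math

def complementary_filter(pitch, gyro_rate, accel_y, accel_z, dt, alpha=0.98):
    # Integrating the gyroscope's angular velocity tracks fast motion but
    # drifts; the pitch implied by the accelerometer's gravity reading is
    # drift-free but noisy, so the two estimates are blended.
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular velocity
    accel_pitch = math.atan2(accel_y, accel_z)   # pitch implied by gravity
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

The yaw axis is unobservable from gravity, which is why a magnetometer is needed to correct yaw deviation.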
For example, the position tracking device may be a device that performs position tracking by optical means, such as a laser position tracking device, a Lighthouse tracking kit, the Oculus "Constellation" tracking kit, or other tracking kits used with VR displays.
In practice, if the smart device is integrated with the VR device, the smart device can collect the angular velocity through the VR device's IMU. If the smart device does not include the VR device, the smart device can communicate with the VR device and obtain the angular velocity that the VR device's IMU collected at the current moment. Note that because the IMU is located on the user's head, the angular velocity it collects is in fact the rotational velocity of the user's head. Likewise, if the smart device is integrated with the position tracking device, the smart device can collect the position information of the current moment through the position tracking device; if the smart device does not include the position tracking device, it can communicate with the position tracking device and obtain the position information that device collected at the current moment. The position tracking device tracks the user's position, so what it collects is the position information of the user's body.
In this embodiment of the present application, the motion data collected at the current moment can be obtained and used to determine the motion state of the user's head, and the rendering parameters are then adjusted according to that motion state to render the virtual scene for the user. Alternatively, the collected motion data can be used to determine the motion states of both the user's head and the user's body, and the rendering parameters are adjusted according to both motion states to render the virtual scene for the user.
Step 202: Based on the motion data, determine the head pose and whether head motion and/or body motion is occurring.
In this embodiment, after the motion data is obtained, attitude calculation can be performed on it to obtain the user's head pose, while the same motion data is used to determine whether head motion and/or body motion is occurring.
Specifically, attitude calculation can be performed on the inertial data collected by the IMU to obtain the head pose, for example by performing attitude calculation on the angular velocity collected by the IMU. Attitude calculation on the angular velocity includes integrating the angular velocity.
In one embodiment, if the motion data includes the angular velocity collected by the IMU, then after obtaining the angular velocity the smart device can perform attitude calculation on it to obtain the head pose, and judge whether the angular velocity exceeds the angular velocity threshold; if it does, head motion is determined to be occurring.
In another embodiment, if the motion data includes the angular velocity collected by the IMU and the position information collected by the position tracking device, then after obtaining the motion data the smart device can perform attitude calculation on the angular velocity to obtain the head pose, determine the body displacement based on the position information and the historical position information, judge whether the angular velocity exceeds the angular velocity threshold, and judge whether the body displacement exceeds the displacement threshold. If the angular velocity exceeds the angular velocity threshold, head motion is determined to be occurring; if the body displacement exceeds the displacement threshold, body motion is determined to be occurring.
Here, the historical position information is the position information collected by the position tracking device at a first target moment that precedes the current moment by a preset duration. That is, the position tracking device can send the position information collected at different moments to the smart device, which stores it. When the smart device obtains the position information of the current moment, it can determine, from that position information and the previously obtained historical position information of the first target moment, the displacement of the current moment relative to the first target moment, thereby obtaining the body displacement.
Here, the angular speed threshold can be a preset angular speed at which the head starts turning, and the displacement threshold can be a preset displacement at which the body starts moving.
It should be noted that if the angular speed is greater than the angular speed threshold but the body displacement is not greater than the displacement threshold, it can be determined that head movement has occurred but body movement has not. If the angular speed is not greater than the angular speed threshold but the body displacement is greater than the displacement threshold, it can be determined that body movement has occurred but head movement has not. If the angular speed is greater than the angular speed threshold and the body displacement is greater than the displacement threshold, it can be determined that both head movement and body movement have occurred. If the angular speed is not greater than the angular speed threshold and the body displacement is not greater than the displacement threshold, it can be determined that neither head movement nor body movement has occurred.
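As a minimal sketch of this four-way motion test, the following Python fragment compares the angular speed and the body displacement (derived from the current and historical positions) against the two thresholds; the threshold values are illustrative assumptions, not values specified by this application.

```python
# Illustrative thresholds (assumed, not taken from the patent).
ANGULAR_SPEED_THRESHOLD = 0.5   # rad/s, angular speed at which the head "starts turning"
DISPLACEMENT_THRESHOLD = 0.05   # m, displacement at which the body "starts moving"

def classify_motion(angular_speed, position, historical_position):
    """Return (head_moving, body_moving) from the IMU angular speed and
    the tracker positions at the current time and the first target moment."""
    # Body displacement of the current time relative to the first target moment.
    displacement = sum((p - h) ** 2
                       for p, h in zip(position, historical_position)) ** 0.5
    head_moving = angular_speed > ANGULAR_SPEED_THRESHOLD
    body_moving = displacement > DISPLACEMENT_THRESHOLD
    return head_moving, body_moving
```

All four combinations described above fall out of the two independent comparisons.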
Step 203: when it is determined, based on the motion data, that head movement and/or body movement occurs, obtain a first scaling multiple based on the motion data.
It should be noted that in the related art, regardless of whether head movement or body movement occurs, the smart device loads an unscaled texture image matched to the model of the virtual scene to be rendered, and then renders the virtual scene based on the loaded texture image. This rendering method does not consider the motion state of the user; the amount of computation is large and the rendering efficiency is low. In the embodiment of the present application, when it is determined that head movement and/or body movement occurs, a first scaling multiple corresponding to the motion data can be obtained, and the virtual scene is then rendered based on a texture image scaled by the first scaling multiple. That is, scene rendering is performed at a texture resolution lower than that used in the relatively static state. In this way, rendering quality can be reduced and rendering efficiency improved while the user is in motion, thereby reducing delay. Moreover, when head movement and/or body movement occurs, the human eye is relatively insensitive to images: the picture of the viewed object sweeps quickly across the retina. Performing scene rendering at a low texture resolution in this case therefore does not affect the visual experience of the user.
Here, the first scaling multiple refers to the factor by which the original texture image is reduced, and it corresponds to the movement speed indicated by the motion data: the greater the movement speed of the user, the larger the first scaling multiple, and the smaller the movement speed, the smaller the first scaling multiple. Scaling the original texture image by the first scaling multiple usually means scaling both the length and the width of the original texture image by that multiple. Optionally, the first scaling multiple is usually 2^n, where n is a positive integer; for example, it can be 2^1, 2^2, 2^3, 2^4, 2^5, 2^6, or 2^7.
It should be noted that, as shown in Figure 3, the size of the original texture image is usually 128 × 128. The texture image obtained after scaling by 2^1 is 64 × 64; by 2^2, 32 × 32; by 2^3, 16 × 16; by 2^4, 8 × 8; by 2^5, 4 × 4; by 2^6, 2 × 2; and by 2^7, 1 × 1.
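The size chain above can be generated mechanically; this small helper (a sketch, not part of the application) lists the side lengths obtained by repeatedly scaling a square texture by a further factor of 2 until the 1 × 1 level is reached.

```python
def mip_chain(base=128):
    """Side lengths of a base x base texture scaled by 2**1, 2**2, ...
    down to 1x1, e.g. 128 -> [64, 32, 16, 8, 4, 2, 1]."""
    sizes = []
    size = base
    while size > 1:
        size //= 2  # each level halves both the length and the width
        sizes.append(size)
    return sizes
```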
Specifically, depending on the movement detected, this step can be divided into the following situations.
The first situation: when it is determined that head movement occurs, the scaling multiple corresponding to the angular speed is determined based on a stored first correspondence between angular speed and scaling multiple, and that scaling multiple is determined as the first scaling multiple.
In the first correspondence, the scaling multiple grows with the angular speed: the greater the angular speed, the larger the scaling multiple. The smart terminal can set different scaling multiples for different angular speeds in advance, establish the first correspondence between angular speed and scaling multiple, and store it. When it is determined that head movement occurs, the scaling multiple corresponding to the current angular speed can then be determined directly from the stored first correspondence.
For example, the first correspondence between angular speed and scaling multiple can be as shown in Table 1 below, in which the angular speeds increase gradually and the corresponding scaling multiples also increase gradually.
Table 1
Angular speed | Scaling multiple
W1 | λ1
W2 | λ2
W3 | λ3
W4 | λ4
... | ...
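A lookup of this kind can be sketched as follows; the breakpoint values W1–W4 and the multiples λ1–λ4 are illustrative assumptions standing in for the stored first correspondence.

```python
import bisect

# Illustrative first correspondence (assumed values, not from the patent):
# angular-speed breakpoints in rad/s mapped to power-of-two scaling multiples.
ANGULAR_SPEED_BREAKPOINTS = [0.5, 1.0, 2.0, 4.0]   # W1, W2, W3, W4
SCALING_MULTIPLES = [2, 4, 8, 16]                  # λ1, λ2, λ3, λ4

def scaling_for_angular_speed(w):
    """Pick the multiple for the largest breakpoint not exceeding w
    (clamped to the first entry for very small speeds)."""
    i = bisect.bisect_right(ANGULAR_SPEED_BREAKPOINTS, w) - 1
    return SCALING_MULTIPLES[max(i, 0)]
```

Because the table is monotone, a faster head turn always maps to a larger multiple, matching Table 1.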
The second situation: when it is determined that body movement occurs, the scaling multiple corresponding to the body displacement is determined based on a stored second correspondence between displacement and scaling multiple, where the body displacement is determined from the location information and the historical location information; that scaling multiple is determined as the first scaling multiple.
In the second correspondence, the scaling multiple grows with the displacement: the greater the displacement, the larger the scaling multiple. The smart terminal can set different scaling multiples for different displacements in advance, establish the second correspondence between displacement and scaling multiple, and store it. When it is determined that body movement occurs, the scaling multiple corresponding to the current body displacement can then be determined directly from the stored second correspondence.
The third situation: when it is determined that both head movement and body movement occur, a second scaling multiple corresponding to the angular speed is determined based on the stored first correspondence between angular speed and scaling multiple, and a third scaling multiple corresponding to the body displacement is determined based on the stored second correspondence between displacement and scaling multiple, where the body displacement is determined from the location information and the historical location information. The numerically larger of the second scaling multiple and the third scaling multiple is then determined as the first scaling multiple.
For example, if the second scaling multiple corresponding to the angular speed is greater than the third scaling multiple corresponding to the body displacement, the second scaling multiple is determined as the first scaling multiple; if it is less than the third scaling multiple, the third scaling multiple is determined as the first scaling multiple.
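The second and third situations can be sketched together: a displacement lookup mirroring the angular-speed one, followed by taking the numerically larger multiple. The breakpoints and multiples below are assumed values standing in for the stored second correspondence.

```python
import bisect

# Illustrative second correspondence (assumed values, not from the patent):
# body-displacement breakpoints in metres mapped to scaling multiples.
DISPLACEMENT_BREAKPOINTS = [0.05, 0.2, 0.5]
DISPLACEMENT_MULTIPLES = [2, 4, 8]

def scaling_for_displacement(d):
    """Pick the multiple for the largest displacement breakpoint not exceeding d."""
    i = bisect.bisect_right(DISPLACEMENT_BREAKPOINTS, d) - 1
    return DISPLACEMENT_MULTIPLES[max(i, 0)]

def first_scaling_multiple(second_multiple, third_multiple):
    """Third situation: with both head and body moving, keep the numerically
    larger of the angular-speed-derived and displacement-derived multiples."""
    return max(second_multiple, third_multiple)
```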
The fourth situation: when it is determined that both head movement and body movement occur, a target head movement coefficient corresponding to the angular speed is determined based on a stored third correspondence between angular speed and head movement coefficient, and a target body movement coefficient corresponding to the body displacement is determined based on a stored fourth correspondence between displacement and body movement coefficient. The first scaling multiple is then determined based on the target head movement coefficient and the target body movement coefficient.
Here, the body displacement is determined from the location information and the historical location information. Each head movement coefficient in the third correspondence is greater than 1 and is positively correlated with the angular speed; each body movement coefficient in the fourth correspondence is greater than 1 and is positively correlated with the displacement.
Specifically, the product of the target head movement coefficient and the target body movement coefficient can be determined as the first scaling multiple; alternatively, the product of the target head movement coefficient, the target body movement coefficient, and a preset constant is determined as the first scaling multiple. The preset constant is a preconfigured parameter used to convert the target head movement coefficient and the target body movement coefficient into a scaling multiple.
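The product form of the fourth situation is a one-liner; the coefficient values in the test are illustrative, and the preset constant defaults to 1.0 as an assumption.

```python
def first_scaling_multiple_from_coefficients(head_coeff, body_coeff, constant=1.0):
    """Fourth situation: the product of the target head movement coefficient and
    the target body movement coefficient (each > 1), optionally multiplied by a
    preset constant that converts the product into a scaling multiple."""
    return head_coeff * body_coeff * constant
```

Because both coefficients exceed 1 and grow with their inputs, the resulting multiple grows when either the head turns faster or the body moves farther.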
Step 204: based on the head pose, determine the scene model of the first virtual scene to be rendered.
When the user is in different head poses, the user's field of view differs accordingly, and so does the virtual scene that needs to be rendered for the user. For example, suppose the user is in a virtual grassland: when the head pose is tilted up, the first virtual scene to be rendered should be a sky scene, and when the head pose is tilted down, it should be a grass scene. Or suppose the user is in a city: when the head pose is tilted up, the first virtual scene to be rendered should be building rooftops, and when the head pose is tilted down, it should be a road scene.
In the embodiment of the present application, the first virtual scene to be rendered can be determined according to the head pose of the user, and the scene model of the first virtual scene to be rendered can then be created. The scene model can be a three-dimensional model.
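The pose-to-scene mapping in the examples above can be sketched as a simple pitch test; the 30-degree thresholds and scene names are illustrative assumptions, not values from this application.

```python
def scene_for_pitch(pitch_deg, environment="grassland"):
    """Map the head pitch angle to the scene to render: looking up gives
    sky/rooftops, looking down gives grass/road, per the examples above.
    Thresholds and labels are illustrative."""
    if pitch_deg > 30:
        return "sky" if environment == "grassland" else "rooftops"
    if pitch_deg < -30:
        return "grass" if environment == "grassland" else "road"
    return "horizon"
```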
Step 205: obtain the texture image that matches the scene model and is scaled by the first scaling multiple.
After the scene model of the first virtual scene to be rendered is determined, the texture image matched to the scene model can be obtained, so that the scene model can be rendered based on the obtained texture image to draw the textured virtual scene.
Specifically, texture data can be loaded before rendering. When rendering is needed, the texture image that matches the scene model and is scaled by the first scaling multiple is selected from the loaded texture data, so that the scene model can be rendered based on the obtained texture image. For example, texture images of different scaling multiples can be preloaded; after the first scaling multiple is determined from the motion data, the texture image of the first scaling multiple can be chosen from the loaded texture images for rendering.
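The preloading of a chain of progressively scaled textures can be sketched without any graphics API; this fragment (an illustration, not the application's implementation) builds the chain by 2 × 2 average pooling, which is the kind of preprocessing the Mipmap technique described below performs.

```python
def downsample(tex):
    """Halve a square texture (a list of lists of floats) by averaging
    each 2x2 block of texels into one."""
    n = len(tex) // 2
    return [[(tex[2 * r][2 * c] + tex[2 * r][2 * c + 1] +
              tex[2 * r + 1][2 * c] + tex[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(n)] for r in range(n)]

def build_texture_chain(tex):
    """Chain of textures from full size down to 1x1, all kept in memory so a
    level can be picked per the current scaling multiple at render time."""
    chain = [tex]
    while len(chain[-1]) > 1:
        chain.append(downsample(chain[-1]))
    return chain
```

At render time, the level whose index matches the exponent of the first scaling multiple (2^n → level n) is the one selected.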
In practical applications, the Mipmap technique (a computer-graphics technique) can be used to process the texture data of the rendered scene hierarchically. When a texture is loaded, not just one texture is loaded but a series of textures from large to small, and OpenGL (Open Graphics Library) then selects the most suitable texture according to the given state. For example, the original texture image is repeatedly scaled down by a factor of 2 using the Mipmap technique until a 1 × 1 texture image is reached, yielding the series of texture images shown in Figure 3. All of these textures are stored, and when scene rendering is needed, a suitable texture image can be selected according to the acquired motion data for scene rendering.
Step 206: render the scene model based on the obtained texture image.
That is, texture rendering can be applied to the scene model according to the obtained texture image, so as to draw a more realistic first virtual scene.
In the embodiment of the present application, using the characteristic that the human eye does not need a high-resolution view while the head turns quickly or the body moves quickly, the motion state of the user is judged from the acquired motion data such as position and pose, and the rendering parameters are adaptively adjusted in real time according to the motion state. This greatly reduces the occupation of rendering resources and the computational cost, saves computation time, reduces rendering delay, and improves the VR/AR experience.
Further, after rendering the first virtual scene of the current time based on the head pose and the texture image of the first scaling multiple, the smart device can also predict, based on the motion data of the current time, the motion data of a second target moment to obtain predicted motion data. The second target moment is after the current time and separated from it by a preset duration. The predicted motion data includes a predicted angular velocity, or a predicted angular velocity and predicted location information.
Specifically, the motion data of the current time may include the movement velocity, the acceleration, and the location information of the body. Based on the motion data of the current time, the location information of the second target moment can be predicted by the following formula (1) or (2) to obtain the predicted location information:

s_t = s_0 + v_0 × t (1)

where s_0 is the location information of the current time, v_0 is the movement velocity of the current time, t is the preset duration, and s_t is the predicted location information.

s_t = s_0 + v_0 × t + (1/2) × a × t² (2)

where s_0 is the location information of the current time, v_0 is the movement velocity of the current time, a is the acceleration of the current time, t is the preset duration, and s_t is the predicted location information.
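Formulas (1) and (2) translate directly into code; the sketch below applies formula (1) when no acceleration is supplied and formula (2) otherwise.

```python
def predict_position(s0, v0, t, a=None):
    """Predicted location at the second target moment, a preset duration t
    after the current time: formula (1) without acceleration, formula (2)
    with acceleration."""
    if a is None:
        return s0 + v0 * t                    # s_t = s_0 + v_0 * t
    return s0 + v0 * t + 0.5 * a * t * t      # s_t = s_0 + v_0 * t + a * t^2 / 2
```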
Further, the historical location information can also be optimized, for example by smoothing it or removing jitter, to obtain the location information of the current time. For example, the 3 most recent pieces of historical location information can be averaged and used as the location information of the current time, which is then used for position prediction. The optimization method may vary and is not restricted here.
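The averaging option mentioned above is a simple moving average; a sketch for one coordinate axis:

```python
def smoothed_current_position(history, window=3):
    """Average the `window` most recent historical positions (here, scalar
    values along one axis) and use the result as the current position."""
    recent = history[-window:]
    return sum(recent) / len(recent)
```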
Further, after obtaining the predicted motion data, the smart device can also determine, based on the predicted motion data, the predicted head pose of the second target moment and whether head movement and/or body movement occurs. When it is determined, based on the predicted motion data, that head movement and/or body movement occurs, a fourth scaling multiple is obtained based on the predicted motion data; then, at the second target moment, the second virtual scene of the second target moment is rendered based on the predicted head pose and the texture image of the fourth scaling multiple.
The implementation of rendering the second virtual scene of the second target moment based on the predicted head pose and the texture image of the fourth scaling multiple is similar to the above implementation of rendering the first virtual scene based on the head pose and the texture image of the first scaling multiple, and is not described again here.
In the embodiment of the present application, the motion data collected at the current time can be obtained; based on the motion data, the head pose is determined, as well as whether head movement and/or body movement occurs; when it is determined, based on the motion data, that head movement and/or body movement occurs, a first scaling multiple is obtained based on the motion data; and the first virtual scene of the current time is then rendered based on the head pose and the texture image of the first scaling multiple. That is, when head movement or body movement is detected, the virtual scene can be rendered based on the scaled texture image, i.e., scene rendering is performed with a low-resolution texture. Since the human eye does not need a high-resolution image during head movement or body movement, rendering with a low-resolution texture image can effectively reduce the amount of computation and improve rendering efficiency. In this way, scene rendering is performed with a high-resolution texture when the user is static and with a low-resolution texture when the user is in motion, and different texture scaling levels are provided for different movement speeds, so that scene rendering can be performed with the texture scaling level corresponding to the user's movement speed.
In addition, when a user directly watches the external world with the eyes and the user's gaze shifts quickly, the picture of the viewed object sweeps across the retina, and the image of the external world is in fact fast and blurred. Based on this, if during quick head rotation or quick body movement the clarity of the rendered scene were identical to the clarity rendered when no head rotation or eye rotation occurs, the user's eyes would feel a sense of mismatch, would become fatigued from processing more information, and might even experience dizziness. In the embodiment of the present application, since scene rendering is performed with a low-resolution texture when head rotation and/or body movement is detected, the clarity of the rendered virtual scene is lower than the clarity rendered when no head rotation or body movement occurs. In this way, human vision can be simulated more realistically, effectively relieving physiological discomfort of the user such as visual fatigue and dizziness.
Next, the rendering apparatus provided by the embodiments of the present application is introduced.
Fig. 4 is a structural block diagram of a rendering apparatus 400 provided by an embodiment of the present application. The apparatus can be integrated into the smart device of the previous embodiments. Referring to Fig. 4, the apparatus includes:
a first obtaining module 401, configured to obtain the motion data collected at the current time;
a first determining module 402, configured to determine, based on the motion data, the head pose and whether head movement and/or body movement occurs;
a second obtaining module 403, configured to obtain a first scaling multiple based on the motion data when it is determined, based on the motion data, that head movement and/or body movement occurs;
a first rendering module 404, configured to render the first virtual scene to be rendered based on the head pose and the texture image of the first scaling multiple.
Optionally, the motion data includes the angular velocity collected by the IMU located on the head and the location information collected by the location tracking device;
the first determining module 402 is specifically configured to:
compute the attitude solution from the angular velocity to obtain the head pose;
determine the body displacement according to the location information and the historical location information, where the historical location information refers to the location information collected by the location tracking device at a first target moment before the current time and separated from it by a preset duration;
judge whether the angular speed is greater than the angular speed threshold, and judge whether the body displacement is greater than the displacement threshold;
if the angular speed is greater than the angular speed threshold, determine that head movement occurs; if the body displacement is greater than the displacement threshold, determine that body movement occurs.
Optionally, the motion data includes the angular velocity collected by the IMU located on the head and the location information collected by the location tracking device;
the second obtaining module 403 is specifically configured to:
determine a second scaling multiple corresponding to the angular speed based on the stored first correspondence between angular speed and scaling multiple, and determine a third scaling multiple corresponding to the body displacement based on the stored second correspondence between displacement and scaling multiple, where the body displacement is determined from the location information and the historical location information;
determine the numerically larger of the second scaling multiple and the third scaling multiple as the first scaling multiple.
Optionally, the motion data includes the angular velocity collected by the IMU located on the head and the location information collected by the location tracking device;
the second obtaining module 403 is specifically configured to:
determine a target head movement coefficient corresponding to the angular speed based on the stored third correspondence between angular speed and head movement coefficient, and determine a target body movement coefficient corresponding to the body displacement based on the stored fourth correspondence between displacement and body movement coefficient, where the body displacement is determined from the location information and the historical location information;
here, each head movement coefficient in the third correspondence is greater than 1 and is positively correlated with the angular speed, and each body movement coefficient in the fourth correspondence is greater than 1 and is positively correlated with the displacement;
determine the first scaling multiple based on the target head movement coefficient and the target body movement coefficient.
Optionally, the first rendering module 404 is specifically configured to:
determine, based on the head pose, the scene model of the first virtual scene to be rendered;
obtain the texture image that matches the scene model and is scaled by the first scaling multiple;
render the scene model based on the obtained texture image.
Optionally, the apparatus further includes:
a prediction module, configured to predict, based on the motion data, the motion data of a second target moment to obtain predicted motion data, where the second target moment is after the current time and separated from it by a preset duration;
a second determining module, configured to determine, based on the predicted motion data, the predicted head pose of the second target moment and whether head movement and/or body movement occurs;
a third obtaining module, configured to obtain a fourth scaling multiple based on the predicted motion data when it is determined, based on the predicted motion data, that head movement and/or body movement occurs;
a second rendering module, configured to render, at the second target moment, the second virtual scene to be rendered based on the predicted head pose and the texture image of the fourth scaling multiple.
As for the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method and will not be elaborated here.
In the embodiment of the present application, the motion data collected at the current time can be obtained; based on the motion data, the head pose is determined, as well as whether head movement and/or body movement occurs; when it is determined, based on the motion data, that head movement and/or body movement occurs, a first scaling multiple is obtained based on the motion data; and the first virtual scene of the current time is then rendered based on the head pose and the texture image of the first scaling multiple. That is, when head movement or body movement is detected, the virtual scene can be rendered based on the scaled texture image, i.e., rendered with a low-resolution texture. Since the human eye does not need a high-resolution image during head movement or body movement, rendering with a low-resolution texture image can effectively reduce the amount of computation and improve rendering efficiency.
It should be understood that when the rendering apparatus provided by the above embodiment performs rendering, the division into the above functional modules is only used as an example. In practical applications, the above functions can be assigned to different functional modules as needed, i.e., the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the rendering apparatus provided by the above embodiment belongs to the same concept as the rendering method embodiment; its specific implementation process is detailed in the method embodiment and is not repeated here.
Fig. 5 is a structural block diagram of a smart device 500 provided by an embodiment of the present application. The smart device 500 can be a laptop, a desktop computer, a smartphone, a tablet computer, or the like. The smart device 500 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In general, the smart device 500 includes a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 501 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor. The main processor is the processor for handling data in the awake state, also called the CPU (Central Processing Unit); the coprocessor is a low-power processor for handling data in the standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 501 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 502 may include one or more computer-readable storage media, which may be non-transitory. The memory 502 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 502 is used to store at least one instruction, which is executed by the processor 501 to implement the rendering method for a virtual scene provided by the method embodiments of the present application.
In some embodiments, the smart device 500 optionally further includes a peripheral device interface 503 and at least one peripheral device. The processor 501, the memory 502, and the peripheral device interface 503 can be connected by a bus or signal lines. Each peripheral device can be connected to the peripheral device interface 503 by a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 504, a touch display screen 505, a camera 506, an audio circuit 507, a positioning component 508, and a power supply 509.
The peripheral device interface 503 can be used to connect at least one I/O (Input/Output) related peripheral device to the processor 501 and the memory 502. In some embodiments, the processor 501, the memory 502, and the peripheral device interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral device interface 503 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 504 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 504 communicates with communication networks and other communication devices through electromagnetic signals. It converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 504 can communicate with other terminals through at least one wireless communication protocol, including but not limited to the World Wide Web, metropolitan area networks, intranets, the generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 504 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 505 is used to display the UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, it also has the ability to collect touch signals on or above its surface. A touch signal can be input to the processor 501 as a control signal for processing. At this point, the display screen 505 can also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there can be one display screen 505, arranged on the front panel of the smart device 500; in other embodiments, there can be at least two display screens 505, arranged on different surfaces of the smart device 500 or in a folding design; in still other embodiments, the display screen 505 can be a flexible display screen arranged on a curved or folded surface of the smart device 500. The display screen 505 can even be set in a non-rectangular irregular shape, i.e., a shaped screen. The display screen 505 can be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 506 is used to collect images or video. Optionally, the camera assembly 506 includes a front camera and a rear camera. In general, the front camera is arranged on the front panel of the terminal and the rear camera on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, or a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize the background blur function, the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions, or other fused shooting functions can be realized. In some embodiments, the camera assembly 506 may also include a flash, which can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to the combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 507 may include a microphone and a speaker. The microphone is configured to acquire sound waves from the user and the environment, convert the sound waves into electrical signals, and input them to the processor 501 for processing, or input them to the radio frequency circuit 504 to realize voice communication. For stereo acquisition or noise reduction, there may be multiple microphones, disposed respectively at different parts of the smart device 500. The microphone may also be an array microphone or an omnidirectional acquisition microphone. The speaker is configured to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The speaker may be a conventional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 507 may further include a headphone jack.
The positioning component 508 is configured to locate the current geographic position of the smart device 500 to implement navigation or LBS (Location Based Service). The positioning component 508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 509 is configured to supply power to the components in the smart device 500. The power supply 509 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charging technology.
In some embodiments, the smart device 500 further includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: an acceleration sensor 511, a gyroscope sensor 512, a pressure sensor 513, a fingerprint sensor 514, an optical sensor 515, and a proximity sensor 516.
The acceleration sensor 511 can detect the magnitude of acceleration along the three coordinate axes of the coordinate system established with respect to the smart device 500. For example, the acceleration sensor 511 may be used to detect the components of gravitational acceleration along the three coordinate axes. The processor 501 may control the touch display screen 505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 511. The acceleration sensor 511 may also be used to acquire motion data for games or for the user.
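As a minimal sketch of the landscape/portrait decision described above: the processor compares the gravity components along the device's axes and picks the view accordingly. The axis convention and the function name are illustrative assumptions, not taken from the patent text.

```python
def choose_orientation(ax: float, ay: float) -> str:
    """Pick a display orientation from the gravity components (m/s^2)
    along the device's x (short edge) and y (long edge) axes.

    When gravity projects mostly onto the y axis the device is upright,
    so a portrait view is used; otherwise a landscape view is used.
    """
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

# Device held upright: gravity falls mostly along y.
print(choose_orientation(0.5, 9.7))   # portrait
# Device turned on its side: gravity falls mostly along x.
print(choose_orientation(9.7, 0.5))   # landscape
```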
The gyroscope sensor 512 can detect the body direction and rotation angle of the smart device 500, and may cooperate with the acceleration sensor 511 to acquire the user's 3D actions on the smart device 500. Based on the data acquired by the gyroscope sensor 512, the processor 501 can implement the following functions: motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 513 may be disposed on the side frame of the smart device 500 and/or under the touch display screen 505. When the pressure sensor 513 is disposed on the side frame of the smart device 500, it can detect the user's grip signal on the smart device 500, and the processor 501 performs left/right-hand recognition or quick operations according to the grip signal acquired by the pressure sensor 513. When the pressure sensor 513 is disposed under the touch display screen 505, the processor 501 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 505. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 514 is configured to acquire the user's fingerprint, and the processor 501 identifies the user's identity from the fingerprint acquired by the fingerprint sensor 514, or the fingerprint sensor 514 itself identifies the user's identity from the acquired fingerprint. When the user's identity is identified as a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations, the sensitive operations including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 514 may be disposed on the front, back, or side of the smart device 500. When a physical button or a manufacturer logo is provided on the smart device 500, the fingerprint sensor 514 may be integrated with the physical button or the manufacturer logo.
The optical sensor 515 is configured to acquire the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the touch display screen 505 according to the ambient light intensity acquired by the optical sensor 515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 505 is decreased. In another embodiment, the processor 501 may also dynamically adjust the shooting parameters of the camera assembly 506 according to the ambient light intensity acquired by the optical sensor 515.
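The brightness control in the first embodiment can be sketched as a monotone mapping from ambient light to a display brightness level. The linear mapping, the lux range, and the brightness scale below are illustrative assumptions; the patent only requires that brightness rise and fall with ambient light.

```python
def display_brightness(ambient_lux, min_level=10, max_level=255, max_lux=1000.0):
    """Map ambient light intensity (lux) to a display brightness level.

    Higher ambient light yields higher brightness, lower ambient light
    yields lower brightness, as the embodiment describes. The linear
    shape and the numeric ranges are placeholders.
    """
    ratio = min(max(ambient_lux / max_lux, 0.0), 1.0)  # clamp to [0, 1]
    return round(min_level + ratio * (max_level - min_level))

print(display_brightness(1000))  # bright environment -> 255
print(display_brightness(0))     # dark environment  -> 10
```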
The proximity sensor 516, also referred to as a distance sensor, is generally disposed on the front panel of the smart device 500. The proximity sensor 516 is configured to acquire the distance between the user and the front of the smart device 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front of the smart device 500 is gradually decreasing, the processor 501 controls the touch display screen 505 to switch from the screen-on state to the screen-off state; when the proximity sensor 516 detects that the distance between the user and the front of the smart device 500 is gradually increasing, the processor 501 controls the touch display screen 505 to switch from the screen-off state to the screen-on state.
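The screen on/off switching above can be sketched as a rule over a short window of proximity readings. The window-based trend test and the units are illustrative assumptions; the embodiment only specifies "gradually decreasing" and "gradually increasing" distances.

```python
def update_screen_state(distances_mm, state="on"):
    """Return the new screen state after a sequence of proximity readings.

    If the readings are monotonically decreasing (user approaching the
    front panel) the screen is switched off; if monotonically increasing,
    it is switched back on; otherwise the current state is kept.
    """
    if len(distances_mm) < 2:
        return state
    if all(b < a for a, b in zip(distances_mm, distances_mm[1:])):
        return "off"
    if all(b > a for a, b in zip(distances_mm, distances_mm[1:])):
        return "on"
    return state

print(update_screen_state([80, 50, 20]))         # off
print(update_screen_state([20, 50, 80], "off"))  # on
```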
That is to say, the embodiments of this application not only provide a virtual scene rendering apparatus, which can be applied to the above smart device 500 and includes a processor and a memory for storing processor-executable instructions, where the processor is configured to perform the virtual scene rendering method in the embodiments shown in Fig. 1 and Fig. 2; the embodiments of this application also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the virtual scene rendering method in the embodiments shown in Fig. 1 and Fig. 2.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing are merely optional embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included within the protection scope of this application.
Claims (10)
1. A virtual scene rendering method, characterized in that the method comprises:
obtaining motion data collected at a current moment;
determining, based on the motion data, a head posture and whether head movement and/or body movement occurs;
obtaining a first scaling multiple based on the motion data when it is determined, based on the motion data, that head movement and/or body movement occurs;
rendering a first virtual scene to be rendered based on the head posture and a texture image scaled by the first scaling multiple.
2. The method according to claim 1, characterized in that the motion data comprises an angular speed acquired by an inertial measurement unit (IMU) located on the head and position information acquired by a position tracking device;
the determining, based on the motion data, a head posture and whether head movement and/or body movement occurs comprises:
performing attitude computation on the angular speed to obtain the head posture;
determining a body displacement based on the position information and historical position information, the historical position information referring to position information acquired by the position tracking device at a first target moment that is before the current moment and is separated from the current moment by a preset duration;
judging whether the angular speed is greater than an angular speed threshold, and judging whether the body displacement is greater than a displacement threshold;
if the angular speed is greater than the angular speed threshold, determining that the head movement occurs, and if the body displacement is greater than the displacement threshold, determining that the body movement occurs.
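The threshold test in claim 2 can be sketched as follows. The displacement is computed as the Euclidean distance between the current and historical tracker positions; the threshold values and units are illustrative placeholders, not taken from the patent.

```python
import math

def detect_motion(angular_speed, position, historical_position,
                  angular_speed_threshold=0.5, displacement_threshold=0.05):
    """Decide whether head movement and/or body movement occurred.

    angular_speed: head rotation rate from the IMU (rad/s, assumed).
    position / historical_position: (x, y, z) tuples from the position
    tracking device (m, assumed), sampled a preset duration apart.
    Returns (head_movement, body_movement).
    """
    displacement = math.dist(position, historical_position)
    head_moving = angular_speed > angular_speed_threshold
    body_moving = displacement > displacement_threshold
    return head_moving, body_moving

print(detect_motion(1.2, (0.0, 0.0, 0.0), (0.2, 0.0, 0.0)))  # (True, True)
```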
3. The method according to claim 1, characterized in that the motion data comprises an angular speed acquired by an IMU located on the head and position information acquired by a position tracking device;
the obtaining a first scaling multiple based on the motion data comprises:
determining a second scaling multiple corresponding to the angular speed based on a stored first correspondence between angular speed and scaling multiple, and determining a third scaling multiple corresponding to a body displacement based on a stored second correspondence between displacement and scaling multiple, the body displacement being determined according to the position information and historical position information;
determining the scaling multiple with the larger value of the second scaling multiple and the third scaling multiple as the first scaling multiple.
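One way to read claim 3 is as two interval lookup tables followed by a max. The table contents below are illustrative assumptions; the patent only requires stored correspondences and taking the larger of the two resulting multiples.

```python
import bisect

# Illustrative correspondences (not from the patent): each entry maps the
# lower bound of an interval to the scaling multiple used on that interval.
ANGULAR_SPEED_TABLE = [(0.0, 1.0), (0.5, 1.5), (1.0, 2.0)]    # rad/s -> multiple
DISPLACEMENT_TABLE = [(0.0, 1.0), (0.05, 1.25), (0.2, 1.75)]  # m -> multiple

def lookup(table, value):
    """Return the scaling multiple of the interval containing value."""
    keys = [k for k, _ in table]
    idx = bisect.bisect_right(keys, value) - 1
    return table[max(idx, 0)][1]

def first_scaling_multiple(angular_speed, displacement):
    second = lookup(ANGULAR_SPEED_TABLE, angular_speed)  # first correspondence
    third = lookup(DISPLACEMENT_TABLE, displacement)     # second correspondence
    return max(second, third)                            # larger value wins

print(first_scaling_multiple(0.7, 0.25))  # max(1.5, 1.75) -> 1.75
```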
4. The method according to claim 1, characterized in that the motion data comprises an angular speed acquired by an IMU located on the head and position information acquired by a position tracking device;
the obtaining a first scaling multiple based on the motion data comprises:
determining a target head movement coefficient corresponding to the angular speed based on a stored third correspondence between angular speed and head movement coefficient, and determining a target body movement coefficient corresponding to a body displacement based on a stored fourth correspondence between displacement and body movement coefficient, the body displacement being determined according to the position information and historical position information;
wherein each head movement coefficient in the third correspondence is greater than 1, the head movement coefficients in the third correspondence are positively correlated with angular speed, each body movement coefficient in the fourth correspondence is greater than 1, and the body movement coefficients in the fourth correspondence are positively correlated with displacement;
determining the first scaling multiple based on the target head movement coefficient and the target body movement coefficient.
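A sketch of claim 4 under stated assumptions: the two coefficient mappings below are illustrative monotone functions (the claim only requires coefficients greater than 1 that grow with angular speed and displacement), and combining the coefficients by multiplication is one assumption consistent with "based on" both.

```python
def head_movement_coefficient(angular_speed):
    """Coefficient > 1 that grows with angular speed (illustrative mapping)."""
    return 1.0 + 0.5 * max(angular_speed, 0.0)

def body_movement_coefficient(displacement):
    """Coefficient > 1 that grows with displacement (illustrative mapping)."""
    return 1.0 + 2.0 * max(displacement, 0.0)

def first_scaling_multiple(angular_speed, displacement):
    # The claim does not specify how the two coefficients combine; the
    # product is one assumption consistent with using both.
    return (head_movement_coefficient(angular_speed)
            * body_movement_coefficient(displacement))

m = first_scaling_multiple(1.0, 0.1)
print(round(m, 2))  # 1.5 * 1.2 -> 1.8
```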
5. The method according to claim 1, characterized in that the rendering a first virtual scene to be rendered based on the head posture and the texture image scaled by the first scaling multiple comprises:
determining a scene model of the first virtual scene to be rendered based on the head posture;
obtaining a texture image that matches the scene model and is scaled according to the first scaling multiple;
rendering the scene model based on the obtained texture image.
6. The method according to claim 1, characterized in that after the rendering a first virtual scene to be rendered based on the head posture and the texture image scaled by the first scaling multiple, the method further comprises:
predicting, based on the motion data, motion data at a second target moment to obtain predicted motion data, the second target moment being after the current moment and separated from the current moment by a preset duration;
determining, based on the predicted motion data, a predicted head posture at the second target moment and whether head movement and/or body movement occurs;
obtaining a fourth scaling multiple based on the predicted motion data when it is determined, based on the predicted motion data, that head movement and/or body movement occurs;
rendering, at the second target moment, a second virtual scene to be rendered based on the predicted head posture and a texture image scaled by the fourth scaling multiple.
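The prediction step of claim 6 can be sketched with a constant-velocity extrapolation: the claim only requires that motion data at the second target moment be predicted from current data, so the linear model below (and holding angular speed constant over the short horizon) is one possible prediction algorithm, not the patent's.

```python
def predict_motion_data(angular_speed, position, historical_position,
                        preset_duration):
    """Extrapolate motion data one preset duration into the future.

    position / historical_position: (x, y, z) tracker samples taken a
    preset duration apart. Velocity is estimated from these two samples
    and the position is extrapolated linearly; the angular speed is
    assumed to stay constant over the short horizon.
    """
    velocity = [(c - h) / preset_duration
                for c, h in zip(position, historical_position)]
    predicted_position = [c + v * preset_duration
                          for c, v in zip(position, velocity)]
    return angular_speed, tuple(predicted_position)

w, p = predict_motion_data(0.8, (1.0, 0.0, 0.0), (0.8, 0.0, 0.0), 0.02)
print(w, tuple(round(x, 3) for x in p))  # 0.8 (1.2, 0.0, 0.0)
```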
7. A virtual scene rendering apparatus, characterized in that the apparatus comprises:
a first obtaining module, configured to obtain motion data collected at a current moment;
a first determining module, configured to determine, based on the motion data, a head posture and whether head movement and/or body movement occurs;
a second obtaining module, configured to obtain a first scaling multiple based on the motion data when it is determined, based on the motion data, that head movement and/or body movement occurs;
a first rendering module, configured to render a first virtual scene to be rendered based on the head posture and a texture image scaled by the first scaling multiple.
8. The apparatus according to claim 7, characterized in that the motion data comprises an angular speed acquired by an IMU located on the head and position information acquired by a position tracking device;
the first determining module is specifically configured to:
perform attitude computation on the angular speed to obtain the head posture;
determine a body displacement according to the position information and historical position information, the historical position information referring to position information acquired by the position tracking device at a first target moment that is before the current moment and is separated from the current moment by a preset duration;
judge whether the angular speed is greater than an angular speed threshold, and judge whether the body displacement is greater than a displacement threshold;
if the angular speed is greater than the angular speed threshold, determine that the head movement occurs, and if the body displacement is greater than the displacement threshold, determine that the body movement occurs.
9. The apparatus according to claim 7, characterized in that the motion data comprises an angular speed acquired by an IMU located on the head and position information acquired by a position tracking device;
the second obtaining module is specifically configured to:
determine a second scaling multiple corresponding to the angular speed based on a stored first correspondence between angular speed and scaling multiple, and determine a third scaling multiple corresponding to a body displacement based on a stored second correspondence between displacement and scaling multiple, the body displacement being determined according to the position information and historical position information;
determine the scaling multiple with the larger value of the second scaling multiple and the third scaling multiple as the first scaling multiple.
10. A smart device, characterized in that the smart device comprises:
a processor, the processor including a graphics processing unit (GPU);
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the steps of the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811639195.9A CN109712224B (en) | 2018-12-29 | 2018-12-29 | Virtual scene rendering method and device and intelligent device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811639195.9A CN109712224B (en) | 2018-12-29 | 2018-12-29 | Virtual scene rendering method and device and intelligent device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109712224A true CN109712224A (en) | 2019-05-03 |
CN109712224B CN109712224B (en) | 2023-05-16 |
Family
ID=66260163
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811639195.9A Active CN109712224B (en) | 2018-12-29 | 2018-12-29 | Virtual scene rendering method and device and intelligent device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109712224B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110022452A (en) * | 2019-05-16 | 2019-07-16 | 深圳市芯动电子科技有限公司 | A kind of video pumping frame method and system suitable for holographic display |
CN110728749A (en) * | 2019-10-10 | 2020-01-24 | 青岛大学附属医院 | Virtual three-dimensional image display system and method |
CN110766780A (en) * | 2019-11-06 | 2020-02-07 | 北京无限光场科技有限公司 | Method and device for rendering room image, electronic equipment and computer readable medium |
CN110910509A (en) * | 2019-11-21 | 2020-03-24 | Oppo广东移动通信有限公司 | Image processing method, electronic device, and storage medium |
CN110930307A (en) * | 2019-10-31 | 2020-03-27 | 北京视博云科技有限公司 | Image processing method and device |
CN111008934A (en) * | 2019-12-25 | 2020-04-14 | 上海米哈游天命科技有限公司 | Scene construction method, device, equipment and storage medium |
WO2020259402A1 (en) * | 2019-06-24 | 2020-12-30 | 京东方科技集团股份有限公司 | Method and device for image processing, terminal device, medium, and wearable system |
CN112380989A (en) * | 2020-11-13 | 2021-02-19 | 歌尔光学科技有限公司 | Head-mounted display equipment, data acquisition method and device thereof, and host |
CN113448437A (en) * | 2021-06-19 | 2021-09-28 | 刘芮伶 | Virtual reality image display method based on head-mounted display device and electronic equipment |
CN113515193A (en) * | 2021-05-17 | 2021-10-19 | 聚好看科技股份有限公司 | Model data transmission method and device |
CN113797530A (en) * | 2021-06-11 | 2021-12-17 | 荣耀终端有限公司 | Image prediction method, electronic device and storage medium |
CN113965768A (en) * | 2021-09-10 | 2022-01-21 | 北京达佳互联信息技术有限公司 | Live broadcast room information display method and device, electronic equipment and server |
CN114167992A (en) * | 2021-12-17 | 2022-03-11 | 深圳创维数字技术有限公司 | Display picture rendering method, electronic device and readable storage medium |
CN115661373A (en) * | 2022-12-26 | 2023-01-31 | 天津沄讯网络科技有限公司 | Rotary equipment fault monitoring and early warning system and method based on edge algorithm |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170018121A1 (en) * | 2015-06-30 | 2017-01-19 | Ariadne's Thread (Usa), Inc. (Dba Immerex) | Predictive virtual reality display system with post rendering correction |
CN107516335A (en) * | 2017-08-14 | 2017-12-26 | 歌尔股份有限公司 | The method for rendering graph and device of virtual reality |
CN107590859A (en) * | 2017-09-01 | 2018-01-16 | 广州励丰文化科技股份有限公司 | A kind of mixed reality picture processing method and service equipment |
US20180075654A1 (en) * | 2016-09-12 | 2018-03-15 | Intel Corporation | Hybrid rendering for a wearable display attached to a tethered computer |
CN108305326A (en) * | 2018-01-22 | 2018-07-20 | 中国人民解放军陆军航空兵学院 | A method of mixing virtual reality |
- 2018-12-29 CN CN201811639195.9A patent/CN109712224B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170018121A1 (en) * | 2015-06-30 | 2017-01-19 | Ariadne's Thread (Usa), Inc. (Dba Immerex) | Predictive virtual reality display system with post rendering correction |
US20180075654A1 (en) * | 2016-09-12 | 2018-03-15 | Intel Corporation | Hybrid rendering for a wearable display attached to a tethered computer |
CN107516335A (en) * | 2017-08-14 | 2017-12-26 | 歌尔股份有限公司 | The method for rendering graph and device of virtual reality |
CN107590859A (en) * | 2017-09-01 | 2018-01-16 | 广州励丰文化科技股份有限公司 | A kind of mixed reality picture processing method and service equipment |
CN108305326A (en) * | 2018-01-22 | 2018-07-20 | 中国人民解放军陆军航空兵学院 | A method of mixing virtual reality |
Non-Patent Citations (1)
Title |
---|
Ding Jianfei (丁剑飞) et al.: "A general GPU-based rendering algorithm for autostereoscopic displays", Journal of System Simulation (《系统仿真学报》) * |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110022452A (en) * | 2019-05-16 | 2019-07-16 | 深圳市芯动电子科技有限公司 | A kind of video pumping frame method and system suitable for holographic display |
CN110022452B (en) * | 2019-05-16 | 2021-04-30 | 深圳市芯动电子科技有限公司 | Video frame extraction method and system suitable for holographic display device |
WO2020259402A1 (en) * | 2019-06-24 | 2020-12-30 | 京东方科技集团股份有限公司 | Method and device for image processing, terminal device, medium, and wearable system |
CN110728749A (en) * | 2019-10-10 | 2020-01-24 | 青岛大学附属医院 | Virtual three-dimensional image display system and method |
CN110728749B (en) * | 2019-10-10 | 2023-11-07 | 青岛大学附属医院 | Virtual three-dimensional image display system and method |
CN110930307A (en) * | 2019-10-31 | 2020-03-27 | 北京视博云科技有限公司 | Image processing method and device |
CN110766780A (en) * | 2019-11-06 | 2020-02-07 | 北京无限光场科技有限公司 | Method and device for rendering room image, electronic equipment and computer readable medium |
CN110910509A (en) * | 2019-11-21 | 2020-03-24 | Oppo广东移动通信有限公司 | Image processing method, electronic device, and storage medium |
CN111008934A (en) * | 2019-12-25 | 2020-04-14 | 上海米哈游天命科技有限公司 | Scene construction method, device, equipment and storage medium |
CN111008934B (en) * | 2019-12-25 | 2023-08-29 | 上海米哈游天命科技有限公司 | Scene construction method, device, equipment and storage medium |
CN112380989B (en) * | 2020-11-13 | 2023-01-24 | 歌尔科技有限公司 | Head-mounted display equipment, data acquisition method and device thereof, and host |
CN112380989A (en) * | 2020-11-13 | 2021-02-19 | 歌尔光学科技有限公司 | Head-mounted display equipment, data acquisition method and device thereof, and host |
US11836286B2 (en) | 2020-11-13 | 2023-12-05 | Goertek Inc. | Head-mounted display device and data acquisition method, apparatus, and host computer thereof |
CN113515193A (en) * | 2021-05-17 | 2021-10-19 | 聚好看科技股份有限公司 | Model data transmission method and device |
CN113515193B (en) * | 2021-05-17 | 2023-10-27 | 聚好看科技股份有限公司 | Model data transmission method and device |
CN113797530A (en) * | 2021-06-11 | 2021-12-17 | 荣耀终端有限公司 | Image prediction method, electronic device and storage medium |
CN113797530B (en) * | 2021-06-11 | 2022-07-12 | 荣耀终端有限公司 | Image prediction method, electronic device and storage medium |
CN113448437A (en) * | 2021-06-19 | 2021-09-28 | 刘芮伶 | Virtual reality image display method based on head-mounted display device and electronic equipment |
CN113965768A (en) * | 2021-09-10 | 2022-01-21 | 北京达佳互联信息技术有限公司 | Live broadcast room information display method and device, electronic equipment and server |
CN113965768B (en) * | 2021-09-10 | 2024-01-02 | 北京达佳互联信息技术有限公司 | Live broadcasting room information display method and device, electronic equipment and server |
CN114167992A (en) * | 2021-12-17 | 2022-03-11 | 深圳创维数字技术有限公司 | Display picture rendering method, electronic device and readable storage medium |
CN115661373A (en) * | 2022-12-26 | 2023-01-31 | 天津沄讯网络科技有限公司 | Rotary equipment fault monitoring and early warning system and method based on edge algorithm |
CN115661373B (en) * | 2022-12-26 | 2023-04-07 | 天津沄讯网络科技有限公司 | Rotary equipment fault monitoring and early warning system and method based on edge algorithm |
Also Published As
Publication number | Publication date |
---|---|
CN109712224B (en) | 2023-05-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109712224A (en) | Rendering method, device and the smart machine of virtual scene | |
US11205282B2 (en) | Relocalization method and apparatus in camera pose tracking process and storage medium | |
US20200272825A1 (en) | Scene segmentation method and device, and storage medium | |
CN109308727B (en) | Virtual image model generation method and device and storage medium | |
CN109978936B (en) | Disparity map acquisition method and device, storage medium and equipment | |
AU2020256776B2 (en) | Method and device for observing virtual article in virtual environment, and readable storage medium | |
CN110148178B (en) | Camera positioning method, device, terminal and storage medium | |
CN110488977A (en) | Virtual reality display methods, device, system and storage medium | |
CN110276840A (en) | Control method, device, equipment and the storage medium of more virtual roles | |
CN109947886A (en) | Image processing method, device, electronic equipment and storage medium | |
CN111324250B (en) | Three-dimensional image adjusting method, device and equipment and readable storage medium | |
EP3960261A1 (en) | Object construction method and apparatus based on virtual environment, computer device, and readable storage medium | |
WO2022052620A1 (en) | Image generation method and electronic device | |
CN110097576A (en) | The motion information of image characteristic point determines method, task executing method and equipment | |
CN109166150B (en) | Pose acquisition method and device storage medium | |
CN110135336A (en) | Training method, device and the storage medium of pedestrian's generation model | |
CN110210573A (en) | Fight generation method, device, terminal and the storage medium of image | |
CN109862412A (en) | It is in step with the method, apparatus and storage medium of video | |
CN109948581A (en) | Picture and text rendering method, device, equipment and readable storage medium storing program for executing | |
CN110136236A (en) | Personalized face's display methods, device, equipment and the storage medium of three-dimensional character | |
CN110290426A (en) | Method, apparatus, equipment and the storage medium of showing resource | |
CN110288689A (en) | The method and apparatus that electronic map is rendered | |
CN109992685A (en) | A kind of method and device of retrieving image | |
CN108844529A (en) | Determine the method, apparatus and smart machine of posture | |
CN108196701A (en) | Determine the method, apparatus of posture and VR equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218 Applicant after: Hisense Visual Technology Co., Ltd. Applicant after: BEIHANG University Address before: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218 Applicant before: QINGDAO HISENSE ELECTRONICS Co.,Ltd. Applicant before: BEIHANG University |
|
CB02 | Change of applicant information | ||
GR01 | Patent grant | ||
GR01 | Patent grant |