CN111862288A - Pose rendering method, device and medium - Google Patents

Pose rendering method, device and medium

Info

Publication number
CN111862288A
CN111862288A
Authority
CN
China
Prior art keywords
image frames
reference image
frame
pose parameter
compensation
Prior art date
Legal status
Pending
Application number
CN202010745000.XA
Other languages
Chinese (zh)
Inventor
臧宇彤
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202010745000.XA
Publication of CN111862288A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality

Abstract

The disclosure provides a pose rendering method, device, and medium. The method comprises: determining N reference image frames; calculating, from the N reference image frames, a first pose parameter in a real three-dimensional scene and a second pose parameter in a virtual three-dimensional scene; calculating a first transformation from the first pose parameter to the second pose parameter; calculating, according to the first transformation, a third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames; and rendering the corresponding compensation image frames in the virtual three-dimensional scene according to the third pose parameters. In the method, only a subset of the image frames acquired by the camera device is designated as reference image frames; the pose parameters in the virtual three-dimensional scene are calculated only for these reference frames and are then used to derive, by compensation, the poses of the compensation image frames, which improves processing speed.

Description

Pose rendering method, device and medium
Technical Field
The present disclosure relates to the field of mobile terminal data processing technologies, and in particular, to a pose rendering method, apparatus, and medium.
Background
At present, two main schemes exist for recognizing an object and augmenting it with a virtual object rendered at the object's real-time pose.
In the first scheme, a specific, constrained scene is used: features set in advance are identified, and pose estimation and virtual-object augmentation are performed on a preset object, with the known prior information serving as a constraint. Because the scene is restricted and part of the information is known a priori, the algorithm can run in real time, and AR display on a mobile terminal is possible in such scenes; however, true augmented reality without prior information in natural scenes cannot be achieved.
The drawbacks of the first scheme are that the scene must be restricted and that different methods must be adopted for objects with different characteristics, so its generality is poor.
The second scheme uses a deep learning model trained on a large amount of data to calculate the pose required for augmented reality, and uses that pose to render a virtual object into the real scene.
The drawbacks of the second scheme are a large computational load, which makes real-time processing on a mobile terminal difficult; moreover, the obtained pose is affected by model accuracy and training data, so the generated pose trajectory is not smooth enough, which degrades the augmented-reality rendering result.
Disclosure of Invention
To overcome the problems in the related art, a pose rendering method, apparatus, and medium are provided.
According to a first aspect of embodiments herein, there is provided a pose rendering method, comprising:
determining N reference image frames, N being an integer greater than 0;
calculating, from the N reference image frames, a first pose parameter in a real three-dimensional scene and a second pose parameter in a virtual three-dimensional scene;
calculating a first transformation from the first pose parameter to the second pose parameter;
calculating, according to the first transformation, a third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames;
and rendering the corresponding compensation image frames in the virtual three-dimensional scene according to the third pose parameters.
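Assuming the pose parameters are represented as homogeneous 4x4 matrices (an illustrative choice; the disclosure does not fix a representation), the first transformation and the compensation step of the method above can be sketched as:

```python
import numpy as np

def first_transformation(pose_real, pose_virtual):
    # First transformation T: maps the first pose parameter (real scene)
    # to the second pose parameter (virtual scene): pose_virtual = T @ pose_real.
    return pose_virtual @ np.linalg.inv(pose_real)

def compensate(pose_real_comp, T):
    # Third pose parameter of a compensation frame: apply the first
    # transformation to that frame's pose in the real scene.
    return T @ pose_real_comp

# Toy example with pure translations.
P1 = np.eye(4); P1[:3, 3] = [1.0, 0.0, 0.0]   # first pose parameter (real)
P2 = np.eye(4); P2[:3, 3] = [1.0, 2.0, 0.0]   # second pose parameter (virtual)
T = first_transformation(P1, P2)

P4 = np.eye(4); P4[:3, 3] = [1.5, 0.0, 0.0]   # fourth pose of a compensation frame
P3 = compensate(P4, T)                        # third pose parameter
```

The expensive step is obtaining the second pose parameter (e.g. via a deep network); the first transformation is computed once per batch of reference frames and reused for every compensation frame.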
In one embodiment, the determining N reference image frames includes: determining N reference image frames every set duration, or every set number of frames;
the method further comprises: determining a plurality of compensation image frames corresponding to the N reference image frames according to the image frame acquisition rate and the N reference image frames.
In one embodiment, within the first set duration, or within the first acquisition period in which a first set number of image frames is acquired, determining the plurality of compensation image frames corresponding to the N reference image frames according to the image frame acquisition rate and the number of the N reference image frames comprises:
determining, from the image frame acquisition rate, a first frame number, i.e. the number of consecutive image frames contained in the set duration; or taking the set number of frames as the first frame number;
determining, from the number and positions of the N reference image frames, the calculation duration needed to compute, while image frames are being acquired in real time, the first pose parameters in the real three-dimensional scene corresponding to the N reference image frames, and converting that calculation duration into a second frame number according to the image frame acquisition rate;
and removing the first second-frame-number consecutive image frames, which contain the N reference image frames, from the first-frame-number consecutive image frames that start at the earliest-acquired of the N reference image frames, the remaining frames serving as the compensation image frames corresponding to the N reference image frames.
In one embodiment, in a set duration or acquisition period other than the first, determining the plurality of compensation image frames corresponding to the N reference image frames according to the image frame acquisition rate and the number of the N reference image frames comprises:
taking, as the compensation image frames of the current set duration or acquisition period, the image frames of the current set duration or acquisition period other than its first second-frame-number consecutive image frames, together with the first second-frame-number consecutive image frames of the next set duration or acquisition period.
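The frame bookkeeping in the two embodiments above can be sketched as follows; the function and parameter names are illustrative, not terms from the disclosure.

```python
import math

def second_frame_number(rate_fps, calc_ms):
    # Number of captured frames "covered" by the pose-calculation duration.
    return math.ceil(calc_ms * rate_fps / 1000)

def compensation_frames(rate_fps, calc_ms, first_frame_number, first_period):
    """Return (tail, head): 1-based in-period indices of the compensation
    image frames. `tail` lies in the current set duration; `head` lies in
    the NEXT set duration and is empty for the first period."""
    m = second_frame_number(rate_fps, calc_ms)
    tail = list(range(m + 1, first_frame_number + 1))
    head = [] if first_period else list(range(1, m + 1))
    return tail, head
```

With an acquisition rate of 25 frames per second, a 170 ms calculation duration, and 25 frames per set duration, this yields frames 6 to 25 for the first period, and frames 6 to 25 plus frames 1 to 5 of the next period for every later period.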
In one embodiment, calculating the first pose parameter in the real three-dimensional scene from the N reference image frames includes:
when N is 1, calculating the first pose parameter in the real three-dimensional scene from that reference image frame;
when N is greater than 1, calculating a pose parameter in the real three-dimensional scene from each reference image frame separately, and taking the average of these pose parameters as the first pose parameter.
Calculating the second pose parameter in the virtual three-dimensional scene from the N reference image frames includes:
when N is 1, calculating the second pose parameter in the virtual three-dimensional scene from that reference image frame;
and when N is greater than 1, calculating a pose parameter in the virtual three-dimensional scene from each reference image frame separately, and taking the average of these pose parameters as the second pose parameter.
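A minimal sketch of the N > 1 averaging step, assuming the pose parameters are plain 6-dof vectors; practical systems usually average rotations via quaternions to avoid angle wrap-around, and the disclosure does not specify the representation.

```python
import numpy as np

def average_pose(pose_params):
    """Element-wise mean of N pose parameter vectors, e.g. 6-dof
    (x, y, z, roll, pitch, yaw). Naive sketch: rotation components are
    averaged directly, which is only safe for small, nearby angles."""
    return np.mean(np.asarray(pose_params, dtype=float), axis=0)

# Two nearby reference-frame poses averaged into one.
avg = average_pose([[0.0, 0.0, 0.0, 0.0, 0.0, 0.1],
                    [2.0, 0.0, 0.0, 0.0, 0.0, 0.3]])
```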
In an embodiment, calculating, according to the first transformation, the third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames includes:
performing the following for each compensation image frame:
calculating the fourth pose parameter of the compensation image frame in the real three-dimensional scene, and calculating, from the first transformation and the fourth pose parameter, the third pose parameter of the compensation image frame in the virtual three-dimensional scene.
In another embodiment, calculating, according to the first transformation, the third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames includes:
performing the following for the compensation image frame closest to the at least one reference image frame:
calculating its fourth pose parameter in the real three-dimensional scene, calculating its third pose parameter in the virtual three-dimensional scene from the first transformation and the fourth pose parameter, and calculating a second transformation from the fourth pose parameter to the third pose parameter;
and performing the following, in sequence, for each remaining compensation image frame:
determining its fourth pose parameter in the real three-dimensional scene, calculating its third pose parameter in the virtual three-dimensional scene from the fourth pose parameter and the second transformation corresponding to the previous non-reference image frame, and calculating a second transformation from this fourth pose parameter to this third pose parameter.
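The chained variant above can be sketched as follows, again with homogeneous 4x4 matrices as an assumed representation. Note that with exact arithmetic the second transformation recomputed at each step equals the first transformation; the chaining matters when the per-frame transformation is additionally smoothed or refined, which this sketch does not show.

```python
import numpy as np

def chained_compensation(T_first, real_poses):
    """For each compensation frame's fourth pose parameter, compute its
    third pose parameter using the transformation carried over from the
    previous frame, then recompute that transformation (the 'second
    transformation') for the next frame."""
    T = T_first
    virtual_poses = []
    for P4 in real_poses:
        P3 = T @ P4                      # third pose parameter
        virtual_poses.append(P3)
        T = P3 @ np.linalg.inv(P4)       # second transformation for next frame
    return virtual_poses

# Toy example: the first transformation is a fixed offset into the virtual scene.
T0 = np.eye(4); T0[:3, 3] = [0.0, 1.0, 0.0]
Pa = np.eye(4); Pa[:3, 3] = [1.0, 0.0, 0.0]
Pb = np.eye(4); Pb[:3, 3] = [2.0, 0.0, 0.0]
out = chained_compensation(T0, [Pa, Pb])
```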
According to a second aspect of embodiments herein, there is provided a pose rendering apparatus including:
a first determination module configured to determine N reference image frames, N being an integer greater than 0;
a first calculation module configured to calculate a first pose parameter in the real three-dimensional scene and a second pose parameter in the virtual three-dimensional scene according to the N reference image frames;
a second calculation module configured to calculate a first transformation from the first pose parameter to the second pose parameter;
a third calculation module configured to calculate, according to the first transformation, a third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames;
and a rendering module configured to render the corresponding compensation image frames in the virtual three-dimensional scene according to the third pose parameters.
In an embodiment, the first determining module is further configured to determine the N reference image frames by determining N reference image frames every set duration, or every set number of frames;
the device further comprises:
a second determining module configured to determine a plurality of compensation image frames corresponding to the N reference image frames according to an image frame acquisition rate and the N reference image frames.
In one embodiment, the second determining module is further configured to determine, within the first set duration or within the first acquisition period in which a first set number of image frames is acquired, the plurality of compensation image frames corresponding to the N reference image frames according to the image frame acquisition rate and the number of the N reference image frames, by:
determining, from the image frame acquisition rate, a first frame number, i.e. the number of consecutive image frames contained in the set duration; or taking the set number of frames as the first frame number;
determining, from the number and positions of the N reference image frames, the calculation duration needed to compute, while image frames are being acquired in real time, the first pose parameters in the real three-dimensional scene corresponding to the N reference image frames, and converting that calculation duration into a second frame number according to the image frame acquisition rate;
and removing the first second-frame-number consecutive image frames, which contain the N reference image frames, from the first-frame-number consecutive image frames that start at the earliest-acquired of the N reference image frames, the remaining frames serving as the compensation image frames corresponding to the N reference image frames.
In one embodiment, the second determining module is further configured to determine, in a set duration or acquisition period other than the first, the plurality of compensation image frames corresponding to the N reference image frames according to the image frame acquisition rate and the number of the N reference image frames, by:
taking, as the compensation image frames of the current set duration or acquisition period, the image frames of the current set duration or acquisition period other than its first second-frame-number consecutive image frames, together with the first second-frame-number consecutive image frames of the next set duration or acquisition period.
In an embodiment, the first calculation module is further configured to calculate the first pose parameter in the real three-dimensional scene from the N reference image frames by:
when N is 1, calculating the first pose parameter in the real three-dimensional scene from that reference image frame;
when N is greater than 1, calculating a pose parameter in the real three-dimensional scene from each reference image frame separately, and taking the average of these pose parameters as the first pose parameter;
and to calculate the second pose parameter in the virtual three-dimensional scene from the N reference image frames by:
when N is 1, calculating the second pose parameter in the virtual three-dimensional scene from that reference image frame;
and when N is greater than 1, calculating a pose parameter in the virtual three-dimensional scene from each reference image frame separately, and taking the average of these pose parameters as the second pose parameter.
In an embodiment, the third calculation module is further configured to calculate, according to the first transformation, the third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames by:
performing the following for each compensation image frame:
calculating the fourth pose parameter of the compensation image frame in the real three-dimensional scene, and calculating, from the first transformation and the fourth pose parameter, the third pose parameter of the compensation image frame in the virtual three-dimensional scene.
In another embodiment, the third calculation module is further configured to calculate, according to the first transformation, the third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames by:
performing the following for the compensation image frame closest to the at least one reference image frame:
calculating its fourth pose parameter in the real three-dimensional scene, calculating its third pose parameter in the virtual three-dimensional scene from the first transformation and the fourth pose parameter, and calculating a second transformation from the fourth pose parameter to the third pose parameter;
and performing the following, in sequence, for each remaining compensation image frame:
determining its fourth pose parameter in the real three-dimensional scene, calculating its third pose parameter in the virtual three-dimensional scene from the fourth pose parameter and the second transformation corresponding to the previous non-reference image frame, and calculating a second transformation from this fourth pose parameter to this third pose parameter.
According to a third aspect of embodiments herein, there is provided a pose rendering apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the executable instructions in the memory to implement the steps of the above-described method.
According to a fourth aspect of embodiments herein, there is provided a non-transitory computer readable storage medium having stored thereon executable instructions which, when executed by a processor, implement the steps of the above-described method.
The technical solutions provided by the embodiments herein may include the following beneficial effects: a subset of the image frames acquired by the camera device is designated as reference image frames; the pose parameters in the virtual three-dimensional scene, whose calculation consumes the most computing power, are computed only for these reference frames, and those results are then used to derive the poses of the compensation image frames by compensation. This improves processing speed, so that augmented reality can be completed in real time on devices with limited computing power.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a pose rendering method in accordance with an exemplary embodiment;
FIG. 2 is a diagram illustrating the processing of each image frame within a set duration in a specific example;
FIG. 3 is a diagram illustrating the processing of each image frame within a set duration in a specific example;
FIG. 4 is a structural diagram of a pose rendering apparatus according to an exemplary embodiment;
FIG. 5 is a structural diagram of a pose rendering apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with certain aspects herein, as detailed in the appended claims.
The embodiment of the disclosure provides a pose rendering method. Referring to fig. 1, fig. 1 is a flowchart illustrating a pose rendering method according to an exemplary embodiment. As shown in fig. 1, the method includes:
step S11, determining N reference image frames; n is an integer greater than 0;
step S12, calculating a first pose parameter under a real three-dimensional scene and a second pose parameter under a virtual three-dimensional scene according to the N reference image frames;
step S13, calculating a first transformation from the first pose parameter to the second pose parameter;
step S14, calculating a third pose parameter of each compensation image frame corresponding to the N reference image frames in the virtual three-dimensional scene according to the first transformation mode;
and step S15, performing pose rendering on the corresponding compensation image frame in the virtual three-dimensional scene according to the third pose parameter.
The image frames in the method are image frames acquired by a camera device, and the camera device performs image frame acquisition at a set acquisition rate, for example, 25 frames per second, 30 frames per second, and the like.
In one embodiment, in step S12, the first pose parameter in the real three-dimensional scene of each reference image frame is calculated using a simultaneous localization and mapping (SLAM) technique. SLAM estimates the 3D structure of an unknown environment together with the sensor's motion within it, and is widely applied in, for example, computer-vision-based online 3D modeling, augmented-reality visualization, and autonomous driving; many different types of sensors can be integrated into a SLAM algorithm.
In one embodiment, in step S12, when calculating the second pose parameter in the virtual three-dimensional scene, the pose parameter in the virtual three-dimensional scene is calculated using a deep neural network.
In one embodiment, the pose parameters are multi-degree-of-freedom (DoF) pose parameters, such as 3-DoF or 6-DoF pose parameters.
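For concreteness, a 6-DoF pose parameter can be packed into a homogeneous 4x4 matrix as follows; the Z-Y-X Euler convention used here is an assumption, since the disclosure does not fix one.

```python
import numpy as np

def pose_6dof_to_matrix(x, y, z, roll, pitch, yaw):
    """Build a 4x4 homogeneous matrix from a 6-DoF pose
    (x, y, z, roll, pitch, yaw), rotating in Z-Y-X order."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about Z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about Y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about X
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx
    M[:3, 3] = [x, y, z]
    return M
```

A 3-DoF variant would keep only the translation (or only the rotation), depending on the application.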
In this embodiment, a subset of the image frames acquired by the camera device is designated as reference image frames. The pose parameters in the virtual three-dimensional scene, whose calculation consumes the most computing power, are computed only for these reference frames, and those results are then used to derive the poses of the compensation image frames by compensation. This increases processing speed, so that augmented reality can be completed in real time on devices with limited computing power.
An embodiment of the disclosure provides a pose rendering method that includes the method shown in FIG. 1, wherein:
the determining N reference image frames in step S11 includes: determining N reference image frames every set duration, or every set number of frames.
In one embodiment, the N reference image frames are consecutive image frames, for example the first N image frames among the image frames within the set duration, or the first N image frames among a set number of consecutive image frames.
In another embodiment, the N reference image frames are non-consecutive, and the earliest-acquired of the N reference image frames is the first image frame acquired within the set duration, or the first of the set number of consecutive image frames.
Specific examples are as follows:
the first embodiment is as follows:
the image capture rate of the camera device is 25 frames per second. The set time period is 1 second. 25 image frames are acquired at set intervals. The number of reference image frames determined per set interval time period is 1.
Every 1 second, the newly acquired 1 image frame is taken as the image frame for reference. For example, 100 frames are acquired in 4 consecutive seconds, and the 1 st frame in the 100 frames is a reference image frame in the 1 st second, the 26 th frame is a reference image frame in the 2 nd second, the 51 st frame is a reference image frame in the 3 rd second, and the 76 th frame is a reference image frame in the 4 th second.
Example two:
the image capture rate of the camera device is 25 frames per second. The set time period is 1 second. 25 image frames are acquired at set intervals. The number of reference image frames determined per set interval time period is 2.
Every 1 second interval, the newly acquired 2 image frames are used as the image frames for reference. For example, 100 frames are acquired in 4 consecutive seconds, and the 1 st frame and the 2 nd frame in the 100 frames are reference image frames in the 1 st second, the 26 th frame and the 27 th frame are reference image frames in the 2 nd second, the 51 st frame and the 52 nd frame are reference image frames in the 3 rd second, and the 76 th frame and the 77 th frame are reference image frames in the 4 th second.
Example three:
the image capture rate of the camera device is 25 frames per second. The set time period is 1 second. 25 image frames are acquired at set intervals. The number of reference image frames determined per set interval time period is 2.
Every 1 second, the 1 st and 3 rd frames are acquired as reference image frames. For example, 100 frames are acquired in 4 consecutive seconds, and the 1 st frame and the 3 rd frame in the 100 frames are reference image frames in the 1 st second, the 26 th frame and the 28 th frame are reference image frames in the 2 nd second, the 51 st frame and the 53 th frame are reference image frames in the 3 rd second, and the 76 th frame and the 78 th frame are reference image frames in the 4 th second.
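The three examples share one selection rule: fixed in-period offsets repeated every set duration. A sketch of that rule (function and parameter names are illustrative):

```python
def reference_frames(total_frames, frames_per_duration, offsets):
    """1-based global indices of the reference image frames, given the
    number of frames per set duration and the in-duration offsets
    (1-based) at which reference frames are taken."""
    refs = []
    for start in range(0, total_frames, frames_per_duration):
        # Shift each in-period offset by the period's starting index.
        refs.extend(start + o for o in offsets if start + o <= total_frames)
    return refs
```

Example one corresponds to offsets [1], example two to [1, 2], and example three to [1, 3].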
In one embodiment, the method further comprises: and determining a plurality of compensation image frames corresponding to the N reference image frames according to the image frame acquisition rate and the N reference image frames.
In one embodiment, within the first set duration, or within the first acquisition period in which a first set number of image frames is acquired, determining the plurality of compensation image frames corresponding to the N reference image frames according to the image frame acquisition rate and the number of the N reference image frames comprises:
Step 1: determining, from the image frame acquisition rate, a first frame number, i.e. the number of consecutive image frames contained in the set duration; or taking the set number of frames as the first frame number.
Step 2: determining, from the number and positions of the N reference image frames, the calculation duration needed to compute, while image frames are being acquired in real time, the first pose parameters in the real three-dimensional scene corresponding to the N reference image frames; and converting that calculation duration into a second frame number according to the image frame acquisition rate.
Step 3: removing the first second-frame-number consecutive image frames, which contain the N reference image frames, from the first-frame-number consecutive image frames that start at the earliest-acquired of the N reference image frames, the remaining frames serving as the compensation image frames corresponding to the N reference image frames.
In a set duration or acquisition period other than the first, the plurality of compensation image frames corresponding to the N reference image frames is determined according to the image frame acquisition rate and the number of the N reference image frames as follows:
taking, as the compensation image frames of the current set duration or acquisition period, the image frames of the current set duration or acquisition period other than its first second-frame-number consecutive image frames, together with the first second-frame-number consecutive image frames of the next set duration or acquisition period.
Specific examples are as follows:
the first embodiment is as follows:
The image capture rate of the camera device is 25 frames per second, and the set duration is 1 second. One reference image frame is determined per set duration, namely the first image frame among the image frames corresponding to each set duration. The first frame number, i.e. the number of consecutive image frames counted from the reference image frame, is 25.
According to the number and positions of the reference image frames, the calculation duration required to compute the first pose parameter in the real three-dimensional scene corresponding to the reference image frame is determined to be 170 milliseconds. Since the image frame acquisition rate is 25 frames per second, the acquisition interval between two adjacent image frames is 40 milliseconds, and the second frame number, i.e. the number of image frames covered by the calculation duration, is 5.
Within the first set duration, the 25 consecutive image frames containing the 1 reference image frame are determined. Excluding the first 5 consecutive frames, which contain the reference image frame, the remaining frames are the 6th to 25th frames, and these serve as the compensation image frames corresponding to the reference image frame within the first set duration.
Within the second set duration, the image frames of the current set duration other than the leading second-frame-number frames, i.e. the frames after the first 5 consecutive frames, are determined to be the 6th to 25th frames of the current set duration, and the leading second-frame-number frames of the next set duration, i.e. 5 consecutive frames, are determined to be the 1st to 5th frames of the next set duration. Accordingly, the plurality of compensation image frames corresponding to the 1 reference image frame within the second set duration include: the 6th to 25th frames of the second set duration and the 1st to 5th frames of the third set duration.
In each subsequent set duration, the plurality of compensation image frames corresponding to the reference image frame are likewise the 6th to 25th frames of the current set duration and the 1st to 5th frames of the next set duration.
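The frame bookkeeping in this example can be sketched as follows (a minimal illustration only; the function name, signature, and 1-based frame indexing are assumptions made for this sketch, not part of the claimed method):

```python
import math

def compensation_frames(capture_fps, set_duration_s, calc_ms):
    """Return the 1-based indices of the compensation image frames
    within one set duration, counting from the reference image frame.

    capture_fps    : image capture rate, frames per second
    set_duration_s : the set duration, in seconds
    calc_ms        : time needed to compute the first pose parameter, in ms
    """
    # First frame number: consecutive frames contained in one set duration.
    first_frame_number = int(capture_fps * set_duration_s)     # e.g. 25
    interval_ms = 1000.0 / capture_fps                         # e.g. 40 ms
    # Second frame number: frames covered by the calculation duration.
    second_frame_number = math.ceil(calc_ms / interval_ms)     # e.g. 5
    # Remove the covered frames; the remainder are the compensation frames.
    return list(range(second_frame_number + 1, first_frame_number + 1))

print(compensation_frames(25, 1, 170))  # frames 6..25
```

With the figures above, 170 ms of calculation at a 40 ms frame interval covers 5 frames, so compensation starts at the 6th frame; the 210 ms and 250 ms calculation durations of examples two and three give second frame numbers of 6 and 7 respectively.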
Example two:
The image capture rate of the camera device is 25 frames per second, and the set duration is 1 second. The number of reference image frames determined per set duration is 2, and the 2 reference image frames are adjacent. The first frame number, i.e. the number of consecutive image frames counted from the first reference image frame, is 25.
The calculation duration required to compute the first pose parameter in the real three-dimensional scene corresponding to one reference image frame is 170 ms. The image frame acquisition rate is 25 frames per second, so the acquisition interval between two adjacent image frames is 40 milliseconds. According to the number of reference image frames and their adjacent positional relation, the calculation duration required to compute the first pose parameter in the real three-dimensional scene corresponding to the 2 adjacent reference image frames is determined to be 210 milliseconds. The second frame number corresponding to this calculation duration, i.e. the number of image frames covered by it, is 6.
Within the first set duration, the 25 consecutive image frames containing the 2 reference image frames are determined. Excluding the first 6 consecutive frames, which contain the 2 reference image frames, the remaining frames are the 7th to 25th frames, and these serve as the compensation image frames corresponding to the 2 reference image frames within the first set duration.
Within the second set duration, the image frames of the current set duration other than the leading second-frame-number frames, i.e. the frames after the first 6 consecutive frames, are determined to be the 7th to 25th frames of the current set duration, and the leading second-frame-number frames of the next set duration, i.e. 6 consecutive frames, are determined to be the 1st to 6th frames of the next set duration. Accordingly, the plurality of compensation image frames corresponding to the 2 reference image frames within the second set duration include: the 7th to 25th frames of the second set duration and the 1st to 6th frames of the third set duration.
In each subsequent set duration, the plurality of compensation image frames corresponding to the reference image frames are likewise the 7th to 25th frames of the current set duration and the 1st to 6th frames of the next set duration.
Example three:
The image capture rate of the camera device is 25 frames per second, and the set duration is 1 second. The number of reference image frames determined per set duration is 2, and the 2 reference image frames are separated by one image frame. The first frame number, i.e. the number of consecutive image frames counted from the earlier-acquired of the reference image frames, is 25.
The calculation duration required to compute the first pose parameter in the real three-dimensional scene corresponding to one reference image frame is 170 ms. The image frame acquisition rate is 25 frames per second, so the acquisition interval between two adjacent image frames is 40 milliseconds. According to the number of reference image frames and their positional relation, the calculation duration required to compute the first pose parameter in the real three-dimensional scene corresponding to the 2 reference image frames separated by one image frame is determined to be 250 milliseconds. The second frame number corresponding to this calculation duration, i.e. the number of image frames covered by it, is 7.
Within the first set duration, the 25 consecutive image frames containing the 2 reference image frames are determined. Excluding the first 7 consecutive frames, which contain the 2 reference image frames, the remaining frames are the 8th to 25th frames, and these serve as the compensation image frames corresponding to the 2 reference image frames within the first set duration.
Within the second set duration, the image frames of the current set duration other than the leading second-frame-number frames, i.e. the frames after the first 7 consecutive frames, are determined to be the 8th to 25th frames of the current set duration, and the leading second-frame-number frames of the next set duration, i.e. 7 consecutive frames, are determined to be the 1st to 7th frames of the next set duration. Accordingly, the plurality of compensation image frames corresponding to the 2 reference image frames within the second set duration include: the 8th to 25th frames of the second set duration and the 1st to 7th frames of the third set duration.
In each subsequent set duration, the plurality of compensation image frames corresponding to the reference image frames are likewise the 8th to 25th frames of the current set duration and the 1st to 7th frames of the next set duration.
An embodiment of the disclosure provides a pose rendering method, which includes the method shown in fig. 1, wherein in step S12, calculating the first pose parameter in the real three-dimensional scene according to the N reference image frames includes:
when the number N of reference image frames is 1, calculating the first pose parameter in the real three-dimensional scene according to the single reference image frame;
when the number N of reference image frames is greater than 1, calculating a pose parameter in the real three-dimensional scene according to each reference image frame respectively, and taking the average of these pose parameters as the first pose parameter.
In step S12, calculating the second pose parameter in the virtual three-dimensional scene according to the N reference image frames includes:
when the number N of reference image frames is 1, calculating the second pose parameter in the virtual three-dimensional scene according to the single reference image frame;
when the number N of reference image frames is greater than 1, calculating a pose parameter in the virtual three-dimensional scene according to each reference image frame respectively, and taking the average of these pose parameters as the second pose parameter.
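The averaging step for N > 1 can be sketched as follows (an illustrative reading only: an element-wise mean over pose parameter vectors is one possible interpretation of "average pose parameters"; the function name and the vector representation are assumptions, and averaging rotation components element-wise is a simplification that holds only for small angular differences):

```python
import numpy as np

def average_pose(poses):
    """Average the pose parameters computed from N reference image frames.

    `poses` is a list of pose parameter vectors (e.g. translation plus
    orientation components); a simple element-wise mean is assumed here.
    """
    return np.mean(np.asarray(poses, dtype=float), axis=0)

# Two reference image frames -> one averaged pose parameter
p = average_pose([[0.0, 0.0, 1.0], [0.2, 0.0, 1.2]])
```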
An embodiment of the disclosure provides a pose rendering method, which includes the method shown in fig. 1, wherein step S13, calculating the first transformation from the first pose parameter to the second pose parameter, includes:
calculating the first transformation matrix from the first pose parameter of the ith image frame to the second pose parameter of the ith image frame using the following formula:
T = V(i) * inv(R(i))
where R(i) denotes the first pose parameter of the ith image frame, V(i) denotes the second pose parameter of the ith image frame, T is the first transformation matrix, and inv denotes matrix inversion.
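The formula above can be exercised numerically as follows (a sketch assuming the pose parameters are represented as 4x4 homogeneous transforms; the concrete matrices are illustrative values, not taken from the disclosure):

```python
import numpy as np

# R_i: first pose parameter (real scene) of the ith image frame,
# V_i: second pose parameter (virtual scene), both assumed to be
# 4x4 homogeneous transforms.
R_i = np.array([[1.0, 0.0, 0.0, 0.5],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 2.0],
                [0.0, 0.0, 0.0, 1.0]])
V_i = np.array([[0.0, -1.0, 0.0, 0.0],
                [1.0,  0.0, 0.0, 1.0],
                [0.0,  0.0, 1.0, 2.0],
                [0.0,  0.0, 0.0, 1.0]])

T = V_i @ np.linalg.inv(R_i)   # T = V(i) * inv(R(i))

# By construction, T maps the real-scene pose to the virtual-scene pose:
assert np.allclose(T @ R_i, V_i)
```

Applied to the fourth pose parameter of a compensation image frame, the same T yields its third pose parameter in the virtual scene, i.e. third = T @ fourth.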
An embodiment of the disclosure provides a pose rendering method, which includes the method shown in fig. 1, wherein step S14, calculating the third pose parameter in the virtual three-dimensional scene of each compensation image frame corresponding to the N reference image frames according to the first transformation, includes:
performing the following for each compensation image frame: calculating the fourth pose parameter of the compensation image frame in the real three-dimensional scene, and calculating the third pose parameter of the compensation image frame in the virtual three-dimensional scene according to the first transformation and the fourth pose parameter.
Examples are as follows:
Example one:
The image capture rate of the camera device is 25 frames per second, and the set duration is 1 second. One reference image frame is determined per set duration, namely the first image frame among the image frames corresponding to each set duration. The first frame number, i.e. the number of consecutive image frames counted from the reference image frame, is 25.
According to the number and positions of the reference image frames, the calculation duration required to compute the first pose parameter in the real three-dimensional scene corresponding to the reference image frame is determined to be 170 milliseconds. The image frame acquisition rate is 25 frames per second, the interval between two adjacent image frames is 40 milliseconds, and the second frame number, i.e. the number of image frames covered by the calculation duration, is 5.
Within the first set duration, the compensation frames corresponding to the reference frame are the 6 th to 25 th frames of the first set duration and the 1 st to 5 th frames of the second set duration.
As shown in fig. 2, within the first set duration, the first pose parameter in the real three-dimensional scene and the second pose parameter in the virtual three-dimensional scene are calculated according to the reference image frame, i.e. the 1st frame, and the first transformation from the first pose parameter to the second pose parameter is calculated.
No pose compensation is performed on the 2nd to 5th frames within the first set duration, which are acquired while the first pose parameter is still being calculated.
Pose compensation is performed on the 6th to 25th frames within the first set duration using the first transformation: for each compensation image frame, the fourth pose parameter in the real three-dimensional scene corresponding to that frame is calculated, and the third pose parameter of the frame in the virtual three-dimensional scene is calculated according to the first transformation and the fourth pose parameter.
As shown in fig. 3, the ith set duration represents the current set duration, the (i-1)th set duration represents the previous set duration, and the (i+1)th set duration represents the next set duration.
Pose compensation is performed on the 1st to 5th frames of the second set duration using the first transformation. The 1st frame of the second set duration is the reference image frame of that duration; the first pose parameter in the real three-dimensional scene and the second pose parameter in the virtual three-dimensional scene are calculated according to this reference image frame, and a second transformation from the first pose parameter to the second pose parameter is calculated. Pose compensation is performed on the 6th to 25th frames of the second set duration and the 1st to 5th frames of the third set duration using the second transformation.
Subsequently, the 1st frame of the third set duration is the reference image frame of that duration; the first pose parameter in the real three-dimensional scene and the second pose parameter in the virtual three-dimensional scene are calculated according to it, and a third transformation from the first pose parameter to the second pose parameter is calculated. Pose compensation is performed on the 6th to 25th frames of the third set duration and the 1st to 5th frames of the fourth set duration using the third transformation.
And so on.
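The rolling schedule of this example can be sketched as follows (an illustrative helper, not part of the claimed method; the function name and the 1-based indexing of frames and set durations are assumptions):

```python
def transform_index(frame_in_duration, duration_index, second_frame_number=5):
    """Return which set duration's transformation compensates a given frame.

    frame_in_duration : 1-based frame index within its set duration
    duration_index    : 1-based index of that set duration
    Returns the index of the set duration whose transformation is used,
    or None for the reference frame and the uncompensated warm-up frames
    of the very first set duration.
    """
    if frame_in_duration <= second_frame_number:
        if duration_index == 1:
            # No earlier transformation exists yet: frames 1..5 of the
            # first set duration are not compensated.
            return None
        # Frames 1..5 are compensated by the previous duration's transform.
        return duration_index - 1
    # Frames 6..25 are compensated by the current duration's transform.
    return duration_index
```

For instance, the 3rd frame of the first set duration maps to None (no compensation), while the 3rd frame of the second set duration is compensated by the first transformation.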
Example two:
The image capture rate of the camera device is 25 frames per second, and the set duration is 1 second. The number of reference image frames determined per set duration is 2, the 2 reference image frames are adjacent, and the 1st and 2nd frames among the image frames corresponding to each set duration are the reference image frames. The first frame number, i.e. the number of consecutive image frames counted from the first reference image frame, is 25.
According to the number of reference image frames and their adjacent positional relation, the calculation duration required to compute the first pose parameter in the real three-dimensional scene corresponding to the 2 reference image frames is determined to be 210 milliseconds. The image frame acquisition rate is 25 frames per second, the interval between two adjacent image frames is 40 milliseconds, and the second frame number, i.e. the number of image frames covered by the calculation duration, is 6.
Within the first set time length, the compensation image frames corresponding to the reference image frames are the 7 th frame to the 25 th frame of the first set time length and the 1 st frame to the 6 th frame of the second set time length.
Within the first set duration, the first pose parameter in the real three-dimensional scene and the second pose parameter in the virtual three-dimensional scene are calculated according to the reference image frames, and the first transformation from the first pose parameter to the second pose parameter is calculated.
No pose compensation is performed on the 3rd to 6th frames within the first set duration, which are acquired while the first pose parameter is still being calculated.
Pose compensation is performed on the 7th to 25th frames within the first set duration using the first transformation: for each compensation image frame, the fourth pose parameter in the real three-dimensional scene corresponding to that frame is calculated, and the third pose parameter of the frame in the virtual three-dimensional scene is calculated according to the first transformation and the fourth pose parameter.
Pose compensation is performed on the 1st to 6th frames of the second set duration using the first transformation. The 1st and 2nd frames of the second set duration are the reference image frames of that duration; the first pose parameter in the real three-dimensional scene and the second pose parameter in the virtual three-dimensional scene are calculated according to them, and a second transformation from the first pose parameter to the second pose parameter is calculated. Pose compensation is performed on the 7th to 25th frames of the second set duration and the 1st to 6th frames of the third set duration using the second transformation.
Subsequently, the 1st and 2nd frames of the third set duration are the reference image frames of that duration; the first pose parameter in the real three-dimensional scene and the second pose parameter in the virtual three-dimensional scene are calculated according to them, and a third transformation from the first pose parameter to the second pose parameter is calculated. Pose compensation is performed on the 7th to 25th frames of the third set duration and the 1st to 6th frames of the fourth set duration using the third transformation.
And so on.
Example three:
The image capture rate of the camera device is 25 frames per second, and the set duration is 1 second. The number of reference image frames determined per set duration is 2, the 2 reference image frames are separated by one image frame, and the 1st and 3rd frames among the image frames corresponding to each set duration are the reference image frames. The first frame number, i.e. the number of consecutive image frames counted from the first reference image frame, is 25.
According to the number of reference image frames and their positional relation, the calculation duration required to compute the first pose parameter in the real three-dimensional scene corresponding to the 2 reference image frames is determined to be 250 milliseconds. The image frame acquisition rate is 25 frames per second, the interval between two adjacent image frames is 40 milliseconds, and the second frame number, i.e. the number of image frames covered by the calculation duration, is 7.
Within the first set duration, the compensation frames corresponding to the reference frame are the 8 th to 25 th frames of the first set duration and the 1 st to 7 th frames of the second set duration.
Within the first set duration, the first pose parameter in the real three-dimensional scene and the second pose parameter in the virtual three-dimensional scene are calculated according to the reference image frames, and the first transformation from the first pose parameter to the second pose parameter is calculated.
No pose compensation is performed on the 2nd frame or the 4th to 7th frames within the first set duration, which are acquired while the first pose parameter is still being calculated.
Pose compensation is performed on the 8th to 25th frames within the first set duration using the first transformation: for each compensation image frame, the fourth pose parameter in the real three-dimensional scene corresponding to that frame is calculated, and the third pose parameter of the frame in the virtual three-dimensional scene is calculated according to the first transformation and the fourth pose parameter.
Pose compensation is performed on the 1st to 7th frames of the second set duration using the first transformation. The 1st and 3rd frames of the second set duration are the reference image frames of that duration; the first pose parameter in the real three-dimensional scene and the second pose parameter in the virtual three-dimensional scene are calculated according to them, and a second transformation from the first pose parameter to the second pose parameter is calculated. Pose compensation is performed on the 8th to 25th frames of the second set duration and the 1st to 7th frames of the third set duration using the second transformation.
Subsequently, the 1st and 3rd frames of the third set duration are the reference image frames of that duration; the first pose parameter in the real three-dimensional scene and the second pose parameter in the virtual three-dimensional scene are calculated according to them, and a third transformation from the first pose parameter to the second pose parameter is calculated. Pose compensation is performed on the 8th to 25th frames of the third set duration and the 1st to 7th frames of the fourth set duration using the third transformation.
And so on.
An embodiment of the disclosure provides a pose rendering method, which includes the method shown in fig. 1, wherein step S14, calculating the third pose parameter in the virtual three-dimensional scene of each compensation image frame corresponding to the N reference image frames according to the first transformation, includes:
performing the following operations for the compensation image frame closest to the at least one reference image frame:
calculating the fourth pose parameter of the compensation image frame in the real three-dimensional scene corresponding to that frame, calculating the third pose parameter of the frame in the virtual three-dimensional scene according to the first transformation and the fourth pose parameter, and calculating a second transformation from the fourth pose parameter to the third pose parameter;
then performing the following operations in sequence for each compensation image frame other than the one closest to the at least one reference image frame:
determining the fourth pose parameter of the compensation image frame in the real three-dimensional scene corresponding to that frame, calculating the third pose parameter of the frame in the virtual three-dimensional scene according to the second transformation corresponding to the previous non-reference image frame and the fourth pose parameter, and calculating a new second transformation from the fourth pose parameter to the third pose parameter.
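The chained update above can be sketched as follows (a minimal reading of the steps, with 4x4 homogeneous matrices as an assumed pose representation; note that with exact, unmodified inputs the refreshed transform reproduces the initial one, so the per-frame refresh only changes the result when the third pose parameter is additionally refined between frames):

```python
import numpy as np

def compensate_chain(T_first, fourth_poses):
    """Chained pose compensation over a sequence of compensation frames.

    T_first      : the first transformation (4x4) derived from the
                   reference image frame
    fourth_poses : list of 4x4 fourth pose parameters (real scene),
                   ordered from the frame closest to the reference
                   image frame onward
    For each frame, the third pose parameter is computed with the
    transform carried over from the previous frame, and a new second
    transformation is derived from that frame's fourth and third poses.
    """
    thirds, T = [], T_first
    for fourth in fourth_poses:
        third = T @ fourth                 # third pose in the virtual scene
        T = third @ np.linalg.inv(fourth)  # second transformation for the next frame
        thirds.append(third)
    return thirds
```

Usage: `compensate_chain(T, [p6, p7, ...])` returns the virtual-scene poses for the 6th frame onward in example one, each derived from the transform refreshed at the previous frame.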
An embodiment of the disclosure provides a pose rendering apparatus. Fig. 4 is a block diagram illustrating a pose rendering apparatus according to an exemplary embodiment. As shown in fig. 4, the apparatus includes:
a first determining module 401 configured to determine N reference image frames; n is an integer greater than 0;
a first calculation module 402 configured to calculate a first pose parameter in the real three-dimensional scene and a second pose parameter in the virtual three-dimensional scene from the N reference image frames;
a second calculation module 403 configured to calculate a first transformation from the first pose parameter to the second pose parameter;
a third calculating module 404, configured to calculate a third pose parameter of each compensation image frame corresponding to the N reference image frames in the virtual three-dimensional scene according to the first transformation manner;
a rendering module 405 configured to perform pose rendering on the corresponding image frame for compensation in the virtual three-dimensional scene according to the third pose parameter.
The embodiment of the present disclosure provides a pose rendering apparatus, which includes the apparatus shown in fig. 4, and: the first determining module 401 is further configured to determine the N reference image frames using the following method: determining N reference image frames at intervals of set time length, or determining N reference image frames at intervals of set frame number;
the device further comprises:
a second determining module configured to determine a plurality of compensation image frames corresponding to the N reference image frames according to an image frame acquisition rate and the N reference image frames.
In one embodiment, the second determining module is further configured to determine, in a first set duration or in a first acquisition period for acquiring image frames of a first set number of frames, a plurality of compensation image frames corresponding to the N reference image frames according to an image frame acquisition rate and the number of the N reference image frames using the following method:
determining a first frame number of continuous image frames contained in the set duration according to an image frame acquisition rate; or, the set frame number is taken as the first frame number;
determining the calculation duration required for calculating the first pose parameters under the real three-dimensional scene corresponding to the N reference image frames in the process of acquiring the image frames in real time according to the quantity and the positions of the N reference image frames; calculating a second frame number corresponding to the calculated duration according to the acquisition rate of the image frame;
and removing the leading second-frame-number consecutive image frames from the first-frame-number consecutive image frames counted from the earliest-acquired of the N reference image frames; the remaining image frames serve as the compensation image frames corresponding to the N reference image frames.
In one embodiment, the second determining module is further configured to determine, in a non-first set duration or in a non-first acquisition period, a plurality of compensation image frames corresponding to the N reference image frames according to an image frame acquisition rate and the number of the N reference image frames, using the following method:
determining a plurality of compensation image frames corresponding to the N reference image frames in the current set duration or the current acquisition period includes: removing the image frames of the continuous image frames of the second frame number from the image frames in the current set duration or the current acquisition period; and, the next set duration or the second number of frames of consecutive image frames before in the next acquisition period.
An embodiment of the present disclosure provides a pose rendering apparatus, which includes the apparatus shown in fig. 4, wherein the first calculation module is further configured to calculate the first pose parameter in the real three-dimensional scene from the N reference image frames by:
when the number N of reference image frames is 1, calculating the first pose parameter in the real three-dimensional scene according to the single reference image frame;
when the number N of reference image frames is greater than 1, calculating a pose parameter in the real three-dimensional scene according to each reference image frame respectively, and taking the average of these pose parameters;
and to calculate the second pose parameter in the virtual three-dimensional scene from the N reference image frames by:
when the number N of reference image frames is 1, calculating the second pose parameter in the virtual three-dimensional scene according to the single reference image frame;
when the number N of reference image frames is greater than 1, calculating a pose parameter in the virtual three-dimensional scene according to each reference image frame respectively, and taking the average of these pose parameters.
The embodiment of the present disclosure provides a pose rendering apparatus, which includes the apparatus shown in fig. 4, and: the third calculation module is further configured to calculate a third pose parameter of each compensation image frame corresponding to the N reference image frames in the virtual three-dimensional scene according to the first transformation manner by using the following method:
performing the following for each image frame for compensation:
and calculating a fourth pose parameter of the image frame for compensation in the real three-dimensional scene corresponding to the image frame for compensation, and calculating a third pose parameter of the image frame for compensation in the virtual three-dimensional scene according to the first transformation mode and the fourth pose parameter.
The embodiment of the present disclosure provides a pose rendering apparatus, which includes the apparatus shown in fig. 4, and: the third calculation module is further configured to calculate a third pose parameter of each compensation image frame corresponding to the N reference image frames in the virtual three-dimensional scene according to the first transformation manner by using the following method:
performing the following operations for the compensation image frame closest to the at least one reference image frame:
calculating the fourth pose parameter of the compensation image frame in the real three-dimensional scene corresponding to that frame, calculating the third pose parameter of the frame in the virtual three-dimensional scene according to the first transformation and the fourth pose parameter, and calculating a second transformation from the fourth pose parameter to the third pose parameter;
then performing the following operations in sequence for each compensation image frame other than the one closest to the at least one reference image frame:
determining the fourth pose parameter of the compensation image frame in the real three-dimensional scene corresponding to that frame, calculating the third pose parameter of the frame in the virtual three-dimensional scene according to the second transformation corresponding to the previous non-reference image frame and the fourth pose parameter, and calculating a new second transformation from the fourth pose parameter to the third pose parameter.
An embodiment of the present disclosure provides a pose rendering apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the executable instructions in the memory to implement the steps of the above-described method.
An embodiment of the present disclosure provides a non-transitory computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, perform the steps of the above-described method.
Fig. 5 is a block diagram illustrating a pose rendering apparatus 500 according to an exemplary embodiment. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, the apparatus 500 may include one or more of the following components: processing component 502, memory 504, power component 506, multimedia component 508, audio component 510, input/output (I/O) interface 512, sensor component 514, and communication component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation at the device 500. Examples of such data include instructions for any application or method operating on device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 506 provides power to the various components of the device 500. The power components 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a microphone (MIC) configured to receive external audio signals when the device 500 is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing various aspects of status assessment for the device 500. For example, the sensor assembly 514 may detect the open/closed state of the device 500 and the relative positioning of components, such as the display and keypad of the device 500. The sensor assembly 514 may also detect a change in position of the device 500 or of a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in the temperature of the device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the device 500 and other devices. The device 500 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a near field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 504 including instructions, executable by the processor 520 of the device 500 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (16)

1. A pose rendering method, comprising:
determining N reference image frames, N being an integer greater than 0;
calculating, according to the N reference image frames, a first pose parameter in a real three-dimensional scene and a second pose parameter in a virtual three-dimensional scene;
calculating a first transformation from the first pose parameter to the second pose parameter;
calculating, according to the first transformation, a third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames; and
performing pose rendering on the corresponding compensation image frame in the virtual three-dimensional scene according to the third pose parameter.
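As a non-authoritative illustration of the steps of claim 1, the pose parameters can be modeled as 4x4 homogeneous matrices, with the first transformation mapping the real-scene pose onto the virtual-scene pose. This is a minimal sketch; NumPy and all function and variable names are assumptions of the illustration, not taken from the claim:

```python
import numpy as np

def first_transformation(first_pose, second_pose):
    """Transformation T such that T @ first_pose == second_pose."""
    return second_pose @ np.linalg.inv(first_pose)

def third_pose(transformation, compensation_real_pose):
    """Virtual-scene pose of a compensation frame: apply the transformation
    to the frame's real-scene pose."""
    return transformation @ compensation_real_pose

# Toy example: identity real-scene pose, pure translation as virtual pose.
real = np.eye(4)
virtual = np.eye(4)
virtual[:3, 3] = [1.0, 2.0, 3.0]
T = first_transformation(real, virtual)
```

Applying `T` to the real-scene pose of any compensation frame then yields its third pose parameter, which drives the rendering step.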
2. The method of claim 1, wherein
determining the N reference image frames comprises: determining the N reference image frames at intervals of a set duration, or determining the N reference image frames at intervals of a set number of frames; and
the method further comprises: determining a plurality of compensation image frames corresponding to the N reference image frames according to an image frame acquisition rate and the N reference image frames.
3. The method of claim 2, wherein
within a first set duration, or within a first acquisition period in which image frames of a first set number of frames are acquired, determining the plurality of compensation image frames corresponding to the N reference image frames according to the image frame acquisition rate and the number of the N reference image frames comprises:
determining, according to the image frame acquisition rate, a first frame number of consecutive image frames contained within the set duration; or taking the set number of frames as the first frame number;
determining, according to the number and positions of the N reference image frames, the computation duration required to calculate the first pose parameter in the real three-dimensional scene corresponding to the N reference image frames while image frames are acquired in real time, and calculating, according to the image frame acquisition rate, a second frame number corresponding to the computation duration; and
removing the consecutive image frames of the second frame number from the consecutive image frames of the first frame number counted from the earliest-acquired of the N reference image frames, the remaining image frames serving as the compensation image frames corresponding to the N reference image frames.
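The frame-count arithmetic of claim 3 can be sketched as follows. This is an illustration only, assuming a fixed acquisition rate in frames per second; all names are hypothetical:

```python
def compensation_frame_indices(acquisition_rate_hz, set_duration_s, calc_duration_s):
    """Indices (relative to the earliest-acquired reference frame) of the
    compensation frames within the first set duration.

    first_frame_number: frames captured during the set duration.
    second_frame_number: frames captured while the first pose parameter
    is still being computed; those leading frames are removed.
    """
    first_frame_number = int(acquisition_rate_hz * set_duration_s)
    second_frame_number = int(acquisition_rate_hz * calc_duration_s)
    return list(range(second_frame_number, first_frame_number))

# 30 fps, a 1 s window, and 0.1 s of pose computation: frames 3..29
# are available as compensation frames.
indices = compensation_frame_indices(30, 1.0, 0.1)
```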
4. The method of claim 3, wherein
in a non-first set duration or a non-first acquisition period, determining the plurality of compensation image frames corresponding to the N reference image frames according to the image frame acquisition rate and the number of the N reference image frames comprises:
taking as the compensation image frames, within the current set duration or current acquisition period, the image frames remaining after removing the consecutive image frames of the second frame number, together with the consecutive image frames of the second frame number at the beginning of the next set duration or next acquisition period.
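The period-boundary selection of claim 4 can be illustrated with plain Python lists, assuming frames are indexed in acquisition order; the function and variable names are hypothetical:

```python
def compensation_frames_for_period(period_frames, next_period_frames, second_frame_number):
    """Compensation frames for a non-first period: the current period's
    frames minus its first `second_frame_number` frames, plus the first
    `second_frame_number` frames of the next period."""
    return (period_frames[second_frame_number:]
            + next_period_frames[:second_frame_number])

# Periods of 30 frames with a second frame number of 3: frames 3..29
# of the current period plus frames 30..32 of the next one.
frames = compensation_frames_for_period(list(range(0, 30)), list(range(30, 60)), 3)
```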
5. The method of claim 1, wherein
calculating the first pose parameter in the real three-dimensional scene according to the N reference image frames comprises:
when N is 1, calculating the first pose parameter in the real three-dimensional scene according to the single reference image frame; and
when N is greater than 1, calculating a pose parameter in the real three-dimensional scene according to each reference image frame respectively, and taking the average of those pose parameters as the first pose parameter; and
calculating the second pose parameter in the virtual three-dimensional scene according to the N reference image frames comprises:
when N is 1, calculating the second pose parameter in the virtual three-dimensional scene according to the single reference image frame; and
when N is greater than 1, calculating a pose parameter in the virtual three-dimensional scene according to each reference image frame respectively, and taking the average of those pose parameters as the second pose parameter.
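The averaging of claim 5 might be sketched as below. This is illustrative only: element-wise averaging of rotation components is a rough approximation, and a practical implementation may need proper rotation averaging (e.g. on quaternions); the names are hypothetical:

```python
import numpy as np

def average_pose_parameter(pose_parameters):
    """Average pose parameter over N reference frames.

    pose_parameters: list of pose vectors (e.g. translation plus rotation
    parameters). With N == 1 the single pose is returned unchanged;
    with N > 1 the element-wise mean is taken.
    """
    poses = np.asarray(pose_parameters, dtype=float)
    return poses[0] if len(poses) == 1 else poses.mean(axis=0)

avg = average_pose_parameter([[0.0, 0.0, 0.0], [2.0, 4.0, 6.0]])
```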
6. The method of claim 1, wherein
calculating, according to the first transformation, the third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames comprises:
performing the following for each compensation image frame:
calculating a fourth pose parameter of the compensation image frame in the real three-dimensional scene, and calculating the third pose parameter of the compensation image frame in the virtual three-dimensional scene according to the first transformation and the fourth pose parameter.
7. The method of claim 1, wherein
calculating, according to the first transformation, the third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames comprises:
performing the following for the compensation image frame closest to at least one of the reference image frames:
calculating a fourth pose parameter of the compensation image frame in the real three-dimensional scene, calculating the third pose parameter of the compensation image frame in the virtual three-dimensional scene according to the first transformation and the fourth pose parameter, and calculating a second transformation from the fourth pose parameter to the third pose parameter; and
performing the following sequentially for each remaining compensation image frame:
determining a fourth pose parameter of the compensation image frame in the real three-dimensional scene, calculating the third pose parameter of the compensation image frame in the virtual three-dimensional scene according to the fourth pose parameter and the second transformation corresponding to the preceding non-reference image frame, and calculating a second transformation from the fourth pose parameter to the third pose parameter.
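The chained update of claim 7 can be sketched with hypothetical 4x4 pose matrices. Note that in this idealized sketch the second transformation coincides with the first, because the fourth poses are used exactly as given; the chaining becomes meaningful in practice, where each fourth pose is independently re-estimated. All names are illustrative:

```python
import numpy as np

def chained_third_poses(first_transformation, fourth_poses):
    """Propagate the transformation frame-to-frame.

    The compensation frame closest to a reference frame uses the first
    transformation; each subsequent frame reuses the second transformation
    computed for the previous frame.
    """
    transformation = first_transformation
    third_poses = []
    for fourth in fourth_poses:
        third = transformation @ fourth                  # third pose parameter
        transformation = third @ np.linalg.inv(fourth)   # second transformation
        third_poses.append(third)
    return third_poses

# Toy example: a pure-translation transformation, identity fourth poses.
T0 = np.eye(4)
T0[:3, 3] = [0.5, 0.0, 0.0]
third_poses = chained_third_poses(T0, [np.eye(4), np.eye(4)])
```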
8. A pose rendering apparatus, comprising:
a first determination module configured to determine N reference image frames, N being an integer greater than 0;
a first calculation module configured to calculate, according to the N reference image frames, a first pose parameter in a real three-dimensional scene and a second pose parameter in a virtual three-dimensional scene;
a second calculation module configured to calculate a first transformation from the first pose parameter to the second pose parameter;
a third calculation module configured to calculate, according to the first transformation, a third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames; and
a rendering module configured to perform pose rendering on the corresponding compensation image frame in the virtual three-dimensional scene according to the third pose parameter.
9. The apparatus of claim 8, wherein
the first determination module is further configured to determine the N reference image frames by: determining the N reference image frames at intervals of a set duration, or determining the N reference image frames at intervals of a set number of frames; and
the apparatus further comprises:
a second determination module configured to determine a plurality of compensation image frames corresponding to the N reference image frames according to an image frame acquisition rate and the N reference image frames.
10. The apparatus of claim 9, wherein
the second determination module is further configured to determine, within a first set duration or within a first acquisition period in which image frames of a first set number of frames are acquired, the plurality of compensation image frames corresponding to the N reference image frames according to the image frame acquisition rate and the number of the N reference image frames by:
determining, according to the image frame acquisition rate, a first frame number of consecutive image frames contained within the set duration; or taking the set number of frames as the first frame number;
determining, according to the number and positions of the N reference image frames, the computation duration required to calculate the first pose parameter in the real three-dimensional scene corresponding to the N reference image frames while image frames are acquired in real time, and calculating, according to the image frame acquisition rate, a second frame number corresponding to the computation duration; and
removing the consecutive image frames of the second frame number from the consecutive image frames of the first frame number counted from the earliest-acquired of the N reference image frames, the remaining image frames serving as the compensation image frames corresponding to the N reference image frames.
11. The apparatus of claim 10, wherein
the second determination module is further configured to determine, in a non-first set duration or a non-first acquisition period, the plurality of compensation image frames corresponding to the N reference image frames according to the image frame acquisition rate and the number of the N reference image frames by:
taking as the compensation image frames, within the current set duration or current acquisition period, the image frames remaining after removing the consecutive image frames of the second frame number, together with the consecutive image frames of the second frame number at the beginning of the next set duration or next acquisition period.
12. The apparatus of claim 8, wherein
the first calculation module is further configured to calculate the first pose parameter in the real three-dimensional scene according to the N reference image frames by:
when N is 1, calculating the first pose parameter in the real three-dimensional scene according to the single reference image frame; and
when N is greater than 1, calculating a pose parameter in the real three-dimensional scene according to each reference image frame respectively, and taking the average of those pose parameters as the first pose parameter;
and to calculate the second pose parameter in the virtual three-dimensional scene according to the N reference image frames by:
when N is 1, calculating the second pose parameter in the virtual three-dimensional scene according to the single reference image frame; and
when N is greater than 1, calculating a pose parameter in the virtual three-dimensional scene according to each reference image frame respectively, and taking the average of those pose parameters as the second pose parameter.
13. The apparatus of claim 8, wherein
the third calculation module is further configured to calculate, according to the first transformation, the third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames by:
performing the following for each compensation image frame:
calculating a fourth pose parameter of the compensation image frame in the real three-dimensional scene, and calculating the third pose parameter of the compensation image frame in the virtual three-dimensional scene according to the first transformation and the fourth pose parameter.
14. The apparatus of claim 8, wherein
the third calculation module is further configured to calculate, according to the first transformation, the third pose parameter in the virtual three-dimensional scene for each compensation image frame corresponding to the N reference image frames by:
performing the following for the compensation image frame closest to at least one of the reference image frames:
calculating a fourth pose parameter of the compensation image frame in the real three-dimensional scene, calculating the third pose parameter of the compensation image frame in the virtual three-dimensional scene according to the first transformation and the fourth pose parameter, and calculating a second transformation from the fourth pose parameter to the third pose parameter; and
performing the following sequentially for each remaining compensation image frame:
determining a fourth pose parameter of the compensation image frame in the real three-dimensional scene, calculating the third pose parameter of the compensation image frame in the virtual three-dimensional scene according to the fourth pose parameter and the second transformation corresponding to the preceding non-reference image frame, and calculating a second transformation from the fourth pose parameter to the third pose parameter.
15. A pose rendering apparatus, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the executable instructions in the memory to implement the steps of the method of any one of claims 1 to 7.
16. A non-transitory computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 7.
CN202010745000.XA 2020-07-29 2020-07-29 Pose rendering method, device and medium Pending CN111862288A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010745000.XA CN111862288A (en) 2020-07-29 2020-07-29 Pose rendering method, device and medium

Publications (1)

Publication Number Publication Date
CN111862288A true CN111862288A (en) 2020-10-30

Family

ID=72945462


Country Status (1)

Country Link
CN (1) CN111862288A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629248A (en) * 2017-03-24 2018-10-09 Chengdu Idealsee Technology Co., Ltd. Method and apparatus for realizing augmented reality
CN108682038A (en) * 2018-04-27 2018-10-19 Tencent Technology (Shenzhen) Co., Ltd. Pose determination method, apparatus and storage medium
CN108765563A (en) * 2018-05-31 2018-11-06 Beijing Baidu Netcom Science and Technology Co., Ltd. AR-based SLAM algorithm processing method, apparatus and device
CN108932051A (en) * 2017-05-24 2018-12-04 Tencent Technology (Beijing) Co., Ltd. Augmented reality image processing method, apparatus and storage medium
CN109035303A (en) * 2018-08-03 2018-12-18 Baidu Online Network Technology (Beijing) Co., Ltd. SLAM system camera tracking method and device, and computer-readable storage medium
US20190096081A1 * 2017-09-28 2019-03-28 Samsung Electronics Co., Ltd. Camera pose determination and tracking
CN109636916A (en) * 2018-07-17 2019-04-16 Beijing Institute of Technology Dynamically calibrated large-scale virtual reality roaming system and method
CN109712172A (en) * 2018-12-28 2019-05-03 Harbin Institute of Technology Pose measurement method combining initial pose measurement with target tracking
US20190197709A1 * 2017-12-21 2019-06-27 Microsoft Technology Licensing, Llc Graphical coordinate system transform for video frames
CN110163909A (en) * 2018-02-12 2019-08-23 Beijing Samsung Telecom R&D Center Method, apparatus and storage medium for obtaining device pose
CN110379017A (en) * 2019-07-12 2019-10-25 Beijing Dajia Internet Information Technology Co., Ltd. Scene construction method, apparatus, electronic device and storage medium
WO2019223463A1 (en) * 2018-05-22 2019-11-28 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and apparatus, storage medium, and computer device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIN-CHUN PIAO et al.: "Real-Time Visual–Inertial SLAM Based on Adaptive Keyframe Selection for Mobile AR Applications", IEEE TRANSACTIONS ON MULTIMEDIA, no. 11, 30 November 2019 (2019-11-30), pages 2827, XP011752245, DOI: 10.1109/TMM.2019.2913324 *
ZHANG, Haoruo: "Research on six-degree-of-freedom pose estimation of weakly textured objects for robotic grasping", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 06, 15 June 2020 (2020-06-15), pages 140 - 33 *
HONG, Liang; FENG, Chang: "SLAM algorithm based on RGB-D camera data", Electronic Design Engineering, no. 09, 31 May 2018 (2018-05-31), pages 147 - 157 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination