CN114255258A - Object implantation method of multi-degree-of-freedom video, electronic device and storage medium - Google Patents

Object implantation method of multi-degree-of-freedom video, electronic device and storage medium Download PDF

Info

Publication number
CN114255258A
CN114255258A CN202111312711.9A
Authority
CN
China
Prior art keywords
virtual
coordinate system
space coordinate
real
degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111312711.9A
Other languages
Chinese (zh)
Inventor
陈旭
李嘉伟
李静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202111312711.9A priority Critical patent/CN114255258A/en
Publication of CN114255258A publication Critical patent/CN114255258A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Abstract

The embodiments of the application provide an object implantation method for a multi-degree-of-freedom video, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring coordinates of a real camera matrix in a real space coordinate system, wherein the real camera matrix is used for shooting the multi-degree-of-freedom video; constructing a virtual space coordinate system according to the coordinates of the real camera matrix in the real space coordinate system, wherein the virtual space coordinate system is the coordinate system corresponding to a virtual camera matrix in a virtual space; mapping the track information of the real camera matrix in the real space coordinate system into track information of the virtual camera matrix in the virtual space coordinate system; and associating the track information of the virtual camera matrix in the virtual space coordinate system with a virtual object to be implanted into the multi-degree-of-freedom video, so as to obtain implantation information for implantation into the multi-degree-of-freedom video. The embodiments of the application can thereby provide a foundation for accurately matching the virtual object with the multi-degree-of-freedom video.

Description

Object implantation method of multi-degree-of-freedom video, electronic device and storage medium
Technical Field
The embodiment of the application relates to the technical field of videos, in particular to an object implantation method of a multi-degree-of-freedom video, an electronic device and a storage medium.
Background
A multi-degree-of-freedom video is an immersive video that allows a user to adjust the viewing angle, thereby providing an immersive viewing experience. Virtual objects such as AR (Augmented Reality) elements can be implanted into a multi-degree-of-freedom video during rendering. However, because the viewing angle of a multi-degree-of-freedom video can change, conventional techniques for implanting virtual objects into video have difficulty accurately following these viewing-angle changes, and the implanted virtual object often fails to match the picture of the multi-degree-of-freedom video accurately. Therefore, how to provide an object implantation scheme for multi-degree-of-freedom video that lays a foundation for accurately matching a virtual object with the multi-degree-of-freedom video has become a technical problem to be urgently solved by those skilled in the art.
Disclosure of Invention
In view of this, embodiments of the present application provide an object implantation method for a multiple degree of freedom video, an electronic device, and a storage medium, so as to provide a basis for accurate matching between a virtual object and the multiple degree of freedom video.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions.
In a first aspect, an embodiment of the present application provides an object implantation method for a multiple degree of freedom video, including:
acquiring coordinates of a real camera matrix in a real space coordinate system, wherein the real camera matrix is used for shooting a multi-degree-of-freedom video;
constructing a virtual space coordinate system according to the coordinates of the real camera matrix in a real space coordinate system, wherein the virtual space coordinate system is a coordinate system corresponding to the virtual camera matrix in a virtual space;
mapping the track information of the real camera matrix in the real space coordinate system into the track information of the virtual camera matrix in the virtual space coordinate system;
and associating the track information of the virtual camera matrix in the virtual space coordinate system with a virtual object to be implanted into the multi-degree-of-freedom video, so as to obtain implantation information for implantation into the multi-degree-of-freedom video.
In a second aspect, an embodiment of the present application provides an electronic device, including: at least one memory and at least one processor; the memory stores one or more computer-executable instructions that are invoked by the processor to perform the method for object implantation of multiple degree of freedom video as described in the first aspect above.
In a third aspect, embodiments of the present application provide a storage medium storing one or more computer-executable instructions that, when executed, implement the method for object implantation of multiple degrees of freedom video according to the first aspect.
According to the embodiments of the application, a virtual space coordinate system mapped to the real space coordinate system can be constructed based on the coordinates of the real camera matrix in the real space coordinate system, so that the shooting space of the real camera matrix is reproduced in the virtual space; the track information of the real camera matrix in the real space coordinate system is mapped into track information of the virtual camera matrix in the virtual space coordinate system, so that the trajectory of the real camera matrix is reproduced in the virtual space and the motion of the real camera matrix is matched with the motion of the virtual camera matrix; and the track information of the virtual camera matrix in the virtual space coordinate system is further associated with a virtual object to be implanted into the multi-degree-of-freedom video, so as to obtain implantation information for implantation into the multi-degree-of-freedom video. Because the implantation information carries the virtual object together with the track information of the virtual camera matrix in the virtual space coordinate system, when the virtual object is implanted into the multi-degree-of-freedom video it can be accurately matched, through the associated track information, with each shooting angle of the real camera matrix, so that the virtual object and the multi-degree-of-freedom video match accurately. By constructing a virtual space coordinate system mapped to the real space coordinate system in which the real camera matrix is located, and reproducing the trajectory of the real camera matrix in that virtual space coordinate system, the embodiments of the application enable the virtual object to accurately match each shooting angle of the real camera matrix based on the camera trajectory reproduced in the virtual space coordinate system, which provides a foundation for accurately matching the virtual object with the multi-degree-of-freedom video and makes it possible for the virtual object to follow changes of the picture viewing angle of the multi-degree-of-freedom video.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1A is a schematic diagram of an arrangement of a real camera matrix.
Fig. 1B is an exemplary diagram of implantation of AR elements in a multiple degree of freedom video.
Fig. 2A is a schematic diagram of an execution stage of a subject implantation method according to an embodiment of the present disclosure.
Fig. 2B is a schematic mapping diagram of a real space coordinate system and a virtual space coordinate system according to an embodiment of the present disclosure.
Fig. 3A is a flowchart of a method for implanting a subject according to an embodiment of the present disclosure.
Fig. 3B is a flowchart of a method for constructing a virtual space coordinate system according to an embodiment of the present disclosure.
Fig. 3C is an exemplary diagram of a translation and rotation matrix of a real camera in a real space coordinate system.
Fig. 4 is a flowchart of a method for associating trajectory information of a virtual camera matrix with a virtual object according to an embodiment of the present disclosure.
Fig. 5 is a flowchart of a method for implanting a multi-degree-of-freedom video into a virtual object according to an embodiment of the present disclosure.
Fig. 6 is a diagram illustrating an example of a process for implanting an AR element into a 6Dof video according to an embodiment of the present application.
Fig. 7 is a block diagram of a multiple degree of freedom video object implantation device provided in an embodiment of the present application.
Fig. 8 is a block diagram of an electronic device.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The multiple degree of freedom video referred to in the embodiments of the present application includes, for example, three-degree-of-freedom (3DoF) video, six-degree-of-freedom (6DoF) video, and the like. A three-degree-of-freedom video has rotational degrees of freedom about the X, Y and Z axes (also described as yaw, pitch and roll); that is, it allows the user to adjust the viewing angle by rotating about the X, Y and Z axes. Typical three-degree-of-freedom videos include panoramic video and the like. A six-degree-of-freedom video has translational degrees of freedom along the X, Y and Z axes in addition to the rotational degrees of freedom about the X, Y and Z axes; that is, it allows the user to adjust the viewing angle both by translating along the X, Y and Z axes and by rotating about the X, Y and Z axes.
The multiple degree of freedom video may be obtained by shooting with a real camera matrix formed by multiple real cameras. For convenience of description, taking shooting a basketball game as an example, fig. 1A schematically illustrates an arrangement of the real camera matrix. As shown in fig. 1A, a photographer may set up n real cameras C1 to Cn (the specific value of n may be determined according to actual conditions) along a certain path in the shooting space of the basketball game; Cm in fig. 1A is a real camera in the middle of the n real cameras, Cn-1 is the real camera preceding the n-th real camera Cn, and the n real cameras C1 to Cn may be set up along an arc. The real cameras C1 to Cn can shoot videos at different angles, and these videos are combined with the parameters of the real cameras C1 to Cn to obtain the multi-degree-of-freedom video. For example, the texture map captured by each of the real cameras C1 to Cn, the parameters of each real camera (including the camera's internal and external parameters) and the depth map of each real camera may form multi-angle video data. Based on the multi-angle video data, virtual viewpoints of the real camera matrix can be generated, thereby providing a multi-degree-of-freedom video experience. Since the real camera matrix comprises a plurality of real cameras, the virtual viewpoints of the real camera matrix may comprise the virtual viewpoints of the individual real cameras.
It should be noted that the virtual viewpoint is a three-dimensional concept representing the shooting angle of the real camera, and can be represented by coordinates with multiple degrees of freedom. Taking six degrees of freedom as an example, the virtual viewpoint may be represented by coordinates of six degrees of freedom, wherein the spatial position of the virtual viewpoint may be represented as (x, y, z), and the viewing angle may be represented as three rotational directions.
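Purely as an illustration (the application does not prescribe any particular data structure), a six-degree-of-freedom virtual viewpoint could be held as a spatial position plus three rotation directions, for example:

```python
from dataclasses import dataclass

@dataclass
class VirtualViewpoint:
    """A six-degree-of-freedom virtual viewpoint: spatial position plus viewing angle."""
    # Spatial position of the viewpoint
    x: float
    y: float
    z: float
    # Viewing angle, expressed as three rotation directions
    yaw: float
    pitch: float
    roll: float
```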
When a multi-degree-of-freedom video is rendered and displayed, virtual objects such as AR elements often need to be implanted to enrich the displayed content or make the video more engaging. For convenience of illustration, taking the basketball game shot by the real camera matrix illustrated in fig. 1A as an example, an AR element (e.g., a map or the like) expressing the game score may be embedded in the shot multi-degree-of-freedom video. Fig. 1B schematically shows an example of implanting an AR element into a multi-degree-of-freedom video: as shown in fig. 1B, the AR element expressing the score "3:2" (shown in the virtual box in fig. 1B) is implanted to enrich the content presented by the multi-degree-of-freedom video.
Because the viewing angle of a multi-degree-of-freedom video can change, implanting virtual objects such as AR elements at fixed positions of the multi-degree-of-freedom video in the traditional special-effect manner makes it difficult for the implanted virtual objects to follow the changes of the picture viewing angle; the implanted virtual objects then fail to align with the picture of the multi-degree-of-freedom video, and the user perceives visual flaws when watching it.
In order to solve the above problems, embodiments of the present application provide a novel object implantation scheme for a multi-degree-of-freedom video, in which a virtual space coordinate system mapped with a real space coordinate system where a real camera matrix is located is constructed, and trajectory recurrence of the real camera matrix is realized in the virtual space coordinate system, so that a virtual object can accurately match each shooting position and angle of the real camera matrix based on a camera trajectory that is recurring in the virtual space coordinate system, and a basis is provided for accurate matching of the virtual object and the multi-degree-of-freedom video.
In some embodiments, fig. 2A schematically illustrates an implementation stage of a method for implanting a subject in multiple degrees of freedom video according to an embodiment of the present application. As shown in fig. 2A, the execution phase provided by the embodiment of the present application may include: a virtual space coordinate system construction phase 211, a trajectory reconstruction phase 212 and a virtual object association phase 213.
In the virtual space coordinate system constructing stage 211, a virtual space coordinate system mapped with the real space coordinate system in which the real camera matrix is located may be constructed in the embodiment of the present application. It should be noted that the real space coordinate system is a coordinate system of the real camera matrix, for example, a coordinate system corresponding to the shooting space of the real camera matrix; the virtual space coordinate system is a coordinate system corresponding to the virtual camera matrix. The real camera matrix may have a corresponding virtual camera matrix in the virtual space, for example, one real camera corresponds to one virtual camera in the virtual space, and a plurality of virtual cameras form the virtual camera matrix. In some embodiments, a real camera may be used for video capture and a virtual camera may be used for picture rendering.
For convenience of illustration, fig. 2B exemplarily shows a mapping diagram of the real space coordinate system and the virtual space coordinate system. As shown in fig. 2B, the real space coordinate system xyz (with origin o) may be mapped to the constructed virtual space coordinate system x'y'z'o', so that the shooting space of the real camera matrix is reproduced in the virtual space coordinate system x'y'z'o'. In some embodiments, the virtual space coordinate system may be constructed based on the coordinates of the real camera matrix in the real space coordinate system.
In the track reconstruction stage 212, the embodiment of the present application may reconstruct a track of the virtual camera matrix corresponding to a track of the real camera matrix in the virtual space coordinate system, so as to obtain track information of the virtual camera matrix in the multi-degree-of-freedom video. In some embodiments, the trajectory of the real camera matrix expresses the trajectory information of the real camera matrix in each frame of the multi-degree-of-freedom video (for example, in a six-degree-of-freedom video, trajectory information corresponding to each virtual viewpoint of each frame of the six-degree-of-freedom video, and one virtual viewpoint corresponds to a shooting angle of one real camera).
Based on the virtual space coordinate system construction stage 211 and the track reconstruction stage 212, the embodiment of the application can construct a virtual space coordinate system mapped with a real space coordinate system, and realize the track information determination of a virtual camera matrix in a multi-degree-of-freedom video in the virtual space coordinate system, so that the track information of the virtual camera matrix in the multi-degree-of-freedom video corresponds to the track information of a real camera matrix in the multi-degree-of-freedom video, thereby realizing the track reproduction of the real camera matrix in the virtual space coordinate system and achieving the matching of the motion of the real camera matrix and the motion of the virtual camera matrix.
In the virtual object association stage 213, the embodiment of the present application may associate the trajectory information of the virtual camera matrix in the multi-degree-of-freedom video with the virtual object implanted with the multi-degree-of-freedom video, so as to provide implantation information for implanting the multi-degree-of-freedom video. Based on the fact that the track information of the virtual camera matrix in the multi-degree-of-freedom video corresponds to the track information of the real camera matrix in the multi-degree-of-freedom video, the virtual object implanted into the multi-degree-of-freedom video is associated with the track information of the virtual camera matrix, the virtual object can be accurately matched with each shooting angle of the real camera matrix, and a foundation is provided for accurate matching of the virtual object and the multi-degree-of-freedom video.
As an alternative implementation of the execution stage shown in fig. 2A, fig. 3A schematically shows a flowchart of an object implantation method of multiple degrees of freedom video provided by an embodiment of the present application. The method flow chart can be implemented by an electronic device with data processing capability, for example, by a server. The server can be a server for processing videos shot by a real camera matrix and obtaining multi-degree-of-freedom videos, and can also be an independent server. Referring to fig. 3A, the method flow may include the following steps.
In step S310, coordinates of a real camera matrix in a real space coordinate system are acquired, the real camera matrix being used for shooting a multi-degree-of-freedom video.
In the embodiment of the present application, a real camera matrix for shooting the multi-degree-of-freedom video can be erected according to the characteristics of the shooting site, for example, a six-degree-of-freedom real camera matrix for shooting six-degree-of-freedom video. The erected real camera matrix has corresponding coordinates in the real space coordinate system. Since the real camera matrix comprises a plurality of real cameras, the coordinates of the real camera matrix in the real space coordinate system may comprise the coordinates of the plurality of real cameras in the real space coordinate system, which express the distribution positions of the plurality of real cameras in the real space coordinate system.
Based on the real camera matrix erected at the shooting site, the embodiment of the application can calibrate the real camera matrix on site and return the trajectory and the parameters of the real camera matrix after the test is passed. In some embodiments, the coordinates of the real camera matrix may be carried in the parameters of the real camera matrix, and the embodiment of the present application may obtain the coordinates of the real camera matrix in the real space coordinate system by reading the parameters of the real camera matrix.
The purpose of camera calibration is to determine the parameter values of the camera. The parameters of the camera to be calibrated can be divided into internal parameters and external parameters. In some embodiments, the parameters of the real camera matrix may include external and internal parameters of the real camera matrix. The external parameters represent parameters of the camera in a field space coordinate system, such as the position, the rotation direction and the like of the camera. The internal reference represents parameters of the camera's own characteristics, such as the focal length, pixel size, etc. of the camera. As an alternative implementation, the embodiment of the present application may read the external parameters of the real camera matrix, so as to obtain the coordinates of the real camera matrix in the real space coordinate system from the external parameters of the real camera matrix, for example, obtain the coordinates of each real camera in the real space coordinate system from the external parameters of each real camera.
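As an illustrative sketch only (not taken from the application), if a real camera's external parameters are given in the common world-to-camera form, its position in the real space coordinate system can be recovered from them as follows:

```python
import numpy as np

def camera_center(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Recover one real camera's position in the real space coordinate system.

    Assumes the common convention in which the external parameters map a world
    point X to camera coordinates as X_cam = R @ X + t; the camera center in
    world (real space) coordinates is then C = -R^T @ t.
    """
    return -R.T @ t

# Example: given a list of (R, t) pairs, one per real camera in the matrix,
# the coordinates of the real camera matrix would be:
# coords = [camera_center(R, t) for R, t in extrinsics]
```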
In step S320, a virtual space coordinate system is constructed according to the coordinates of the real camera matrix in the real space coordinate system, and the virtual space coordinate system is a coordinate system corresponding to the virtual camera matrix in the virtual space.
After the coordinates of the real camera matrix in the real space coordinate system are obtained, the virtual space coordinate system having a mapping relation with the real space coordinate system can be constructed in the embodiment of the application, so that the shooting space recurrence of the real camera matrix is realized in the virtual space, the virtual space can be provided with the virtual camera matrix corresponding to the real camera matrix, and the virtual camera matrix can be used for picture rendering. In some embodiments, the virtual camera matrix may include a plurality of virtual cameras, one virtual camera corresponding to one real camera.
In some embodiments, the real space coordinate system and the virtual space coordinate system may both be three-dimensional coordinate systems, the real space coordinate system may be represented as xyz, and the virtual space coordinate system may be represented as x 'y' z 'o', and embodiments of the present application may construct an origin o ', an x' axis, a y 'axis, and a z' axis of the virtual space coordinate system based on coordinates of the plurality of real cameras in the real space coordinate system, so as to implement constructing the virtual space coordinate system.
As an alternative implementation, with reference to fig. 1A, there are n real cameras C1 to Cn in the real camera matrix, where Cm is the real camera in the middle of the n real cameras. The embodiment of the present application may construct the origin o' of the virtual space coordinate system and the x-axis basis vector (denoted x') of the virtual space coordinate system based on the coordinates of the first real camera C1 and the last real camera Cn in the real camera matrix; construct the z-axis basis vector (denoted z') of the virtual space coordinate system based on the coordinates of the middle real camera Cm, the first real camera C1 and the last real camera Cn; and construct the y-axis basis vector (denoted y') of the virtual space coordinate system based on the x-axis basis vector (x') and the z-axis basis vector (z') of the virtual space coordinate system; thereby forming the virtual space coordinate system x'y'z'o' from the origin o', the x-axis basis vector (x'), the y-axis basis vector (y') and the z-axis basis vector (z').
In some embodiments, a mapping relationship may exist between the virtual space coordinate system and the real space coordinate system constructed by the embodiments of the present application, for example, based on the real space coordinate system and the virtual space coordinate system, the embodiments of the present application may determine the mapping relationship where the coordinates in the real space coordinate system are mapped to the virtual space coordinate system. In some embodiments, the mapping relationship may be represented by a mapping matrix, for example, coordinates in a real space coordinate system may be converted into coordinates in a virtual space coordinate system based on the mapping matrix. It should be noted that, in the embodiment of the present application, after the virtual space coordinate system is constructed based on the coordinates of the real camera matrix in the real space coordinate system, the mapping relationship (for example, the mapping matrix) between the real space coordinate system and the virtual space coordinate system is determined based on the constructed virtual space coordinate system.
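As an illustrative sketch only (the application does not specify the form of the mapping matrix), one way such a matrix could be assembled from the constructed origin and basis vectors, assuming they are expressed in real space coordinates and that 4x4 homogeneous coordinates are used:

```python
import numpy as np

def mapping_matrix(origin, x_axis, y_axis, z_axis) -> np.ndarray:
    """Assemble a 4x4 mapping matrix from the real space coordinate system to
    the virtual space coordinate system.

    origin, x_axis, y_axis, z_axis: the origin o' and the x', y', z' basis
    vectors of the virtual space coordinate system, expressed in real space
    coordinates. A homogeneous point p_real then maps to p_virtual = M @ p_real,
    i.e. R @ (p - o') with R's rows being the basis vectors.
    """
    R = np.vstack([x_axis, y_axis, z_axis])   # rows are the basis vectors
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = -R @ np.asarray(origin, dtype=float)
    return M
```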
In step S330, determining trajectory information of the virtual camera matrix in the virtual space coordinate system according to the trajectory information of the real camera matrix in the real space coordinate system.
The track information of the real camera matrix in the real space coordinate system can represent the track of the real camera matrix in the real space coordinate system when the real camera matrix carries out multi-degree-of-freedom video shooting. For example, the real camera matrix is a trajectory of each frame of the multi-degree-of-freedom video shot in the real space coordinate system. The real camera matrix comprises a plurality of real cameras, and the track information of the real camera matrix in the real space coordinate system can comprise track information of the plurality of real cameras in the real space coordinate system respectively.
After the virtual space coordinate system is constructed, based on the fact that the mapping relation exists between the virtual space coordinate system and the real space coordinate system, the trajectory information of the real camera matrix in the real space coordinate system can be mapped into the trajectory information of the virtual camera matrix in the virtual space coordinate system, and therefore the trajectory of the real camera matrix in the real space coordinate system can be reproduced through the trajectory of the virtual camera matrix in the virtual space coordinate system. As an optional implementation, in the embodiment of the present application, the trajectory information of each frame of the multiple degree of freedom video in the real space coordinate system of the real camera matrix may be mapped to the trajectory information of each frame of the video in the virtual space coordinate system of the virtual camera matrix. For example, based on that the real camera matrix includes multiple real cameras, the embodiment of the present application may map the trajectory information of each real camera in each frame of the multiple degree of freedom video to the trajectory information of each frame of the multiple degree of freedom video in the virtual space coordinate system of the corresponding virtual camera.
In some further embodiments, the trajectory information of the real camera matrix in a frame of the multi-degree-of-freedom video may be expressed by the translation and rotation matrices of the real camera matrix at each virtual viewpoint of that frame. As an optional implementation, since one virtual viewpoint corresponds to the shooting angle of one real camera, the embodiment of the present application may map the translation and rotation matrix of each real camera at its virtual viewpoint in each frame of the multi-degree-of-freedom video to the translation and rotation matrix of the corresponding virtual camera at its virtual viewpoint in each frame of the multi-degree-of-freedom video.
In step S340, the trajectory information of the virtual camera matrix in the virtual space coordinate system is associated with a virtual object implanted with the multi-degree-of-freedom video, so as to obtain implantation information for implanting the multi-degree-of-freedom video.
In some embodiments, the trajectory information of each frame of video of the virtual camera matrix in the virtual space coordinate system may be associated with a virtual object implanted in each frame of multiple degrees of freedom video, so as to obtain implantation information for implanting each frame of multiple degrees of freedom video. Furthermore, based on the implantation information of each frame of the multi-degree-of-freedom video, when the virtual object is implanted into each frame of the multi-degree-of-freedom video, the virtual object can be accurately matched with the multi-degree-of-freedom video based on the track information of the associated virtual camera matrix. It can be understood that after the trajectory information of each frame of the multi-degree-of-freedom video of the plurality of real cameras is mapped to the trajectory information of each frame of the multi-degree-of-freedom video of the corresponding virtual camera in the virtual space coordinate system, the trajectory information of each virtual camera can correspond to the trajectory information of each real camera, so that when a virtual object is implanted into each frame of the multi-degree-of-freedom video, the virtual object can correspond to the trajectory information of each real camera through the associated trajectory information of each virtual camera, and therefore when the visual angle of each frame of the multi-degree-of-freedom video changes, the embodiment of the application can enable the virtual object to be correspondingly adjusted based on the visual angle change of each frame of the multi-degree-of freedom video.
According to the embodiments of the application, a virtual space coordinate system mapped to the real space coordinate system can be constructed based on the coordinates of the real camera matrix in the real space coordinate system, so that the shooting space of the real camera matrix is reproduced in the virtual space; the track information of the real camera matrix in the real space coordinate system is mapped into track information of the virtual camera matrix in the virtual space coordinate system, so that the trajectory of the real camera matrix is reproduced in the virtual space and the motion of the real camera matrix is matched with the motion of the virtual camera matrix; and the track information of the virtual camera matrix in the virtual space coordinate system is further associated with a virtual object to be implanted into the multi-degree-of-freedom video, so as to obtain implantation information for implantation into the multi-degree-of-freedom video. Because the implantation information carries the virtual object together with the track information of the virtual camera matrix in the virtual space coordinate system, when the virtual object is implanted into the multi-degree-of-freedom video it can be accurately matched, through the associated track information, with each shooting angle of the real camera matrix, so that the virtual object and the multi-degree-of-freedom video match accurately. By constructing a virtual space coordinate system mapped to the real space coordinate system in which the real camera matrix is located, and reproducing the trajectory of the real camera matrix in that virtual space coordinate system, the embodiments of the application enable the virtual object to accurately match each shooting angle of the real camera matrix based on the camera trajectory reproduced in the virtual space coordinate system, which provides a foundation for accurately matching the virtual object with the multi-degree-of-freedom video and makes it possible for the virtual object to follow changes of the picture viewing angle of the multi-degree-of-freedom video.
After the real camera matrix is calibrated and the coordinates of the real camera matrix in the real space coordinate system are obtained from its parameters (such as the external parameters), the embodiment of the application can construct a virtual space coordinate system so as to reproduce the shooting space of the real camera matrix in the virtual space. In some embodiments, fig. 3B is a flowchart illustrating a method for constructing a virtual space coordinate system according to an embodiment of the present application. As shown in fig. 3B, the method flow may include the following steps.
In step S321, the midpoint of the line connecting the coordinates of the first real camera and the last real camera is determined as the origin of the virtual space coordinate system.
Since the real camera matrix comprises a plurality of real cameras, based on the coordinates of the first real camera and the last real camera in the real space coordinate system, the line connecting the coordinates of the first real camera and the last real camera can be determined, and the midpoint of this line is determined as the origin of the virtual space coordinate system. In one implementation example, as shown in fig. 1A, the embodiment of the present application may use the midpoint of the line connecting the coordinates of the first real camera C1 and the last real camera Cn as the origin o' of the virtual space coordinate system.
It should be noted that determining the midpoint of the line connecting the first real camera and the last real camera as the origin of the virtual space coordinate system is only one optional way of determining the origin. This midpoint can be regarded as the center of the whole shooting scene, and in the embodiments of the present application the origin of the virtual space coordinate system may also be determined from the midpoint of the line connecting other real cameras, as long as that midpoint coincides with or is close to the center of the shooting scene. In other possible implementations, the midpoint of the line connecting the second real camera and the penultimate real camera may also be selected as the origin of the virtual space coordinate system.
In step S322, the x-axis basis vector of the virtual space coordinate system is determined by normalizing the vector from the coordinates of the first real camera to the coordinates of the last real camera.
Based on the coordinates of the first real camera and the last real camera in the real space coordinate system, the embodiment of the present application can determine the normalized vector from the coordinates of the first real camera to the coordinates of the last real camera, and use it as the x-axis basis vector of the virtual space coordinate system. In an implementation example, as shown in fig. 1A, the embodiment of the present application may normalize the vector from the coordinates of the first real camera C1 to the coordinates of the last real camera Cn, and take the result as the x-axis basis vector (x') of the virtual space coordinate system.
In step S323, the vectors connecting the middle real camera to the first real camera and to the last real camera are cross-multiplied, and the normalized cross product is used as the z-axis basis vector of the virtual space coordinate system.
Based on the coordinates of the middle real camera and the first real camera in the real space coordinate system, the vector connecting the middle real camera and the first real camera can be determined; based on the coordinates of the middle real camera and the last real camera in the real space coordinate system, the vector connecting the middle real camera and the last real camera can be determined; these two vectors are cross-multiplied and the cross product is normalized; the normalized result can then be used as the z-axis basis vector of the virtual space coordinate system. In one implementation example, as shown in fig. 1A, the embodiment of the present application may cross-multiply the vectors connecting the middle real camera Cm with the first real camera C1 and with the last real camera Cn, and then normalize the cross product as the z-axis basis vector (z') of the virtual space coordinate system.
In step S324, the x-axis basis vector and the z-axis basis vector of the virtual space coordinate system are cross-multiplied to obtain a y-axis basis vector of the virtual space coordinate system.
After the x-axis basis vector and the z-axis basis vector of the virtual space coordinate system are obtained, the two vectors can be cross-multiplied to obtain a y-axis basis vector of the virtual space coordinate system.
In step S325, a virtual space coordinate system is constructed based on the origin of the virtual space coordinate system, the x-axis basis vector, the y-axis basis vector, and the z-axis basis vector.
In this way, the embodiment of the application can construct a virtual space coordinate system mapped to the real space coordinate system according to the coordinates of the real camera matrix in the real space coordinate system, so that the shooting space of the real camera matrix is reproduced in the virtual space, which lays a foundation for subsequently reconstructing the trajectory of the real camera matrix in the virtual space.
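A minimal sketch of steps S321 to S325, assuming the coordinates of the first, middle and last real cameras are available as 3-vectors; the order of the cross-product operands (and hence the handedness of the resulting coordinate system) is an assumption, since the application leaves it unspecified:

```python
import numpy as np

def build_virtual_coordinate_system(c_first, c_middle, c_last):
    """Construct the virtual space coordinate system from the coordinates of the
    first, middle and last real cameras in the real space coordinate system.
    Returns the origin o' and the x', y', z' basis vectors."""
    c_first = np.asarray(c_first, dtype=float)
    c_middle = np.asarray(c_middle, dtype=float)
    c_last = np.asarray(c_last, dtype=float)

    # S321: the origin o' is the midpoint of the line joining C1 and Cn
    origin = (c_first + c_last) / 2.0

    # S322: the x-axis basis vector is the normalized vector from C1 to Cn
    x_axis = c_last - c_first
    x_axis /= np.linalg.norm(x_axis)

    # S323: the z-axis basis vector is the normalized cross product of the
    # vectors joining the middle camera Cm to C1 and to Cn
    z_axis = np.cross(c_first - c_middle, c_last - c_middle)
    z_axis /= np.linalg.norm(z_axis)

    # S324: the y-axis basis vector is the cross product of x' and z'
    y_axis = np.cross(x_axis, z_axis)

    # S325: o' and the three basis vectors define the virtual space coordinate system
    return origin, x_axis, y_axis, z_axis
```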
In some embodiments, the trajectory information of the real camera matrix in each frame of the multi-degree-of-freedom video may be expressed by the translation and rotation matrix corresponding to each virtual viewpoint of the real camera matrix in that frame. The real camera matrix comprises a plurality of real cameras, and each virtual viewpoint of the real camera matrix in a frame of the multi-degree-of-freedom video corresponds to the shooting angle of one real camera. It should be noted that the translation and rotation matrix corresponding to a camera's virtual viewpoint in one frame of video represents the rotation and translation that the real-world content captured in that frame undergoes in order to be expressed in the camera's coordinate system.
According to the mapping relationship (for example, the mapping matrix) between the real space coordinate system and the virtual space coordinate system, the embodiment of the present application may map the translation and rotation matrix of the real camera matrix at each virtual viewpoint of each frame of the multi-degree-of-freedom video to the translation and rotation matrix of the virtual camera matrix at each virtual viewpoint of each frame, thereby obtaining the trajectory information of the virtual camera matrix in each frame of the multi-degree-of-freedom video. As an optional implementation, the embodiment of the present application may multiply the mapping matrix by the translation and rotation matrix of the real camera matrix at each virtual viewpoint of each frame of the multi-degree-of-freedom video, so as to obtain the translation and rotation matrix of the virtual camera matrix at each virtual viewpoint of each frame. For example, since the real camera matrix includes multiple real cameras, the embodiment of the present application may multiply the mapping matrix by the translation and rotation matrix corresponding to the virtual viewpoint of each real camera in each frame of the multi-degree-of-freedom video, so as to obtain the translation and rotation matrix corresponding to the virtual viewpoint of each virtual camera in each frame of the multi-degree-of-freedom video.
In one implementation example, fig. 3C illustrates an example of the translation and rotation matrices of the real cameras in the real space coordinate system. As shown in fig. 3C, after the real cameras C1 to Cn are erected, in a frame of the multi-degree-of-freedom video the translation and rotation matrices corresponding to the virtual viewpoints of the real cameras C1 to Cn are rt1 to rtn, respectively; based on the mapping matrix M by which the real space coordinate system xyz is mapped to the virtual space coordinate system x'y'z'o', the embodiment of the present application may multiply each of the translation and rotation matrices rt1 to rtn corresponding to the virtual viewpoints of the real cameras C1 to Cn in that frame by the mapping matrix M, thereby obtaining the translation and rotation matrices corresponding to the virtual viewpoints of the virtual cameras in that frame of the multi-degree-of-freedom video.
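A minimal sketch of this step, assuming both the mapping matrix M and the per-viewpoint translation and rotation matrices are expressed as 4x4 homogeneous matrices (the function and parameter names are illustrative):

```python
import numpy as np

def map_trajectory(M: np.ndarray, rt_real_per_frame):
    """Map the real camera matrix trajectory into the virtual space coordinate system.

    M: 4x4 mapping matrix from the real space coordinate system to the virtual
       space coordinate system.
    rt_real_per_frame: for every frame of the multi-degree-of-freedom video, the
       list of 4x4 translation and rotation matrices rt1..rtn, one per virtual
       viewpoint (i.e. one per real camera).
    Returns, frame by frame, the translation and rotation matrices of the
    corresponding virtual cameras in the virtual space coordinate system.
    """
    return [[M @ rt for rt in frame] for frame in rt_real_per_frame]
```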
After obtaining the trajectory information of the virtual camera matrix in the virtual space coordinate system (for example, in each frame of the multi-degree-of-freedom video, the translation and rotation matrix corresponding to the virtual viewpoint of each virtual camera), the embodiment of the application may associate the trajectory information of the virtual camera matrix in the virtual space coordinate system with the virtual object to be implanted with the multi-degree-of-freedom video, so as to obtain the implantation information. In some embodiments, the embodiments of the present application may fabricate a virtual object such as an AR element, and associate at least one channel information of the virtual object with trajectory information of the virtual camera matrix in a virtual space coordinate system, so as to associate the trajectory information of the virtual camera matrix with the virtual object. As an alternative implementation, fig. 4 is a flowchart illustrating a method for associating trajectory information of a virtual camera matrix with a virtual object according to an embodiment of the present application. As shown in fig. 4, the method flow may include the following steps.
In step S410, at least one type of channel information of the virtual object is extracted, and the at least one type of channel information is aggregated into a channel information sequence of the virtual object.
In some embodiments, the at least one type of channel information of the virtual object comprises at least one of RGB (red, green, blue) channel information, Depth channel information, Alpha channel information, and the like. The embodiment of the present application can extract at least one type of channel information of the virtual object by extracting at least one of the RGB channel information, the Depth channel information and the Alpha channel information of the virtual object. As an alternative implementation, the RGB channel information of the virtual object may be the picture base map of the virtual object carrying its color information; the Depth channel information may be the depth information of the virtual object in the virtual space, linearized so that it corresponds to the shooting depth of the real cameras in the real space; and the Alpha channel information is the transparency information of the virtual object in the virtual space, where the transparency of virtual objects of various materials can be extracted through an inverse blending algorithm.
In some embodiments, the embodiment of the present application can obtain the RGB channel information and the Alpha channel information of the virtual object by rendering the picture of the virtual object to a texture supporting RGBA and sampling it, and obtain the Depth channel information of the virtual object by collecting the depth texture of the real camera matrix and computing the linear depth. In further embodiments, the virtual object may include AR elements, such as AR models/patches, AR particles/special effects, AR elements of opaque or semi-transparent material, UI (user interface) maps, and the like. When calculating the Depth channel information of a virtual object, since AR particles generally carry no depth information, an additional render pass can be added when rendering the AR particles: depth information is filled in the form of an opaque patch, while the color output of that pass is masked so that the acquisition of color and transparency information is not affected.
After obtaining at least one of RGB channel information, Depth channel information, and Alpha channel information of the virtual object, the embodiment of the present application may aggregate the at least one of the channel information into a channel information sequence, where the channel information sequence may include the at least one of the channel information. In optional implementation, the embodiment of the present application may aggregate three types of channel information, namely RGB channel information, Depth channel information, and Alpha channel information, of a virtual object into a multi-channel information sequence; of course, in other possible implementations, only the RGB channel information of the virtual object may be rendered in the embodiment of the present application, for example, only the RGB channel information of the virtual object is used as the channel information sequence, and at this time, because the depth of field fusion of the three-dimensional space is not performed, the sense of reality may have a certain loss.
In step S420, the channel information sequence of the virtual object is associated with the trajectory information of the virtual camera matrix in the virtual space coordinate system, so as to obtain implantation information.
After the channel information sequence of the virtual object is obtained, the embodiment of the application can correlate the channel information sequence of the virtual object with the track information of the virtual camera matrix in the virtual space coordinate system. For example, the channel information sequence of the virtual object to be implanted into each frame of the multi-degree-of-freedom video is associated with the track information of the virtual camera matrix in each frame of the multi-degree-of-freedom video, so as to obtain the implantation information of each frame of the multi-degree-of-freedom video. In some embodiments, the implantation information may be a data packet structure or a data sequence structure, and is used to represent the association relationship between the trajectory information of the virtual camera matrix in the virtual space coordinate system and the channel information sequence of the virtual object.
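Purely as an illustration of one possible packaging (the application only requires that the association be expressed as a data packet structure or a data sequence structure; the names below are hypothetical), per-frame implantation information could look like this:

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class ChannelInfoSequence:
    """Channel information sequence of the virtual object for one video frame."""
    rgb: np.ndarray    # RGB channel information (picture base map with color)
    depth: np.ndarray  # linearized Depth channel information
    alpha: np.ndarray  # Alpha channel (transparency) information

@dataclass
class FrameImplantInfo:
    """Implantation information for one frame of the multi-degree-of-freedom video:
    the virtual object's channel information sequence associated with the
    translation and rotation matrices of the virtual camera matrix in that frame."""
    channels: ChannelInfoSequence
    virtual_camera_rts: List[np.ndarray]  # one 4x4 matrix per virtual viewpoint
```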
After the implantation information of the multi-degree-of-freedom video is obtained, the virtual object can be implanted into the multi-degree-of-freedom video, for example, the virtual object can be implanted into each frame of the multi-degree-of-freedom video. As an alternative implementation, fig. 5 schematically illustrates a flowchart of a method for implanting a virtual object into a multiple degree of freedom video according to an embodiment of the present application. As shown in fig. 5, the method flow may include the following steps.
In step S510, the internal parameters of the real camera matrix are synchronized with the virtual camera matrix.
Implanting a virtual object into the multi-degree-of-freedom video may be regarded as a process of rendering the virtual object in the multi-degree-of-freedom video. Since the virtual camera matrix performs the rendering, when the virtual camera matrix renders a virtual object in the multi-degree-of-freedom video, in order to ensure that the rendering result visually fits the real picture of the multi-degree-of-freedom video, the embodiment of the application can synchronize the internal parameters of the virtual camera matrix with the internal parameters of the real camera matrix. For example, the embodiment of the present application may read the internal parameters of the real camera matrix, so as to synchronize the focal length and distortion parameters of the real camera matrix to the virtual camera matrix.
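A trivial sketch of step S510; the camera objects and attribute names are hypothetical, since the application only requires that the focal length and distortion parameters be synchronized:

```python
def synchronize_intrinsics(real_camera, virtual_camera):
    """Copy the real camera's internal parameters onto the corresponding virtual
    camera so that the rendered result visually fits the real footage."""
    virtual_camera.focal_length = real_camera.focal_length
    virtual_camera.distortion = real_camera.distortion
```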
In step S520, the virtual object in the implantation information is rendered into the multi-degree-of-freedom video by using the internal reference synchronized by the virtual camera matrix, wherein the virtual object rendered into the multi-degree-of-freedom video is associated with the trajectory information of the virtual camera matrix carried in the implantation information in the virtual space coordinate system.
After the virtual camera matrix is synchronously referenced, based on the virtual object to be implanted carried in the implantation information, the virtual object can be rendered into the multi-degree-of-freedom video by using the virtual camera matrix synchronous reference, and the rendered virtual object is associated with the track information of the virtual camera matrix carried in the implantation information in the virtual space coordinate system, so that the virtual object can be adjusted based on the view angle change of the multi-degree-of-freedom video after being rendered into the multi-degree-of-freedom video.
As an optional implementation, if the implantation information carries a channel information sequence of the virtual object, and the channel information sequence includes RGB channel information, Depth channel information, and Alpha channel information, the embodiment of the present application may perform Depth-of-field occlusion and elimination of the RGB channel information in the multi-degree-of-freedom video by using internal parameters synchronized by a virtual camera matrix according to the Depth channel information and the Alpha channel information of the virtual object, so that the RGB channel information and the multi-degree-of-freedom video are fused with a sense of reality, thereby implementing rendering of the virtual object to the multi-degree-of-freedom video.
As an optional implementation, a fusion formula of the channel information sequence of the virtual object and the multiple degree of freedom video may be as follows:
Color = [Depth_ar / Depth_img] * RGB_img + [Depth_img / Depth_ar] * (RGB_ar + (1 - Alpha_ar) * RGB_img);
wherein Color represents the fusion result of the channel information sequence of the virtual object and the multi-degree-of-freedom video, Depth_ar represents the Depth channel information (depth information) of the rendered virtual object, Depth_img represents the depth information of the original picture of the multi-degree-of-freedom video shot by the real camera matrix, RGB_ar represents the color information of the rendered virtual object, RGB_img represents the color information of the original picture of the multi-degree-of-freedom video shot by the real camera matrix, and Alpha_ar represents the Alpha channel information (transparency information) of the rendered virtual object.
That is, the embodiment of the present application uses the integer division result (0 or 1) between the depth of the virtual object and the picture depth of the multi-degree-of-freedom video as the rejection basis, so that each pixel takes either the original picture of the multi-degree-of-freedom video or the fusion of the multi-degree-of-freedom video with the virtual object.
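A per-pixel numpy sketch of this fusion, assuming all maps are pixel-aligned, the depths are strictly positive, and the bracketed terms are integer divisions truncated to 0 or 1 as described above:

```python
import numpy as np

def fuse_frame(rgb_img, depth_img, rgb_ar, depth_ar, alpha_ar):
    """Fuse the rendered virtual object into one frame of the multi-degree-of-freedom video.

    rgb_img (H, W, 3) and depth_img (H, W): color and depth of the original frame.
    rgb_ar (H, W, 3), depth_ar (H, W), alpha_ar (H, W): color, linearized depth and
    transparency of the rendered virtual object.
    Where the virtual object lies behind the scene the original pixel is kept;
    otherwise the object is alpha-blended over the original picture.
    """
    keep_original = np.clip(depth_ar // depth_img, 0, 1)[..., None]  # [Depth_ar / Depth_img]
    keep_object = np.clip(depth_img // depth_ar, 0, 1)[..., None]    # [Depth_img / Depth_ar]
    blended = rgb_ar + (1.0 - alpha_ar)[..., None] * rgb_img
    return keep_original * rgb_img + keep_object * blended
```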
In an implementation example, taking a virtual object as an AR element as an example, fig. 6 is an exemplary diagram illustrating a process of implanting an AR element into a 6Dof video according to an embodiment of the present application. As shown in fig. 6, AR element implant 6Dof video may include the following process.
The real camera matrix is erected 61, for example, a six-degree-of-freedom camera matrix and related servers are erected according to site characteristics of a shooting site.
And calibrating the track and parameters 62 of the real camera matrix, for example, calibrating the real camera matrix on site, and returning the track and the internal and external parameter information of the real camera matrix after the test is passed.
And space and track reconstruction 63, in which the real space where the real camera matrix is located and the shooting track are reconstructed in three dimensions through a reconstruction algorithm (the specific process is described in the corresponding parts above and is not repeated here), so that a highly accurate matching relationship is established between the virtual space and the real space, a unified virtual space coordinate system is generated, and the shooting track of the real camera matrix is reproduced in the virtual space coordinate system. In one example, the virtual space coordinate system may also be referred to as a unified standard space coordinate system.
6Dof video shooting 64: after the real camera matrix is erected and calibrated, the embodiment of the present application can collect 6Dof video pictures in the real space to complete the 6Dof video shooting.
Manufacturing the AR elements 65: after the space and track reconstruction is completed, the embodiment of the present application can construct multiple sets of matched AR content elements according to the target of the 6Dof video shot this time.
Determining the channel information sequence 66: after the AR elements are manufactured, the embodiment of the present application may extract the RGB channel information, Depth channel information, and Alpha channel information of the AR elements and aggregate them into a multi-channel information sequence of the AR elements, and the multi-channel information sequence of the AR elements may be associated with the trajectory information of the virtual camera matrix in the virtual space coordinate system to form a reusable AR sequence. The AR sequence may be regarded as a form of implantation information, and may be a data packet structure or a data sequence structure; for example, the AR sequence may indicate the multi-channel information sequence of the AR elements that need to be implanted in each frame of the 6Dof video, and the translation and rotation matrix of the associated virtual camera matrix at each virtual viewpoint of each frame of the 6Dof video (one possible packaging is sketched after this list of steps).
Fusing the rendering object and the 6Dof video 67: based on the reusable AR sequence, the embodiment of the present application can mix the multi-channel information sequence in the AR sequence with the 6Dof footage, and perform depth-of-field occlusion and culling according to the depth and transparency information, thereby producing the final footage in which the AR elements are implanted into the 6Dof video.
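As a concrete illustration of the reusable AR sequence mentioned in step 66, the following is a minimal Python sketch of one possible packaging, assuming per-frame channel images of the AR element and per-frame, per-viewpoint poses of the associated virtual camera; all field names are illustrative only.

from dataclasses import dataclass, field
from typing import Dict, Tuple
import numpy as np

@dataclass
class ARSequence:
    """Reusable AR sequence: implantation information for one 6Dof video."""
    # frame index -> RGB, Depth and Alpha channel images of the AR element
    rgb: Dict[int, np.ndarray] = field(default_factory=dict)
    depth: Dict[int, np.ndarray] = field(default_factory=dict)
    alpha: Dict[int, np.ndarray] = field(default_factory=dict)
    # (frame index, virtual viewpoint index) -> 4x4 translation-and-rotation matrix
    poses: Dict[Tuple[int, int], np.ndarray] = field(default_factory=dict)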
As an optional implementation, in the embodiment of the present application, after the erection of the real camera matrix is completed, multiple sets of AR content elements may be constructed according to the target of the 6Dof video shot this time, and a reusable AR sequence may be determined based on the reconstructed virtual space coordinate system and track for subsequent implantation into the 6Dof video. In other possible implementations, the embodiment of the present application may also customize the AR elements after the 6Dof video is shot, and determine the reusable AR sequence based on the reconstructed virtual space coordinate system and track, so that the AR elements are implanted into the 6Dof video based on the AR sequence and the customization requirements for the AR elements are met.
According to the embodiment of the present application, the shooting space and shooting track can be reproduced in the virtual space according to the parameters of the real camera matrix, so that virtual objects such as AR elements can be matched with every shooting angle of the real camera matrix, solving the problem that the traditional way of implanting virtual objects into video cannot accommodate the view angle changes of multi-degree-of-freedom video. By constructing the virtual space coordinate system and determining and rendering the multi-channel information sequence of the virtual object, the virtual object can be efficiently fused with the original multi-degree-of-freedom video footage, which reduces the time-consuming cost of post-production special effects and makes the virtual object material reusable, so that one virtual object material can be used in multiple scenes.
The following introduces an object implantation device for multi-degree-of-freedom video provided in the embodiments of the present application. The device content described below may be regarded as the functional modules that an electronic device needs to be provided with in order to implement the object implantation method for multi-degree-of-freedom video provided in the embodiments of the present application, and may be read in correspondence with the above description.
Fig. 7 is a block diagram illustrating an object implantation device for multi-degree-of-freedom video provided by an embodiment of the present application. As shown in fig. 7, the apparatus may include:
a coordinate obtaining module 710, configured to obtain coordinates of a real camera matrix in a real space coordinate system, where the real camera matrix is used to shoot a multi-degree-of-freedom video;
a virtual space constructing module 720, configured to construct a virtual space coordinate system according to the coordinates of the real camera matrix in the real space coordinate system, where the virtual space coordinate system is a coordinate system corresponding to the virtual camera matrix in the virtual space;
a trajectory reconstruction module 730, configured to map trajectory information of the real camera matrix in the real space coordinate system to trajectory information of the virtual camera matrix in the virtual space coordinate system;
the associating module 740 is configured to associate the trajectory information of the virtual camera matrix in the virtual space coordinate system with a virtual object implanted with the multi-degree-of-freedom video, so as to obtain implantation information for implanting the multi-degree-of-freedom video.
In some embodiments, the real camera matrix comprises a plurality of real cameras; and the virtual space constructing module 720, configured to construct a virtual space coordinate system according to the coordinates of the real camera matrix in the real space coordinate system, is specifically configured to perform:
based on the coordinates of the first real camera and the last real camera, constructing an origin and an x-axis base vector of a virtual space coordinate system;
constructing a z-axis basis vector of a virtual space coordinate system based on coordinates of the middle real camera, the first real camera and the last real camera;
constructing a y-axis basis vector of the virtual space coordinate system based on the x-axis basis vector and the z-axis basis vector of the virtual space coordinate system;
and forming a virtual space coordinate system based on the origin, the x-axis base vector, the y-axis base vector and the z-axis base vector of the virtual space coordinate system.
In some embodiments, the virtual space construction module 720, configured to construct the origin of the virtual space coordinate system based on the coordinates of the first real camera and the last real camera, includes:
and determining the center of a coordinate connecting line of the first real camera and the last real camera as the origin of the virtual space coordinate system.
In some embodiments, the virtual space construction module 720, configured to construct the x-axis basis vectors of the virtual space coordinate system based on the coordinates of the first real camera and the last real camera, includes:
and determining an x-axis base vector of the virtual space coordinate system based on the vector regularization result from the coordinate of the first real camera to the coordinate of the last real camera.
In some embodiments, the virtual space construction module 720, configured to construct the z-axis basis vectors of the virtual space coordinate system based on the coordinates of the intermediate real camera, the first real camera, and the last real camera, includes:
and performing cross multiplication on coordinate connecting line vectors of the middle real camera and the first real camera and the last real camera respectively, and regularizing a cross multiplication result to be used as a z-axis base vector of a virtual space coordinate system.
In some embodiments, the virtual space construction module 720, configured to construct the y-axis basis vectors of the virtual space coordinate system based on the x-axis basis vectors and the z-axis basis vectors of the virtual space coordinate system, includes:
and performing cross multiplication on the x-axis base vector and the z-axis base vector of the virtual space coordinate system to obtain a y-axis base vector of the virtual space coordinate system.
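To make the construction concrete, the following is a minimal Python sketch of the basis-vector computation described above, assuming the camera positions are 3D points in the real space coordinate system and assuming the connecting-line vectors used for the cross product run from the middle camera to the first and to the last camera; the function name is illustrative only.

import numpy as np

def build_virtual_space(first_cam, middle_cam, last_cam):
    """Construct the virtual space coordinate system from three real camera positions."""
    first_cam = np.asarray(first_cam, dtype=float)
    middle_cam = np.asarray(middle_cam, dtype=float)
    last_cam = np.asarray(last_cam, dtype=float)

    # Origin: center of the line connecting the first and last cameras.
    origin = (first_cam + last_cam) / 2.0

    # x-axis: regularized (normalized) vector from the first to the last camera.
    x_axis = last_cam - first_cam
    x_axis /= np.linalg.norm(x_axis)

    # z-axis: regularized cross product of the vectors connecting the middle
    # camera with the first and the last camera.
    v_to_first = first_cam - middle_cam
    v_to_last = last_cam - middle_cam
    z_axis = np.cross(v_to_first, v_to_last)
    z_axis /= np.linalg.norm(z_axis)

    # y-axis: cross product of the x-axis and z-axis basis vectors.
    y_axis = np.cross(x_axis, z_axis)

    return origin, x_axis, y_axis, z_axis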
In some embodiments, the trajectory reconstruction module 730, configured to map the trajectory information of the real camera matrix in the real space coordinate system to the trajectory information of the virtual camera matrix in the virtual space coordinate system includes:
mapping the track information of each frame of the multi-degree-of-freedom video of the real camera matrix in a real space coordinate system into the track information of each frame of the video of the virtual camera matrix in a virtual space coordinate system;
correspondingly, the associating module 740 is configured to associate the trajectory information of the virtual camera matrix in the virtual space coordinate system with a virtual object implanted with the multiple degree of freedom video, so as to obtain implantation information for implanting the multiple degree of freedom video, where the associating module is configured to:
and associating the track information of each frame of video in the virtual space coordinate system of the virtual camera matrix with the virtual object implanted into each frame of multi-degree-of-freedom video to obtain implantation information for implanting each frame of multi-degree-of-freedom video.
In some embodiments, the trajectory reconstruction module 730, configured to map the trajectory information of each frame of the multiple degree of freedom video in the real space coordinate system of the real camera matrix into the trajectory information of each frame of the video in the virtual space coordinate system of the virtual camera matrix, includes:
mapping the translation and rotation matrix of the real camera matrix at each virtual viewpoint of each frame of the multi-degree-of-freedom video into the translation and rotation matrix of the virtual camera matrix at each virtual viewpoint of each frame of the multi-degree-of-freedom video, wherein one virtual viewpoint corresponds to the shooting angle of one real camera, and one real camera corresponds to one virtual camera in the virtual space.
In some embodiments, the trajectory reconstructing module 730, configured to map the translation and rotation matrix of the real camera matrix at each virtual viewpoint of each frame of the multiple degree of freedom video to the translation and rotation matrix of the virtual camera matrix at each virtual viewpoint of each frame of the multiple degree of freedom video, includes:
and mapping the real space coordinate system to a mapping matrix of the virtual space coordinate system, and respectively multiplying the mapping matrix by a translation and rotation matrix corresponding to the virtual viewpoint of each real camera in each frame of the multi-degree-of-freedom video to obtain the translation and rotation matrix corresponding to the virtual viewpoint of each virtual camera in each frame of the multi-degree-of-freedom video.
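As a concrete illustration, the following is a minimal Python sketch of this mapping, assuming a 4x4 mapping matrix from the real space coordinate system to the virtual space coordinate system and left-multiplication onto each per-viewpoint translation-and-rotation matrix; the data layout is illustrative only.

import numpy as np

def map_trajectory(mapping_mat, real_poses):
    """Map per-viewpoint poses of the real camera matrix into the virtual space coordinate system."""
    # real_poses: {(frame_idx, viewpoint_idx): 4x4 translation-and-rotation matrix}
    virtual_poses = {}
    for key, pose in real_poses.items():
        # Left-multiplying by the mapping matrix yields the translation and
        # rotation matrix of the corresponding virtual camera at this viewpoint.
        virtual_poses[key] = np.asarray(mapping_mat, dtype=float) @ np.asarray(pose, dtype=float)
    return virtual_poses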
In some embodiments, the associating module 740, configured to associate the trajectory information of the virtual camera matrix in the virtual space coordinate system with a virtual object implanted in the multiple degree of freedom video, to obtain the implantation information for implanting the multiple degree of freedom video, includes:
extracting at least one channel information of the virtual object, and aggregating the at least one channel information into a channel information sequence of the virtual object;
and associating the channel information sequence of the virtual object with the track information of the virtual camera matrix in a virtual space coordinate system to obtain implantation information.
As shown in fig. 7, the apparatus provided in the embodiment of the present application may further include:
an implantation module 750, configured to synchronize the internal parameters of the real camera matrix to the virtual camera matrix, and to render the virtual object in the implantation information into the multi-degree-of-freedom video by using the synchronized internal parameters of the virtual camera matrix, wherein the virtual object rendered into the multi-degree-of-freedom video is associated with the track information of the virtual camera matrix in the virtual space coordinate system carried in the implantation information.
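For illustration, the following is a minimal Python sketch of using the synchronized internal parameters together with a per-viewpoint pose of the virtual camera to project virtual object points into a frame, assuming the pose maps points from the virtual space coordinate system into the camera coordinate system; the function and parameter names are illustrative only.

import numpy as np

def project_vertices(intrinsics, virtual_pose, vertices):
    """Project virtual object vertices into a frame using the synchronized internal parameters."""
    vertices = np.asarray(vertices, dtype=float)                  # N x 3 points in virtual space
    verts_h = np.hstack([vertices, np.ones((len(vertices), 1))])  # N x 4 homogeneous points
    cam_pts = (virtual_pose @ verts_h.T)[:3]                      # 3 x N camera-space points
    pix = intrinsics @ cam_pts                                    # 3 x N homogeneous pixel coordinates
    return (pix[:2] / pix[2]).T                                   # N x 2 pixel coordinates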
In some embodiments, the channel information sequence comprises: RGB channel information, Depth channel information and Alpha channel information of the virtual object; and the implantation module 750, configured to render the virtual object in the implantation information into the multi-degree-of-freedom video by using the internal parameters synchronized to the virtual camera matrix, is specifically configured to perform:
and according to Depth channel information and Alpha channel information of the virtual object, performing Depth-of-field shielding and elimination of the RGB channel information in the multi-degree-of-freedom video by using internal parameters of virtual camera matrix synchronization.
The embodiment of the present application further provides an electronic device, and the electronic device may implement the object implantation method for multi-degree-of-freedom video provided in the embodiment of the present application by loading the above-described object implantation device for multi-degree-of-freedom video. In some embodiments, fig. 8 shows a block diagram of the electronic device, which may include: at least one processor 1, at least one communication interface 2, at least one memory 3 and at least one communication bus 4.
In the embodiment of the present application, the number of the processor 1, the communication interface 2, the memory 3, and the communication bus 4 is at least one, and the processor 1, the communication interface 2, and the memory 3 complete mutual communication through the communication bus 4.
Alternatively, the communication interface 2 may be an interface of a communication module for performing network communication.
Alternatively, the processor 1 may be a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an NPU (embedded Neural-network Processing Unit), an FPGA (Field Programmable Gate Array), a TPU (Tensor Processing Unit), an AI chip, an ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 3 may comprise a high-speed RAM memory and may also comprise a non-volatile memory, such as at least one disk memory.
The memory 3 stores one or more computer-executable instructions, and the processor 1 calls the one or more computer-executable instructions to execute the object implantation method for multiple degrees of freedom video provided by the embodiment of the application.
Embodiments of the present application also provide a storage medium that stores one or more computer-executable instructions that, when executed, implement a method for object implantation of multiple degree of freedom video as provided by embodiments of the present application.
Embodiments of the present application further provide a computer program, which when executed, implements the object implantation method for multiple degrees of freedom video provided in embodiments of the present application.
While various embodiments of the present application have been described above, the alternatives described in the various embodiments can, where no conflict arises, be combined and cross-referenced with one another to extend the variety of possible embodiments, which may likewise be considered embodiments disclosed by the present application.
Although the embodiments of the present application are disclosed above, the present application is not limited thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present disclosure, and it is intended that the scope of the present disclosure be defined by the appended claims.

Claims (12)

1. A method for implanting an object in a multi-degree-of-freedom video comprises the following steps:
acquiring coordinates of a real camera matrix in a real space coordinate system, wherein the real camera matrix is used for shooting a multi-degree-of-freedom video;
constructing a virtual space coordinate system according to the coordinates of the real camera matrix in a real space coordinate system, wherein the virtual space coordinate system is a coordinate system corresponding to the virtual camera matrix in a virtual space;
mapping the track information of the real camera matrix in the real space coordinate system into the track information of the virtual camera matrix in the virtual space coordinate system;
and associating the track information of the virtual camera matrix in the virtual space coordinate system with a virtual object implanted with the multi-degree-of-freedom video to obtain implantation information for implanting the multi-degree-of-freedom video.
2. The method of claim 1, wherein the real camera matrix comprises a plurality of real cameras, and wherein constructing a virtual space coordinate system from coordinates of the real camera matrix in a real space coordinate system comprises:
based on the coordinates of the first real camera and the last real camera, constructing an origin and an x-axis base vector of a virtual space coordinate system;
constructing a z-axis basis vector of a virtual space coordinate system based on coordinates of the middle real camera, the first real camera and the last real camera;
constructing a y-axis basis vector of the virtual space coordinate system based on the x-axis basis vector and the z-axis basis vector of the virtual space coordinate system;
and forming a virtual space coordinate system based on the origin, the x-axis base vector, the y-axis base vector and the z-axis base vector of the virtual space coordinate system.
3. The method of claim 2, wherein the constructing an origin of a virtual space coordinate system based on coordinates of the first real camera and the last real camera comprises:
determining the center of a coordinate connecting line of a first real camera and a last real camera as the origin of a virtual space coordinate system;
the constructing an x-axis basis vector of a virtual space coordinate system based on the coordinates of the first real camera and the last real camera comprises:
and determining an x-axis base vector of the virtual space coordinate system based on the vector regularization result from the coordinate of the first real camera to the coordinate of the last real camera.
4. The method of claim 2 or 3, wherein the constructing the z-axis basis vectors of the virtual space coordinate system based on the coordinates of the intermediate real camera, the first real camera, and the last real camera comprises:
performing cross multiplication on coordinate connecting line vectors of the middle real camera and the first real camera and the last real camera respectively, and regularizing a cross multiplication result to be used as a z-axis base vector of a virtual space coordinate system;
the building of the y-axis basis vector of the virtual space coordinate system based on the x-axis basis vector and the z-axis basis vector of the virtual space coordinate system comprises the following steps:
and performing cross multiplication on the x-axis base vector and the z-axis base vector of the virtual space coordinate system to obtain a y-axis base vector of the virtual space coordinate system.
5. The method according to claim 1, wherein the mapping the trajectory information of the real camera matrix in the real space coordinate system to the trajectory information of the virtual camera matrix in the virtual space coordinate system comprises:
mapping the track information of each frame of the multi-degree-of-freedom video of the real camera matrix in a real space coordinate system into the track information of each frame of the video of the virtual camera matrix in a virtual space coordinate system;
the associating the track information of the virtual camera matrix in the virtual space coordinate system with the virtual object implanted with the multi-degree-of-freedom video to obtain the implantation information for implanting the multi-degree-of-freedom video comprises:
and associating the track information of each frame of video in the virtual space coordinate system of the virtual camera matrix with the virtual object implanted into each frame of multi-degree-of-freedom video to obtain implantation information for implanting each frame of multi-degree-of-freedom video.
6. The method of claim 5, wherein the mapping the trajectory information of each frame of the multi-degree-of-freedom video of the real camera matrix in the real space coordinate system to the trajectory information of each frame of the video of the virtual camera matrix in the virtual space coordinate system comprises:
mapping the translation and rotation matrix of the real camera matrix at each virtual viewpoint of each frame of the multi-degree-of-freedom video into the translation and rotation matrix of the virtual camera matrix at each virtual viewpoint of each frame of the multi-degree-of-freedom video, wherein one virtual viewpoint corresponds to the shooting angle of one real camera, and one real camera corresponds to one virtual camera in the virtual space.
7. The method of claim 6, wherein the mapping the translation and rotation matrix of the real camera matrix at each virtual viewpoint of each frame of the multiple degree of freedom video to the translation and rotation matrix of the virtual camera matrix at each virtual viewpoint of each frame of the multiple degree of freedom video comprises:
and mapping the real space coordinate system to a mapping matrix of the virtual space coordinate system, and respectively multiplying the mapping matrix by a translation and rotation matrix corresponding to the virtual viewpoint of each real camera in each frame of the multi-degree-of-freedom video to obtain the translation and rotation matrix corresponding to the virtual viewpoint of each virtual camera in each frame of the multi-degree-of-freedom video.
8. The method according to claim 1, wherein the associating trajectory information of the virtual camera matrix in the virtual space coordinate system with a virtual object implanted with the multi-degree-of-freedom video to obtain implantation information for implanting the multi-degree-of-freedom video comprises:
extracting at least one channel information of the virtual object, and aggregating the at least one channel information into a channel information sequence of the virtual object;
and associating the channel information sequence of the virtual object with the track information of the virtual camera matrix in a virtual space coordinate system to obtain implantation information.
9. The method of claim 8, further comprising:
synchronizing the internal parameters of the real camera matrix with the virtual camera matrix;
and rendering the virtual object in the implantation information into the multi-degree-of-freedom video by utilizing the synchronous internal reference of the virtual camera matrix, wherein the virtual object rendered into the multi-degree-of-freedom video is associated with the track information of the virtual camera matrix carried in the implantation information in a virtual space coordinate system.
10. The method of claim 9, wherein the channel information sequence comprises: RGB channel information, Depth channel information and Alpha channel information of the virtual object; the rendering the virtual object in the implantation information into the multi-degree-of-freedom video by using the synchronized internal reference of the virtual camera matrix comprises the following steps:
and according to Depth channel information and Alpha channel information of the virtual object, performing Depth-of-field shielding and elimination of the RGB channel information in the multi-degree-of-freedom video by using internal parameters of virtual camera matrix synchronization.
11. An electronic device, comprising: at least one memory and at least one processor; the memory stores one or more computer-executable instructions that are invoked by the processor to perform the method for object implantation of multiple degree of freedom video according to any of claims 1-10.
12. A storage medium, wherein the storage medium stores one or more computer-executable instructions that, when executed, implement a method of object implantation for multiple degree of freedom video according to any one of claims 1-10.
CN202111312711.9A 2021-11-08 2021-11-08 Object implantation method of multi-degree-of-freedom video, electronic device and storage medium Pending CN114255258A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111312711.9A CN114255258A (en) 2021-11-08 2021-11-08 Object implantation method of multi-degree-of-freedom video, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN114255258A true CN114255258A (en) 2022-03-29

Family

ID=80790531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111312711.9A Pending CN114255258A (en) 2021-11-08 2021-11-08 Object implantation method of multi-degree-of-freedom video, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114255258A (en)

Similar Documents

Publication Publication Date Title
US11076142B2 (en) Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene
US6084979A (en) Method for creating virtual reality
US10248993B2 (en) Systems and methods for generating photo-realistic images of virtual garments overlaid on visual images of photographic subjects
TWI451358B (en) Banana codec
JP2000503177A (en) Method and apparatus for converting a 2D image into a 3D image
US20080246757A1 (en) 3D Image Generation and Display System
CN108475327A (en) three-dimensional acquisition and rendering
CN108616731A (en) 360 degree of VR panoramic images images of one kind and video Real-time Generation
CN102834849A (en) Image drawing device for drawing stereoscopic image, image drawing method, and image drawing program
EP3340619A1 (en) Geometric warping of a stereograph by positional constraints
CN112446939A (en) Three-dimensional model dynamic rendering method and device, electronic equipment and storage medium
CN105007477A (en) Method for realizing naked eye 3D display based on Unity3D engine
EP4111677B1 (en) Multi-source image data synchronization
CN107005689B (en) Digital video rendering
CN107862718A (en) 4D holographic video method for catching
Suenaga et al. A practical implementation of free viewpoint video system for soccer games
WO2023207452A1 (en) Virtual reality-based video generation method and apparatus, device, and medium
CN107995481B (en) A kind of display methods and device of mixed reality
CN113781660A (en) Method and device for rendering and processing virtual scene on line in live broadcast room
CN108043027A (en) Storage medium, electronic device, the display methods of game picture and device
CN113012299A (en) Display method and device, equipment and storage medium
Schreer et al. Advanced volumetric capture and processing
CN213461894U (en) XR-augmented reality system
CN112003999A (en) Three-dimensional virtual reality synthesis algorithm based on Unity 3D
CN112017242A (en) Display method and device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination