CN117834800A - Video data processing method and device, electronic equipment and storage medium - Google Patents

Video data processing method and device, electronic equipment and storage medium

Info

Publication number
CN117834800A
CN117834800A (application CN202311630046.7A)
Authority
CN
China
Prior art keywords
video data
data
conference
video
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311630046.7A
Other languages
Chinese (zh)
Inventor
王亚军
沈世国
李阔
杨春晖
Current Assignee
Hainan Qiantang Shilian Information Technology Co ltd
Original Assignee
Hainan Qiantang Shilian Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hainan Qiantang Shilian Information Technology Co ltd filed Critical Hainan Qiantang Shilian Information Technology Co ltd
Priority to CN202311630046.7A priority Critical patent/CN117834800A/en
Publication of CN117834800A publication Critical patent/CN117834800A/en
Pending legal-status Critical Current


Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

An embodiment of the invention provides a video data processing method and apparatus, an electronic device, and a storage medium. The method comprises: receiving initial video data transmitted during a conference; and, when a first function of the conference is triggered, generating panoramic video data based on the initial video data and displaying the panoramic video data on the conference terminal, so that the panoramic video data is presented as three-dimensional video data on a virtual reality device. According to the embodiment of the invention, panoramic video data is fitted from the initial video data during the conference and displayed in virtual reality, creating an immersive atmosphere for the user.

Description

Video data processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of video data processing technologies, and in particular, to a video data processing method and apparatus, an electronic device, and a storage medium.
Background
In existing conference systems, a two-dimensional video image can be displayed on the conference terminal during a conference, but such a two-dimensional planar image can hardly create an immersive sensation.
Disclosure of Invention
In view of the foregoing, a video data processing method and apparatus, an electronic device, and a storage medium are provided to overcome, or at least partially solve, the foregoing problems, comprising:
a video data processing method applied to a conference terminal, the method comprising:
receiving initial video data transmitted in the conference process;
and generating panoramic video data based on the initial video data when the first function of the conference is triggered, and displaying the panoramic video data on the conference terminal so that the panoramic video data presents three-dimensional video data on a virtual reality device.
Optionally, the generating panoramic video data based on the initial video data includes:
determining first video frame data from the initial video data;
fitting, according to the first video frame data, second video frame data whose spatial shooting angle during the conference differs from that of the first video frame data;
panoramic video data is generated based on the first video frame data and the second video frame data.
Optionally, the fitting, according to the first video frame data, of second video frame data whose spatial shooting angle during the conference differs from that of the first video frame data comprises:
acquiring first texture data of first video frame data;
generating second texture data at a preset angle based on the first texture data fitting;
and generating second video frame data at the preset angle by mapping according to the second texture data.
Optionally, the fitting, according to the first video frame data, of second video frame data whose spatial shooting angle during the conference differs from that of the first video frame data comprises:
acquiring first illumination data of first video frame data;
fitting second illumination data at the preset angle based on the first illumination data;
and fitting second video frame data at the preset angle based on the first video frame data and the second illumination data.
Optionally, the method further comprises:
when the conference terminal is detected to rotate, acquiring the rotation angle of the conference terminal;
and adjusting the panoramic video data currently displayed by the conference terminal according to the rotation angle.
Optionally, the method further comprises:
and when the panoramic video data currently displayed by the conference terminal is adjusted according to the rotation angle, reducing the frame rate of the panoramic video data on the conference terminal.
Optionally, the method further comprises:
and displaying the initial video data on the conference terminal when the conference is detected to close the first function.
A video data processing apparatus for use in a conference terminal, the apparatus comprising:
the initial video data receiving module is used for receiving initial video data transmitted in the conference process;
and the panoramic video data display module is used for generating panoramic video data based on the initial video data when the first function of the conference is triggered, and displaying the panoramic video data on the conference terminal so that the panoramic video data presents three-dimensional video data on virtual reality equipment.
An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the video data processing method described above.
A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the video data processing method described above.
The embodiment of the invention has the following advantages:
the embodiment of the invention receives the initial video data transmitted in the conference process; when the first function of the conference is triggered, panoramic video data is generated based on the initial video data, and the panoramic video data is displayed on the conference terminal, so that the panoramic video data presents three-dimensional video data on the virtual reality device, the panoramic video data is fitted based on the initial video data in the conference process, virtual reality display is carried out, and an immersive atmosphere is created for a user.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings needed in the description are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flowchart illustrating the steps of a video data processing method according to an embodiment of the present invention;
FIG. 2a is a flowchart illustrating the steps of another video data processing method according to an embodiment of the present invention;
FIG. 2b is a schematic diagram of a video rendering process according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the steps of another video data processing method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In practical applications, a conference terminal may receive conference video data transmitted in a conference and display the conference video data on the conference terminal, where the conference video data is usually two-dimensional image data and cannot create an immersive feeling.
According to the embodiment of the invention, a virtual reality function can be provided during the conference: when the function is turned on, the two-dimensional video data is fitted to generate panoramic data, so that a user can watch three-dimensional video data through virtual reality equipment during the conference without changing the original structure of the conference, thereby realizing a VR application in the conference.
Referring to fig. 1, a flowchart illustrating the steps of a video data processing method according to an embodiment of the present invention is shown. The method is applied to a conference terminal and may specifically include the following steps:
step 101, receiving initial video data transmitted in a conference process;
the conference in the embodiment of the invention can be a video networking conference, an internet conference or a conference combining video networking and internet. The conference terminal may be a video-on-network conference terminal or a participant terminal in the internet.
In the conference process, the conference management server can be used for data transmission among a plurality of conference terminals, the conference terminal serving as a speaking terminal can upload initial video data acquired in the conference process to the conference management server, and the conference management server can forward corresponding video data to the conference terminals according to main and auxiliary stream settings of each conference terminal and display the video data according to video display settings of the conference terminals, wherein the video display equipment can comprise, but is not limited to, main and auxiliary stream settings, volume settings, resolution settings and the like.
The conference terminal can receive the initial video data transmitted in real time in the conference process, and process and display the initial video data.
Step 102, when the first function of the conference is detected to be triggered, panoramic video data is generated based on the initial video data, and the panoramic video data is displayed on the conference terminal, so that the panoramic video data presents three-dimensional video data on the virtual reality device.
The first function may be a virtual reality function. When the first function of the conference is triggered, the conference terminal may fit panoramic video data from the initial video data actually transmitted in the conference. The panoramic video data may be video data obtained based on a stereoscopic space, so that a user wearing VR equipment can watch the panoramic video data displayed on the conference terminal as a three-dimensional video.
In conventional practice, panoramic video data must be captured omnidirectionally (360 degrees) with a 3D camera. In the embodiment of the invention, however, when the virtual reality function is turned on, initial video data collected by an ordinary video collection device can be fitted and encoded to generate panoramic video data, creating an immersive atmosphere for the participant user.
In one example, a conference room may be created with the virtual reality function of the conference pre-configured, so that once the conference starts, the virtual reality function is applied and the received initial video data is fitted to obtain panoramic video data. In another example, during the conference, the virtual reality function may be turned on in response to a control operation by the conference administrator, after which the initial video data is fitted.
In an embodiment of the invention, when the rotation of the conference terminal is detected, the rotation angle of the conference terminal is obtained; and adjusting the panoramic video data currently displayed by the conference terminal according to the rotation angle.
In practical applications, a sensor (such as a gyroscope) for detecting the rotation angle of the terminal may be installed in the conference terminal, so that when the user rotates the conference terminal in space, its rotation angle can be acquired through the sensor. Since the panoramic video data is image data in a three-dimensional space, the panoramic video data displayed by the conference terminal can be adjusted to follow the rotation angle as the terminal assumes different orientations.
Specifically, the panoramic video data can be treated as a three-dimensional image fixed in space and the conference terminal as a virtual camera. When the conference terminal rotates by a given angle, its position relative to the three-dimensional image changes, and the panoramic video data to be displayed after rotation is determined from the rotation angle.
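The angle-to-view adjustment described above can be sketched as follows. This is a minimal illustration, assuming an equirectangular panorama and a 90-degree horizontal field of view; the function name and layout are assumptions for illustration, not taken from the embodiment:

```python
def visible_window(pano_width: int, yaw_deg: float, fov_deg: float = 90.0):
    """Return the (start, end) pixel columns of an equirectangular panorama
    that the rotated terminal should display. The panorama spans 360 degrees
    horizontally, so one degree maps to pano_width / 360 columns; the window
    wraps around the seam via the modulo."""
    cols_per_deg = pano_width / 360.0
    center = (yaw_deg % 360.0) * cols_per_deg
    half = fov_deg * cols_per_deg / 2.0
    start = int(center - half) % pano_width
    end = int(center + half) % pano_width
    return start, end
```

For a 3600-column panorama, rotating the terminal by 90 degrees shifts the visible window by a quarter of the image width.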
In an embodiment of the present invention, when panoramic video data currently displayed by a conference terminal is adjusted according to a rotation angle, a frame rate of the panoramic video data on the conference terminal is reduced.
In practical applications, the frame rate can be reduced while the conference terminal is rotating, so that the conference terminal can output images faster during the rotation.
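The frame-rate reduction during rotation amounts to a simple mode switch; a minimal sketch, in which the base and reduced rates (30 and 15 fps) are assumed values, not taken from the embodiment:

```python
def target_fps(rotating: bool, base_fps: int = 30, reduced_fps: int = 15) -> int:
    """Lower the output frame rate while the terminal is rotating, so each
    re-rendered panoramic view is delivered to the screen sooner."""
    return reduced_fps if rotating else base_fps
```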
In one example, panoramic video data may be generated according to a preset resolution fit, where the preset resolution may include, but is not limited to, 800 x 600, 1600 x 1200.
In practical applications, the virtual reality device may calculate the distance between the panoramic video data and the user's two eyes, and output two images of different sizes according to the panoramic video data displayed by the conference terminal and this distance, thereby achieving different stereoscopic effects.
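The two differently sized eye images could be derived along these lines. The linear scaling model, the function name, and the default inter-pupillary distance are illustrative assumptions, not the device's actual computation:

```python
def eye_scales(distance_mm: float, ipd_mm: float = 63.0):
    """Return (left, right) scale factors for the two eye images: content at
    a shorter viewing distance appears larger, and a small asymmetry
    proportional to the inter-pupillary distance (ipd) yields the
    stereoscopic offset between the two images."""
    base = 1000.0 / max(distance_mm, 1.0)          # nearer => larger image
    offset = ipd_mm / (2.0 * max(distance_mm, 1.0))  # per-eye asymmetry
    return base * (1.0 + offset), base * (1.0 - offset)
```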
In the embodiment of the invention, initial video data transmitted during the conference is received. When the first function of the conference is triggered, panoramic video data is generated based on the initial video data and displayed on the conference terminal, so that the panoramic video data is presented as three-dimensional video data on the virtual reality device. Panoramic video data is thus fitted from the initial video data during the conference and displayed in virtual reality, creating an immersive atmosphere for the user.
Referring to fig. 2a, a flowchart illustrating the steps of another video data processing method according to an embodiment of the present invention is shown. The method is applied to a conference terminal and may specifically include the following steps:
step 201, receiving initial video data transmitted in a conference process;
the conference in the embodiment of the invention can be a video networking conference, an internet conference or a conference combining video networking and internet. The conference terminal may be a video-on-network conference terminal or a participant terminal in the internet.
In the conference process, the conference management server can be used for data transmission among a plurality of conference terminals, the conference terminal serving as a speaking terminal can upload initial video data acquired in the conference process to the conference management server, and the conference management server can forward corresponding video data to the conference terminals according to main and auxiliary stream settings of each conference terminal and display the video data according to video display settings of the conference terminals, wherein the video display equipment can comprise, but is not limited to, main and auxiliary stream settings, volume settings, resolution settings and the like.
The conference terminal can receive the initial video data transmitted in real time in the conference process, and processes and displays the initial video data.
Step 202, when the first function of the conference is detected to be triggered, panoramic video data is generated based on the initial video data, and the panoramic video data is displayed on the conference terminal, so that the panoramic video data presents three-dimensional video data on the virtual reality device.
The first function may be a virtual reality function. When the first function of the conference is triggered, the conference terminal may fit panoramic video data from the initial video data actually transmitted in the conference. The panoramic video data may be video data obtained based on a stereoscopic space, so that a user wearing VR equipment can watch the panoramic video data displayed on the conference terminal as a three-dimensional video.
In conventional practice, panoramic video data must be captured omnidirectionally (360 degrees) with a 3D camera. In the embodiment of the invention, however, when the virtual reality function is turned on, initial video data collected by an ordinary video collection device can be fitted and encoded to generate panoramic video data, creating an immersive atmosphere for the participant.
In one example, a conference room may be created with the virtual reality function of the conference pre-configured, so that once the conference starts, the virtual reality function is applied and the received initial video data is fitted to obtain panoramic video data. In another example, during the conference, the virtual reality function may be turned on in response to a control operation by the conference administrator, after which the initial video data is fitted.
In an embodiment of the invention, when the rotation of the conference terminal is detected, the rotation angle of the conference terminal is obtained; and adjusting the panoramic video data currently displayed by the conference terminal according to the rotation angle.
In practical applications, a sensor (such as a gyroscope) for detecting the rotation angle of the terminal may be installed in the conference terminal, so that when the user rotates the conference terminal in space, its rotation angle can be acquired through the sensor. Since the panoramic video data is image data in a three-dimensional space, the panoramic video data displayed by the conference terminal can be adjusted to follow the rotation angle as the terminal assumes different orientations.
Specifically, the panoramic video data can be treated as a three-dimensional image fixed in space and the conference terminal as a virtual camera. When the conference terminal rotates by a given angle, its position relative to the three-dimensional image changes, and the panoramic video data to be displayed after rotation is determined from the rotation angle.
In an embodiment of the present invention, when panoramic video data currently displayed by a conference terminal is adjusted according to a rotation angle, a frame rate of the panoramic video data on the conference terminal is reduced.
In practical applications, the frame rate can be reduced while the conference terminal is rotating, so that the conference terminal can output images faster during the rotation.
In one example, panoramic video data may be generated according to a preset resolution fit, where the preset resolution may include, but is not limited to, 800 x 600, 1600 x 1200.
In practical applications, the virtual reality device may calculate the distance between the panoramic video data and the user's two eyes, and output two images of different sizes according to the panoramic video data displayed by the conference terminal and this distance, thereby achieving different stereoscopic effects.
And step 203, displaying the initial video data on the conference terminal when the conference closing the first function is detected.
In the embodiment of the invention, when the first function of the conference is triggered, the conference terminal can fit the initial video data into panoramic video data so that the user can watch three-dimensional video data with virtual reality equipment; when the conference closes the virtual reality function, the user can watch the initial video data directly.
As shown in fig. 2b, in an embodiment of the present invention, a user starts the conference APP, creates a video conference, and joins the conference. The user may choose to turn VR on or off: if VR is turned on, a VR panoramic video is rendered and the user wears professional VR equipment to watch the conference; if VR is turned off, the normal video stream is rendered.
In the embodiment of the invention, initial video data transmitted during the conference is received. When the first function of the conference is detected to be triggered, panoramic video data is generated based on the initial video data and displayed on the conference terminal, so that the panoramic video data is presented as three-dimensional video data on the virtual reality device; when the first function of the conference is detected to be closed, the initial video data is displayed on the conference terminal. In this way, triggering the first function during the conference fits panoramic video data from the initial video data for virtual reality display, creating an immersive atmosphere for the user, while closing the first function displays the initial video data, thereby allowing free switching between the initial video data and three-dimensional video data on the conference terminal.
Referring to fig. 3, a flowchart illustrating the steps of another video data processing method according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 301, receiving initial video data transmitted in a conference process;
the conference in the embodiment of the invention can be a video networking conference, an internet conference or a conference combining video networking and internet. The conference terminal may be a video-on-network conference terminal or a participant terminal in the internet.
In the conference process, the conference management server can be used for data transmission among a plurality of conference terminals, the conference terminal serving as a speaking terminal can upload initial video data acquired in the conference process to the conference management server, and the conference management server can forward corresponding video data to the conference terminals according to main and auxiliary stream settings of each conference terminal and display the video data according to video display settings of the conference terminals, wherein the video display equipment can comprise, but is not limited to, main and auxiliary stream settings, volume settings, resolution settings and the like.
The conference terminal can receive the initial video data transmitted in real time in the conference process, and processes and displays the initial video data.
Step 302, determining first video frame data from initial video data when the first function of the conference is detected to be triggered;
Upon detecting that the first function of the conference is triggered, panoramic video data may be fitted frame by frame from the initial video data received in real time.
Step 303, generating second video frame data with different space shooting angles with the first video frame data in the conference process according to the fitting of the first video frame data;
The first video frame data extracted from the initial video data corresponds to an initial spatial shooting angle; fitting is then performed at different spatial shooting angles based on the first video frame data, yielding a plurality of pieces of second video frame data.
In one example, the first video frame data may be divided into four regions: an upper-left region, a lower-left region, an upper-right region, and a lower-right region. A spatial shooting angle is set in each region, and second video frame data is generated by fitting according to that angle; for example, the center of the upper-left region is taken as the shooting focus, and a shooting angle is set to fit the upper-left region.
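The four-region division in this example can be sketched as follows. Modeling the frame as a 2-D list of pixel rows is an illustrative simplification, not the embodiment's actual data layout:

```python
def split_quadrants(frame):
    """Split a frame (a 2-D list of pixel rows) into its upper-left,
    upper-right, lower-left and lower-right regions, each of which would
    then be fitted with its own spatial shooting angle."""
    h, w = len(frame), len(frame[0])
    mh, mw = h // 2, w // 2
    ul = [row[:mw] for row in frame[:mh]]
    ur = [row[mw:] for row in frame[:mh]]
    ll = [row[:mw] for row in frame[mh:]]
    lr = [row[mw:] for row in frame[mh:]]
    return ul, ur, ll, lr
```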
Step 304, panoramic video data is generated based on the first video frame data and the second video frame data.
After the first video frame data and the second video frame data are obtained, the first video frame data and the second video frame data can be spliced, so that panoramic video data are generated.
The image stitching process comprises: identifying image features in the first video frame data and the second video frame data, where the image features may include image content features and image edge features; performing image alignment, transformation, and fusion according to the image features to generate a large panoramic video frame; and arranging the panoramic video frames in time order to obtain the panoramic video data.
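The fusion step can be sketched in a much-simplified form. Real stitching would locate the overlap via feature matching (e.g. ORB or SIFT descriptors); here the overlap width is assumed known, which is a simplification for illustration:

```python
def stitch_pair(left, right, overlap: int):
    """Fuse two frames (2-D lists of grayscale pixel rows) whose trailing
    and leading `overlap` columns show the same content, averaging the
    overlapping columns to blend the seam into one wider panoramic frame."""
    out = []
    for lrow, rrow in zip(left, right):
        blended = [(a + b) / 2 for a, b in zip(lrow[-overlap:], rrow[:overlap])]
        out.append(lrow[:-overlap] + blended + rrow[overlap:])
    return out
```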
In an embodiment of the present invention, the fitting, according to the first video frame data, of second video frame data whose spatial shooting angle during the conference differs from that of the first video frame data comprises:
acquiring first texture data of the first video frame data; generating second texture data at a preset angle by fitting based on the first texture data; and generating second video frame data at the preset angle by mapping according to the second texture data.
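The texture re-projection to a preset angle might be sketched as a rotation of texture coordinates about the texture centre, after which the original texture is sampled at the new coordinates ("mapping"). The function name and the rotation-about-centre model are assumptions for illustration, not the embodiment's actual fitting:

```python
import math

def rotate_uv(u: float, v: float, angle_deg: float):
    """Rotate a (u, v) texture coordinate about the texture centre (0.5, 0.5)
    by the preset angle; sampling the first frame's texture at the returned
    coordinate yields one pixel of the second frame."""
    a = math.radians(angle_deg)
    cu, cv = u - 0.5, v - 0.5
    return (0.5 + cu * math.cos(a) - cv * math.sin(a),
            0.5 + cu * math.sin(a) + cv * math.cos(a))
```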
In another embodiment of the present invention, the fitting, according to the first video frame data, of second video frame data whose spatial shooting angle during the conference differs from that of the first video frame data comprises: acquiring first illumination data of the first video frame data; fitting second illumination data at a preset angle based on the first illumination data; and fitting second video frame data at the preset angle based on the first video frame data and the second illumination data.
In practical applications, the first illumination data of the first video frame data can be acquired; by analyzing the first illumination data, the position of the light source in space can be determined, and from it the illumination distribution of each area in space, yielding the second illumination data at the preset angle. When the second video frame data at the preset angle is fitted, an adjustment is made according to the second illumination data.
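Estimating the light-source position from the first illumination data can be sketched as a brightness-weighted centroid. This crude model and the function name are illustrative assumptions, not the embodiment's actual illumination analysis:

```python
def light_centroid(frame):
    """Return the brightness-weighted centroid (row, col) of a frame
    (a 2-D list of brightness values): a rough estimate of where the
    light source projects into the image, from which the illumination
    distribution at other angles could then be fitted."""
    total = sum(sum(row) for row in frame)
    r = sum(i * sum(row) for i, row in enumerate(frame)) / total
    c = sum(j * v for row in frame for j, v in enumerate(row)) / total
    return r, c
```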
In one example, accurate second video frame data may be fitted in combination with angle calculations, texture mapping, and illumination calculations.
Step 305, displaying the panoramic video data on the conference terminal such that the panoramic video data presents three-dimensional video data on the virtual reality device.
In the embodiment of the invention, initial video data transmitted during the conference is received; upon detecting that the first function of the conference is triggered, first video frame data is determined from the initial video data; second video frame data whose spatial shooting angle during the conference differs from that of the first video frame data is generated by fitting; and panoramic video data is generated based on the first video frame data and the second video frame data and displayed on the conference terminal, so that the panoramic video data is presented as three-dimensional video data on the virtual reality device. Thus, when the virtual reality function is triggered during the conference, video images at different angles are fitted from one video frame and spliced into a panoramic image, so that three-dimensional video data can be displayed on the virtual reality device, creating an immersive atmosphere for the user.
It should be noted that, for simplicity of description, the method embodiments are depicted as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 4, a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention is shown. The apparatus is applied to a conference terminal and may specifically include the following modules:
an initial video data receiving module 401, configured to receive initial video data transmitted during a conference;
and the panoramic video data display module 402 is configured to generate panoramic video data based on the initial video data when it is detected that the first function of the conference is triggered, and display the panoramic video data on the conference terminal, so that the panoramic video data presents three-dimensional video data on a virtual reality device.
In an embodiment of the present invention, the panoramic video data presentation module 402 may include:
a first video frame determination sub-module for determining first video frame data from the initial video data;
the second video frame fitting sub-module is used for fitting, according to the first video frame data, second video frame data whose spatial shooting angle during the conference differs from that of the first video frame data;
and the panoramic video generation sub-module is used for generating panoramic video data based on the first video frame data and the second video frame data.
In one embodiment of the present invention, the second video frame fitting submodule includes:
a first texture data determining unit for acquiring first texture data of first video frame data;
a second texture data determining unit, configured to generate second texture data at a preset angle based on the first texture data fitting;
and the second video frame data mapping unit is used for generating second video frame data of the preset angle according to the second texture data mapping.
In one embodiment of the present invention, the second video frame fitting submodule includes:
a first illumination data acquisition unit, configured to acquire first illumination data of the first video frame data;
a second illumination data fitting unit, configured to fit second illumination data at the preset angle based on the first illumination data;
and a second video frame data fitting unit, configured to fit second video frame data at the preset angle based on the first video frame data and the second illumination data.
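The illumination path can be sketched the same way. Here illumination is taken as per-pixel luminance (Rec. 601 weights) and the angular fit is a cosine falloff; both choices and all function names are illustrative assumptions, not the patent's method:

```python
import numpy as np

def extract_illumination(frame: np.ndarray) -> np.ndarray:
    """First illumination data: per-pixel luminance via Rec. 601 weights."""
    return frame.astype(np.float32) @ np.array([0.299, 0.587, 0.114], dtype=np.float32)

def fit_illumination(illum: np.ndarray, preset_angle_deg: float) -> np.ndarray:
    """Stand-in fit: attenuate luminance with angular distance from the source view."""
    falloff = max(np.cos(np.radians(preset_angle_deg)), 0.1)
    return illum * falloff

def relight(frame: np.ndarray, illum: np.ndarray, fitted: np.ndarray) -> np.ndarray:
    """Second frame data: rescale the first frame by the fitted/original luminance ratio."""
    ratio = (fitted + 1e-6) / (illum + 1e-6)
    out = np.round(frame.astype(np.float32) * ratio[..., None])
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.full((2, 2, 3), 100, dtype=np.uint8)
illum = extract_illumination(frame)
second = relight(frame, illum, fit_illumination(illum, 60.0))  # roughly half brightness
```

The ratio form keeps chroma roughly intact while only the brightness is refitted for the new angle.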
In an embodiment of the invention, the apparatus further comprises:
a rotation angle determining module, configured to acquire the rotation angle of the conference terminal when rotation of the conference terminal is detected;
and a panoramic video adjusting module, configured to adjust the panoramic video data currently displayed on the conference terminal according to the rotation angle.
In an embodiment of the invention, the apparatus further comprises:
a frame rate adjusting module, configured to reduce the frame rate of the panoramic video data on the conference terminal while the panoramic video data currently displayed on the conference terminal is being adjusted according to the rotation angle.
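The rotation and frame-rate behaviour of these modules can be sketched as follows; the window mapping and the 15 fps floor are illustrative assumptions, as the patent does not fix concrete values:

```python
def viewport_left_column(rotation_deg: float, pano_width_px: int) -> int:
    """Map the conference terminal's rotation angle to the left edge of the
    panorama window currently shown, wrapping over the full 360 degrees."""
    return int((rotation_deg % 360.0) / 360.0 * pano_width_px)

def target_frame_rate(base_fps: int, rotating: bool, reduced_fps: int = 15) -> int:
    """While the displayed panorama is being adjusted for rotation, cap the
    frame rate to cut decode/render load; restore the base rate afterwards."""
    return min(base_fps, reduced_fps) if rotating else base_fps
```

For example, a 90-degree rotation over a 3600-pixel panorama moves the window's left edge to column 900, and a 30 fps stream would be throttled to 15 fps for the duration of the rotation.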
In an embodiment of the invention, the apparatus further comprises:
an initial video data display module, configured to display the initial video data on the conference terminal when it is detected that the first function of the conference has been closed.
In the embodiment of the invention, initial video data transmitted during the conference is received; when it is detected that the first function (virtual reality) of the conference is enabled, panoramic video data is generated based on the initial video data and displayed on the conference terminal, so that the panoramic video data is presented as three-dimensional video data on a virtual reality device. In this way, panoramic video data is fitted from the initial video data during the conference and displayed in virtual reality, creating an immersive atmosphere for the user.
An embodiment of the present invention also provides an electronic device, which may include a processor, a memory, and a computer program stored on the memory and capable of running on the processor; when executed by the processor, the computer program implements the video data processing method described above.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the video data processing method described above.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to one another.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal device comprising the element.
The foregoing has described in detail a video data processing method and apparatus, an electronic device, and a storage medium provided by the present invention. Specific examples have been presented herein to illustrate the principles and embodiments of the present invention; the above examples are intended only to aid in understanding the method and core concept of the present invention. Meanwhile, those skilled in the art will make variations to the specific embodiments and application scope in accordance with the ideas of the present invention; in view of the above, the content of this description should not be construed as limiting the present invention.

Claims (10)

1. A video data processing method, characterized by being applied to a conference terminal, the method comprising:
receiving initial video data transmitted in the conference process;
and when it is detected that a first function of the conference is triggered, generating panoramic video data based on the initial video data, and displaying the panoramic video data on the conference terminal so that the panoramic video data is presented as three-dimensional video data on a virtual reality device.
2. The method of claim 1, wherein generating panoramic video data based on the initial video data comprises:
determining first video frame data from the initial video data;
fitting, according to the first video frame data, second video frame data whose spatial shooting angle during the conference differs from that of the first video frame data;
panoramic video data is generated based on the first video frame data and the second video frame data.
3. The method of claim 2, wherein fitting, according to the first video frame data, second video frame data at a spatial shooting angle different from that of the first video frame data during the conference comprises:
acquiring first texture data of the first video frame data;
fitting second texture data at a preset angle based on the first texture data;
and generating second video frame data at the preset angle by mapping the second texture data.
4. The method according to claim 3, wherein fitting, according to the first video frame data, second video frame data at a spatial shooting angle different from that of the first video frame data during the conference comprises:
acquiring first illumination data of the first video frame data;
fitting second illumination data at the preset angle based on the first illumination data;
and fitting second video frame data at the preset angle based on the first video frame data and the second illumination data.
5. The method according to any one of claims 1 to 4, further comprising:
when the conference terminal is detected to rotate, acquiring the rotation angle of the conference terminal;
and adjusting the panoramic video data currently displayed by the conference terminal according to the rotation angle.
6. The method as recited in claim 5, further comprising:
and reducing the frame rate of the panoramic video data on the conference terminal while the panoramic video data currently displayed by the conference terminal is being adjusted according to the rotation angle.
7. The method according to any one of claims 1 to 5, further comprising:
and displaying the initial video data on the conference terminal when the conference is detected to close the first function.
8. A video data processing apparatus for use in a conference terminal, the apparatus comprising:
the initial video data receiving module is used for receiving initial video data transmitted in the conference process;
and a panoramic video data display module, configured to generate panoramic video data based on the initial video data when it is detected that the first function of the conference is triggered, and to display the panoramic video data on the conference terminal so that the panoramic video data is presented as three-dimensional video data on a virtual reality device.
9. An electronic device comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, which computer program, when executed by the processor, implements the video data processing method according to any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the video data processing method according to any of claims 1 to 7.
CN202311630046.7A 2023-11-30 2023-11-30 Video data processing method and device, electronic equipment and storage medium Pending CN117834800A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311630046.7A CN117834800A (en) 2023-11-30 2023-11-30 Video data processing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117834800A true CN117834800A (en) 2024-04-05

Family

ID=90510533




Legal Events

Date Code Title Description
PB01 Publication