CN117830085A - Video conversion method and device - Google Patents


Publication number
CN117830085A
Authority
CN
China
Prior art keywords
projection
video data
data
video
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410020979.2A
Other languages
Chinese (zh)
Inventor
陈冠伟
徐锋
袁礼程
寇玉柱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Good Feeling Health Industry Group Co ltd
Original Assignee
Good Feeling Health Industry Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Good Feeling Health Industry Group Co ltd filed Critical Good Feeling Health Industry Group Co ltd
Priority to CN202410020979.2A
Publication of CN117830085A
Pending legal-status Critical Current

Landscapes

  • Studio Circuits (AREA)

Abstract

Embodiments of the present specification provide a video conversion method and device, wherein the video conversion method includes: acquiring initial video data, performing noise reduction processing on the initial video data, and determining target video data; performing projection conversion on the target video data to determine projection data; establishing a projection environment and determining projection parameters based on the projection environment and a reference image; and determining projection video data based on the projection parameters and the projection data. In this way, planar video data can be converted into video data that can be projected, reducing the cost of producing projection video.

Description

Video conversion method and device
Technical Field
The embodiment of the specification relates to the technical field of video processing, in particular to a video conversion method.
Background
Spherical scene video generation techniques aim to capture and present a complete panoramic scene, enabling viewers to experience a real panoramic scene in a virtual reality environment. The development of this technology stems from the need for a more immersive, interactive, and realistic experience, and it involves many different technical fields and key technologies.
1. Panoramic shooting technology: panoramic imaging technology is the basis for spherical scene video generation. It involves the use of special camera equipment, such as a 360 degree camera or a camera system consisting of multiple cameras, to capture the complete panoramic environment. These cameras can take images in multiple directions simultaneously and then combine them into one spherical or cylindrical panoramic image.
2. Panoramic image stitching technique: when generating spherical scene video, images captured by a plurality of cameras are stitched together to form a complete panoramic image. This involves complex image-processing algorithms and techniques, such as image calibration, image registration, and image fusion, to ensure image continuity and consistency.
3. Virtual reality technology: virtual reality technology plays an important role in spherical scene video generation. It involves creating an immersive experience through the head mounted display device that lets the viewer feel himself personally on the scene. Virtual reality technology provides a more immersive viewing experience for spherical scene video and can be used in combination with other technologies such as position tracking and gesture recognition technologies.
4. Rendering and processing techniques: in order to enable smooth playback of spherical scene video, high performance rendering and processing techniques are required. These techniques involve real-time processing and rendering of large amounts of image and video data in order to play panoramic video at high quality on different devices.
5. Streaming media and distribution techniques: once spherical scene video is generated, streaming media and distribution techniques are required to transmit the video to the viewer's device. This may involve the use of specialized streaming servers, codecs and network protocols to ensure that video can be played with high quality and low latency.
A common approach to spherical scene video acquisition is to capture panoramic scenes using a panoramic camera or a dedicated 360-degree camera. These cameras are typically equipped with multiple lenses to capture images in all directions. The main problems include:
1. Expensive equipment: professional-grade panoramic cameras are often expensive.
2. Complicated post-processing: color correction, de-distortion, stabilization, and similar steps are needed to ensure image quality, which requires a high level of image-processing skill.
3. Complex post editing: conventional video editing tools are typically not suitable for panoramic video, so specialized panoramic video editing software is required.
Thus, a better solution is needed.
Disclosure of Invention
In view of this, the present embodiments provide a video conversion method. One or more embodiments of the present specification relate to a video conversion apparatus, a computing device, a computer-readable storage medium, and a computer program that solve the technical drawbacks of the related art.
According to a first aspect of embodiments of the present disclosure, there is provided a video conversion method, including:
acquiring initial video data, performing noise reduction processing on the initial video data, and determining target video data;
performing projection conversion on target video data to determine projection data;
establishing a projection environment, and determining projection parameters based on the projection environment and a reference image;
projection video data is determined based on the projection parameters and the projection data.
In one possible implementation, acquiring initial video data, performing noise reduction processing on the initial video data, and determining target video data includes the following steps:
acquiring initial video data, wherein the initial video data is planar video material;
and determining a noise reduction algorithm, performing noise reduction processing on the initial video data based on the noise reduction algorithm, and determining target video data.
In one possible implementation, performing projection conversion on the target video data to determine projection data includes:
determining a projection conversion formula;
projection data is determined based on the target video data and the projection conversion formula.
In one possible implementation, the projection environment comprises a three-dimensional spherical environment;
accordingly, establishing a projection environment, determining projection parameters based on the projection environment and a reference image, includes:
establishing a three-dimensional spherical environment and determining a reference image;
the reference image is applied as an environment map of a three-dimensional spherical environment, and projection parameters are determined.
In one possible implementation, determining projection video data based on projection parameters and projection data includes:
rendering the projection data based on the projection parameters to obtain video frame data;
and optimizing the video frame data to determine projection video data.
In one possible implementation, optimizing video frame data to determine projected video data includes:
determining a picture quality parameter, a contrast parameter and a color saturation parameter according to a projection environment;
and optimizing the video frame data based on the picture quality parameter, the contrast parameter and the color saturation parameter to determine projection video data.
In one possible implementation, optimizing video frame data to determine projected video data includes:
determining elements to be added; the elements to be added comprise special effect elements, text elements and image elements;
rendering the element to be added into the video frame data, and determining projection video data.
According to a second aspect of embodiments of the present specification, there is provided a video conversion apparatus comprising:
the data acquisition module is configured to acquire initial video data, perform noise reduction processing on the initial video data and determine target video data;
the projection conversion module is configured to carry out projection conversion on the target video data and determine projection data;
a parameter determination module configured to establish a projection environment, determine projection parameters based on the projection environment and a reference image;
a video generation module configured to determine projection video data based on the projection parameters and the projection data.
According to a third aspect of embodiments of the present specification, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions that, when executed by the processor, perform the steps of the video conversion method described above.
According to a fourth aspect of embodiments of the present specification, there is provided a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the steps of the video conversion method described above.
According to a fifth aspect of embodiments of the present specification, there is provided a computer program, wherein the computer program, when executed in a computer, causes the computer to perform the steps of the video conversion method described above.
Embodiments of the present specification provide a video conversion method and device, wherein the video conversion method includes: acquiring initial video data, performing noise reduction processing on the initial video data, and determining target video data; performing projection conversion on the target video data to determine projection data; establishing a projection environment and determining projection parameters based on the projection environment and a reference image; and determining projection video data based on the projection parameters and the projection data. In this way, planar video data can be converted into video data that can be projected, reducing the cost of producing projection video.
Drawings
Fig. 1 is a schematic view of a video conversion method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a video conversion method according to one embodiment of the present disclosure;
fig. 3 is a schematic spherical diagram of a video conversion method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a video conversion device according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of a computing device provided in one embodiment of the present description.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present specification. However, this specification can be embodied in many forms other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of this specification; this specification is therefore not limited by the specific implementations disclosed below.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first may also be referred to as a second and, similarly, a second may also be referred to as a first without departing from the scope of one or more embodiments of the present description. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining," depending on the context.
In the present specification, a video conversion method is provided, and the present specification relates to a video conversion apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
Referring to fig. 1, fig. 1 is a schematic view illustrating a video conversion method according to an embodiment of the present disclosure.
In the application scenario of fig. 1, the computing device 101 may acquire initial video data, perform noise reduction processing on the initial video data, and determine the target video data 102. The computing device 101 may then perform a projective transformation on the target video data 102 to determine projection data 103. Thereafter, computing device 101 may establish a projection environment, determine projection parameters 104 based on the projection environment and the reference image. Finally, computing device 101 may determine projected video data from the projection parameters 104 and projection data 103, as indicated by reference numeral 105.
The computing device 101 may be hardware or software. When the computing device 101 is hardware, it may be implemented as a distributed cluster of multiple servers or terminal devices, or as a single server or single terminal device. When the computing device 101 is embodied as software, it may be installed in the hardware devices listed above. It may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
Referring to fig. 2, fig. 2 shows a flowchart of a video conversion method according to an embodiment of the present disclosure, which specifically includes the following steps.
Step 201: and acquiring initial video data, performing noise reduction processing on the initial video data, and determining target video data.
In one possible implementation, acquiring initial video data, performing noise reduction processing on the initial video data, and determining target video data includes: acquiring initial video data, wherein the initial video data is planar video material; and determining a noise reduction algorithm, performing noise reduction processing on the initial video data based on the noise reduction algorithm, and determining target video data.
In practice, the planar video material is acquired, and it should be of sufficiently high resolution and quality to support subsequent image processing. If there are any noisy or unclear portions in the planar video, they should be noise-reduced or sharpened at this stage. The quality of the source material largely determines the quality of the generated video, so in principle, the higher the video quality the better.
For example, video shot on a mobile phone serves as the initial video data; noise reduction processing is performed on the initial video data using an artificial-intelligence algorithm, and the target video data is determined.
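The text leaves the concrete noise reduction algorithm open (an artificial-intelligence denoiser is only an example). As a minimal classical sketch, a temporal moving-average filter averages each pixel over neighboring frames; this stands in for whatever denoiser is actually chosen:

```python
def temporal_denoise(frames, window=3):
    """Reduce noise by averaging each pixel over a sliding temporal
    window of neighboring frames (a simple classical filter standing
    in for the unspecified denoising algorithm).

    frames: list of 2D grids (lists of rows) of gray values, all the
    same size. Returns denoised frames of the same shape.
    """
    n = len(frames)
    half = window // 2
    out = []
    for t in range(n):
        # Clip the window at the start and end of the sequence.
        lo, hi = max(0, t - half), min(n, t + half + 1)
        h, w = len(frames[t]), len(frames[t][0])
        out.append([[sum(frames[k][y][x] for k in range(lo, hi)) / (hi - lo)
                     for x in range(w)] for y in range(h)])
    return out
```

A wider `window` suppresses more noise but blurs fast motion, which is why production systems prefer motion-compensated or learned denoisers.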
Step 202: and performing projection conversion on the target video data to determine projection data.
In one possible implementation, performing projection conversion on the target video data to determine projection data includes: determining a projection conversion formula; and determining projection data based on the target video data and the projection conversion formula.
In practical applications, equirectangular projection (ERP) can be used for the three-dimensional projection of planar video. ERP maps meridians (lines of longitude) to vertical lines of constant spacing and parallels (lines of latitude) to horizontal lines of constant spacing. ERP is currently the most widely used projection for 360-degree video, and most captured sequences are stored in ERP format.
The specific technical principle is as follows:
On the sphere: λ is the longitude, ϕ is the latitude, ϕ1 is the standard parallel, and λ0 is the central meridian; on the plane: x is the horizontal coordinate and y is the vertical coordinate.
Referring to fig. 3, the sphere-to-plane projection relationship is:
x = (λ − λ0) * cos(ϕ1)
y = ϕ − ϕ1
The plane-to-sphere projection relationship, i.e., the 2D-3D conversion formula, is:
ϕ = (u − 0.5) * (2 * π)
θ = (0.5 − v) * π
where ϕ is the longitude and θ is the latitude. (u, v) are the coordinates on the 2D plane, calculated from the sampling position:
u = (m + 0.5) / W, 0 ≤ m < W
v = (n + 0.5) / H, 0 ≤ n < H
where (m, n) is the sampling position and W and H are the width and height of the original image.
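The 2D-3D conversion formula above can be sketched directly in Python. The final mapping to a 3D point on the unit sphere uses an assumed axis convention (X toward longitude 0, Y up), which the text does not fix:

```python
import math

def erp_uv_to_sphere(m, n, W, H):
    """Map an ERP pixel (m, n) of a W x H image to (longitude, latitude)
    and to a 3D unit vector, per the 2D-3D conversion formula:
      u = (m + 0.5) / W,   v = (n + 0.5) / H
      phi   = (u - 0.5) * 2*pi   (longitude)
      theta = (0.5 - v) * pi     (latitude)
    """
    u = (m + 0.5) / W
    v = (n + 0.5) / H
    phi = (u - 0.5) * 2.0 * math.pi
    theta = (0.5 - v) * math.pi
    # Point on the unit sphere for that (longitude, latitude);
    # the axis convention here is an illustrative assumption.
    x = math.cos(theta) * math.cos(phi)
    y = math.sin(theta)
    z = math.cos(theta) * math.sin(phi)
    return (phi, theta), (x, y, z)
```

For example, the top-left pixel of a 4x2 image lands at longitude −0.75π and latitude 0.25π, and every returned vector has unit length by construction.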
Step 203: a projection environment is established, and projection parameters are determined based on the projection environment and a reference image.
In one possible implementation, the projection environment comprises a three-dimensional spherical environment; accordingly, establishing a projection environment, determining projection parameters based on the projection environment and a reference image, includes: establishing a three-dimensional spherical environment and determining a reference image; the reference image is applied as an environment map of a three-dimensional spherical environment, and projection parameters are determined.
In practical applications, a virtual three-dimensional spherical environment is created to carry the ERP projection. The reference image is imported and applied as an environment map; the reference image can be a frame extracted from the target video data or a custom image. The size and position of the environment sphere must match the image; that is, the projection parameters are determined so that the planar image appears correctly on the sphere.
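The text does not enumerate the concrete projection parameters. As an illustrative sketch only, assuming they reduce to the sphere's size and the angular resolution implied by the ERP environment map (all field names here are hypothetical):

```python
def projection_parameters(img_w, img_h, radius=10.0):
    """Derive illustrative projection parameters for the environment
    sphere from the reference image's dimensions.

    A full ERP environment map covers 360 deg of longitude by 180 deg
    of latitude, so it should have a 2:1 aspect ratio; the angular
    resolution per pixel follows from the image size.
    """
    if img_w != 2 * img_h:
        raise ValueError("ERP environment map should be 2:1 (W = 2H)")
    return {
        "radius": radius,                 # size of the environment sphere
        "deg_per_px_lon": 360.0 / img_w,  # longitude covered per pixel
        "deg_per_px_lat": 180.0 / img_h,  # latitude covered per pixel
        "center": (0.0, 0.0, 0.0),        # sphere centered on the viewer
    }
```

The 2:1 check is one way to "ensure the size and position of the environment sphere match the image"; a real engine would also expose camera placement and orientation.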
Step 204: projection video data is determined based on the projection parameters and the projection data.
In one possible implementation, determining projection video data based on projection parameters and projection data includes: rendering the projection data based on the projection parameters to obtain video frame data; and optimizing the video frame data to determine projection video data.
Specifically, the optimizing processing is performed on the video frame data to determine projection video data, including: determining a picture quality parameter, a contrast parameter and a color saturation parameter according to a projection environment; and optimizing the video frame data based on the picture quality parameter, the contrast parameter and the color saturation parameter to determine projection video data.
In practical applications, the 3D environment and camera settings are rendered into a sequence of video frames using a 3D rendering engine. Color correction and post-processing are then performed on the spherical scene video to ensure consistency in picture quality, contrast, color saturation, and so on.
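The contrast and saturation optimization can be illustrated per pixel, assuming simple linear adjustments (a real pipeline processes whole frames, often in a dedicated color space):

```python
def adjust_pixel(rgb, contrast=1.0, saturation=1.0):
    """Apply simple linear contrast and saturation adjustments to one
    RGB pixel with channel values in 0..255.

    contrast scales each channel around mid-gray (128); saturation
    scales each channel's distance from the pixel's luma (Rec. 601
    weights). A per-pixel stand-in for the color-correction step.
    """
    def clamp(x):
        return max(0.0, min(255.0, x))

    # Contrast: scale around mid-gray.
    r, g, b = (clamp((c - 128.0) * contrast + 128.0) for c in rgb)
    # Saturation: move channels toward or away from the luma value.
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    return tuple(clamp(luma + (c - luma) * saturation) for c in (r, g, b))
```

Setting `saturation=0.0` collapses the pixel to its gray value, while `contrast=1.0, saturation=1.0` leaves it unchanged.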
Further, optimizing the video frame data to determine projection video data includes: determining elements to be added, where the elements to be added include special-effect elements, text elements, and image elements; and rendering the elements to be added into the video frame data to determine the projection video data.
In practical applications, special effects, text or other graphic elements may also be added to enhance the visual appeal of the video.
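Rendering added elements into the frame data can be illustrated with per-pixel alpha blending; treating compositing as a simple alpha blend is an assumption, since the text does not specify the operation:

```python
def blend_overlay(base, overlay, alpha):
    """Alpha-blend one pixel of an added element (text, special effect,
    or image) over one pixel of a video frame.

    base, overlay: (r, g, b) tuples; alpha: overlay opacity in [0, 1].
    A per-pixel illustration; a real renderer composites whole layers.
    """
    return tuple(o * alpha + b * (1.0 - alpha) for b, o in zip(base, overlay))
```

With `alpha=0.0` the frame is unchanged; with `alpha=1.0` the element fully replaces the underlying pixel.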
Embodiments of the present specification provide a video conversion method and device, wherein the video conversion method includes: acquiring initial video data, performing noise reduction processing on the initial video data, and determining target video data; performing projection conversion on the target video data to determine projection data; establishing a projection environment and determining projection parameters based on the projection environment and a reference image; and determining projection video data based on the projection parameters and the projection data. In this way, planar video data can be converted into video data that can be projected, reducing the cost of producing projection video.
Corresponding to the above method embodiments, the present disclosure further provides an embodiment of a video conversion device, and fig. 4 shows a schematic structural diagram of a video conversion device provided in one embodiment of the present disclosure. As shown in fig. 4, the apparatus includes:
the data acquisition module 401 is configured to acquire initial video data, perform noise reduction processing on the initial video data, and determine target video data;
a projection conversion module 402 configured to perform projection conversion on the target video data, determining projection data;
a parameter determination module 403 configured to establish a projection environment, determine projection parameters based on the projection environment and a reference image;
the video generation module 404 is configured to determine projection video data based on the projection parameters and the projection data.
In one possible implementation, the data acquisition module 401 is further configured to:
acquiring initial video data; wherein, the video data is a planar video material;
and determining a noise reduction algorithm, performing noise reduction processing on the initial video data based on the noise reduction algorithm, and determining target video data.
In one possible implementation, the projective transformation module 402 is further configured to:
determining a projection conversion formula;
projection data is determined based on the target video data and the projection conversion formula.
In one possible implementation, the parameter determination module 403 is further configured to:
the projection environment comprises a three-dimensional spherical environment;
accordingly, establishing a projection environment, determining projection parameters based on the projection environment and a reference image, includes:
establishing a three-dimensional spherical environment and determining a reference image;
the reference image is applied as an environment map of a three-dimensional spherical environment, and projection parameters are determined.
In one possible implementation, the video generation module 404 is further configured to:
rendering the projection data based on the projection parameters to obtain video frame data;
and optimizing the video frame data to determine projection video data.
In one possible implementation, the video generation module 404 is further configured to:
determining a picture quality parameter, a contrast parameter and a color saturation parameter according to a projection environment;
and optimizing the video frame data based on the picture quality parameter, the contrast parameter and the color saturation parameter to determine projection video data.
In one possible implementation, the video generation module 404 is further configured to:
determining elements to be added; the elements to be added comprise special effect elements, text elements and image elements;
rendering the element to be added into the video frame data, and determining projection video data.
The above is a schematic solution of the video conversion device of this embodiment. It should be noted that the technical solution of the video conversion device and the technical solution of the video conversion method belong to the same concept; for details of the technical solution of the video conversion device that are not described in detail, reference may be made to the description of the technical solution of the video conversion method.
Fig. 5 illustrates a block diagram of a computing device 500 provided in accordance with one embodiment of the present description. The components of the computing device 500 include, but are not limited to, a memory 510 and a processor 520. Processor 520 is coupled to memory 510 via bus 530 and database 550 is used to hold data.
Computing device 500 also includes access device 540, access device 540 enabling computing device 500 to communicate via one or more networks 560. Examples of such networks include public switched telephone networks (PSTN, public Switched Telephone Network), local area networks (LAN, local Area Network), wide area networks (WAN, wide Area Network), personal area networks (PAN, personal Area Network), or combinations of communication networks such as the internet. The access device 540 may include one or more of any type of network interface, wired or wireless (e.g., network interface card (NIC, network interface controller)), such as an IEEE802.11 wireless local area network (WLAN, wireless Local Area Network) wireless interface, a worldwide interoperability for microwave access (Wi-MAX, worldwide Interoperability for Microwave Access) interface, an ethernet interface, a universal serial bus (USB, universal Serial Bus) interface, a cellular network interface, a bluetooth interface, near field communication (NFC, near Field Communication).
In one embodiment of the present description, the above-described components of computing device 500, as well as other components not shown in FIG. 5, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device shown in FIG. 5 is for exemplary purposes only and is not intended to limit the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 500 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or personal computer (PC, personal Computer). Computing device 500 may also be a mobile or stationary server.
Wherein the processor 520 is configured to execute computer-executable instructions that, when executed by the processor, perform the steps of the video conversion method described above. The foregoing is a schematic illustration of a computing device of this embodiment. It should be noted that, the technical solution of the computing device and the technical solution of the video conversion method belong to the same concept, and details of the technical solution of the computing device, which are not described in detail, can be referred to the description of the technical solution of the video conversion method.
An embodiment of the present disclosure also provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the video conversion method described above.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the video conversion method described above belong to the same concept, and details of the technical solution of the storage medium which are not described in detail can be referred to the description of the technical solution of the video conversion method described above.
An embodiment of the present disclosure further provides a computer program, where the computer program, when executed in a computer, causes the computer to perform the steps of the video conversion method described above.
The above is an exemplary version of a computer program of this embodiment. It should be noted that the technical solution of the computer program and the technical solution of the video conversion method belong to the same concept; for details of the technical solution of the computer program that are not described in detail, reference may be made to the description of the technical solution of the video conversion method.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should understand that the embodiments are not limited by the order of actions described, since according to the embodiments of the present disclosure some steps may be performed in another order or simultaneously. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by every embodiment.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are merely intended to help clarify the present specification. The alternative embodiments are not exhaustive and do not limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the teaching of the embodiments. The embodiments were chosen and described in order to best explain their principles and practical application, thereby enabling others skilled in the art to understand and utilize the invention. This specification is to be limited only by the claims and their full scope and equivalents.

Claims (10)

1. A video conversion method, comprising:
acquiring initial video data, performing noise reduction processing on the initial video data, and determining target video data;
performing projection conversion on the target video data to determine projection data;
establishing a projection environment, and determining projection parameters based on the projection environment and a reference image;
and determining projection video data based on the projection parameters and the projection data.
2. The method of claim 1, wherein the acquiring initial video data, performing noise reduction processing on the initial video data, and determining target video data comprises:
acquiring initial video data, wherein the initial video data is planar video material;
and determining a noise reduction algorithm, and performing noise reduction processing on the initial video data based on the noise reduction algorithm to determine target video data.
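Claim 2 leaves the noise reduction algorithm unspecified. As one hedged illustration only (not the algorithm claimed in this patent), a simple temporal average over neighbouring frames suppresses zero-mean sensor noise; the function name and `radius` parameter below are assumptions made for this sketch:

```python
import numpy as np

def temporal_denoise(frames, radius=1):
    """Average each frame with its neighbours within `radius`;
    independent zero-mean noise shrinks roughly by 1/sqrt(window size)."""
    frames = np.asarray(frames, dtype=np.float32)   # shape (T, H, W, C)
    out = np.empty_like(frames)
    for t in range(len(frames)):
        lo, hi = max(0, t - radius), min(len(frames), t + radius + 1)
        out[t] = frames[lo:hi].mean(axis=0)         # window mean around frame t
    return out.astype(np.uint8)
```

A production pipeline would more likely use a motion-compensated or non-local-means denoiser, but the window-average form shows where the noise reduction step sits relative to the rest of the method.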
3. The method of claim 1, wherein said projectively converting said target video data to determine projected data comprises:
determining a projection conversion formula;
and determining projection data based on the target video data and the projection conversion formula.
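The projection conversion formula of claim 3 is not given explicitly. A common choice for making a planar frame projectable on a sphere is the inverse gnomonic mapping into an equirectangular panorama; the sketch below assumes that formula (the function name, output size, and field of view are illustrative, not taken from the patent):

```python
import numpy as np

def plane_to_equirect(src, out_h=64, out_w=128, fov_deg=90.0):
    """Map a planar frame onto the front of a unit sphere and write it into
    an equirectangular panorama; pixels outside the plane's field of view
    are left black."""
    H, W = src.shape[:2]
    lon = (np.arange(out_w) / out_w - 0.5) * 2 * np.pi   # longitude per column
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi       # latitude per row
    lon, lat = np.meshgrid(lon, lat)
    # unit view ray for every panorama pixel
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    f = 0.5 / np.tan(np.radians(fov_deg) / 2)            # focal length of the image plane
    with np.errstate(divide="ignore", invalid="ignore"):
        u = 0.5 + f * x / z                              # normalised plane coordinates
        v = 0.5 - f * y / z
    inside = (z > 1e-6) & (u >= 0) & (u <= 1) & (v >= 0) & (v <= 1)
    out = np.zeros((out_h, out_w) + src.shape[2:], dtype=src.dtype)
    px = np.clip((u[inside] * (W - 1)).round().astype(int), 0, W - 1)
    py = np.clip((v[inside] * (H - 1)).round().astype(int), 0, H - 1)
    out[inside] = src[py, px]
    return out
```

Nearest-neighbour sampling keeps the sketch short; bilinear interpolation would be the usual refinement.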
4. The method of claim 1, wherein the projection environment comprises a three-dimensional spherical environment;
accordingly, the establishing a projection environment, determining projection parameters based on the projection environment and a reference image, includes:
establishing the three-dimensional spherical environment and determining the reference image;
and applying the reference image as an environment map of the three-dimensional spherical environment, and determining projection parameters.
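To illustrate how a reference image applied as an environment map of the three-dimensional spherical environment is consulted, the sketch below (function name and layout assumed, not specified by the patent) looks up an equirectangular map for a given 3-D view direction:

```python
import numpy as np

def sample_env_map(env, direction):
    """Look up an equirectangular environment map for a 3-D view direction.
    `env` is an H x W x C image conceptually wrapped around the inside
    of a sphere."""
    x, y, z = direction / np.linalg.norm(direction)
    lon = np.arctan2(x, z)            # longitude in [-pi, pi]
    lat = np.arcsin(y)                # latitude in [-pi/2, pi/2]
    H, W = env.shape[:2]
    col = int((lon / (2 * np.pi) + 0.5) * (W - 1))
    row = int((0.5 - lat / np.pi) * (H - 1))
    return env[row, col]
```

The same direction-to-texel mapping is what a renderer evaluates per pixel when the reference image serves as the spherical environment's backdrop.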
5. The method of claim 1, wherein the determining projection video data based on the projection parameters and the projection data comprises:
rendering the projection data based on the projection parameters to obtain video frame data;
and optimizing the video frame data to determine projection video data.
6. The method of claim 5, wherein the optimizing the video frame data to determine projection video data comprises:
determining a picture quality parameter, a contrast parameter and a color saturation parameter according to the projection environment;
and optimizing the video frame data based on the picture quality parameter, the contrast parameter and the color saturation parameter to determine projection video data.
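As a hedged sketch of the optimization step in claim 6 (the patent does not specify the adjustment formulas; the scaling scheme and parameter values below are assumptions), contrast can be scaled about the frame mean and saturation about each pixel's grey value:

```python
import numpy as np

def optimize_frame(frame, contrast=1.2, saturation=1.1):
    """Scale contrast about the frame mean, then scale saturation about the
    per-pixel grey value; clip the result back to the 8-bit range."""
    f = frame.astype(np.float32)
    mean = f.mean()
    f = (f - mean) * contrast + mean                 # contrast adjustment
    grey = f.mean(axis=-1, keepdims=True)
    f = grey + (f - grey) * saturation               # saturation adjustment
    return np.clip(f, 0, 255).astype(np.uint8)
```

In practice the picture quality, contrast, and saturation parameters would be chosen from the projection environment (screen gain, ambient light) rather than fixed constants.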
7. The method of claim 6, wherein the optimizing the video frame data to determine projection video data further comprises:
determining elements to be added; wherein the elements to be added comprise special effect elements, text elements and image elements;
and rendering the element to be added into the video frame data to determine projection video data.
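Rendering an element to be added into the video frame data, as in claim 7, is essentially alpha compositing. The sketch below is an illustrative stand-in (names and the scalar alpha-mask representation are assumptions) showing how a special effect, text, or image element would be blended at an offset:

```python
import numpy as np

def overlay_element(frame, element, alpha, x, y):
    """Alpha-blend an element (colour image plus a scalar alpha mask in
    [0, 1]) onto `frame` at the top-left offset (x, y), in place."""
    h, w = element.shape[:2]
    region = frame[y:y + h, x:x + w].astype(np.float32)
    a = alpha[..., None].astype(np.float32)          # broadcast over channels
    frame[y:y + h, x:x + w] = (a * element + (1 - a) * region).astype(np.uint8)
    return frame
```

Text and image elements reduce to the same operation once rasterised to a colour patch with an alpha mask.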
8. A video conversion apparatus, comprising:
the data acquisition module is configured to acquire initial video data, perform noise reduction processing on the initial video data and determine target video data;
the projection conversion module is configured to perform projection conversion on the target video data and determine projection data;
a parameter determination module configured to establish a projection environment, determine projection parameters based on the projection environment and a reference image;
a video generation module configured to determine projection video data based on the projection parameters and the projection data.
9. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer executable instructions, and the processor is configured to execute the computer executable instructions, which when executed by the processor, implement the steps of the video conversion method of any one of claims 1 to 7.
10. A computer readable storage medium storing computer executable instructions which when executed by a processor implement the steps of the video conversion method of any one of claims 1 to 7.
CN202410020979.2A 2024-01-05 2024-01-05 Video conversion method and device Pending CN117830085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410020979.2A CN117830085A (en) 2024-01-05 2024-01-05 Video conversion method and device


Publications (1)

Publication Number Publication Date
CN117830085A true CN117830085A (en) 2024-04-05

Family

ID=90518862


Country Status (1)

Country Link
CN (1) CN117830085A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180150994A1 (en) * 2016-11-30 2018-05-31 Adcor Magnet Systems, Llc System, method, and non-transitory computer-readable storage media for generating 3-dimensional video images
US10419716B1 (en) * 2017-06-28 2019-09-17 Vulcan Technologies Llc Ad-hoc dynamic capture of an immersive virtual reality experience
CN111640181A (en) * 2020-05-14 2020-09-08 佳都新太科技股份有限公司 Interactive video projection method, device, equipment and storage medium
CN112565736A (en) * 2020-11-25 2021-03-26 聚好看科技股份有限公司 Panoramic video display method and display equipment
CN115209125A (en) * 2021-04-08 2022-10-18 北京兰亭数字科技有限公司 Method for converting planar media resource into virtual reality panoramic media resource
CN116760965A (en) * 2023-08-14 2023-09-15 腾讯科技(深圳)有限公司 Panoramic video encoding method, device, computer equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang Zhuandi et al.: "Automatic 2D-to-3D video conversion system based on OMAP3530", Journal of Data Acquisition and Processing, no. 06, 15 November 2012 (2012-11-15), pages 670-676 *

Similar Documents

Publication Publication Date Title
US11076142B2 (en) Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene
US10958834B2 (en) Method to capture, store, distribute, share, stream and display panoramic image or video
US11343591B2 (en) Method and system of presenting moving images or videos corresponding to still images
CN109587556B (en) Video processing method, video playing method, device, equipment and storage medium
CN115690382B (en) Training method of deep learning model, and method and device for generating panorama
WO2024022065A1 (en) Virtual expression generation method and apparatus, and electronic device and storage medium
CN112017222A (en) Video panorama stitching and three-dimensional fusion method and device
CN113688907B (en) A model training and video processing method, which comprises the following steps, apparatus, device, and storage medium
CN112927362A (en) Map reconstruction method and device, computer readable medium and electronic device
US11812154B2 (en) Method, apparatus and system for video processing
CN111985281A (en) Image generation model generation method and device and image generation method and device
CN107197135B (en) Video generation method and video generation device
CN112766215A (en) Face fusion method and device, electronic equipment and storage medium
CN114531553B (en) Method, device, electronic equipment and storage medium for generating special effect video
CN112017242B (en) Display method and device, equipment and storage medium
CN111818265B (en) Interaction method and device based on augmented reality model, electronic equipment and medium
CN112604279A (en) Special effect display method and device
CN109816791B (en) Method and apparatus for generating information
CN110084306B (en) Method and apparatus for generating dynamic image
CN116708862A (en) Virtual background generation method for live broadcasting room, computer equipment and storage medium
CN115002442B (en) Image display method and device, electronic equipment and storage medium
CN117830085A (en) Video conversion method and device
CN111314627B (en) Method and apparatus for processing video frames
CN113537194A (en) Illumination estimation method, illumination estimation device, storage medium, and electronic apparatus
KR102561903B1 (en) AI-based XR content service method using cloud server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination