CN112995134B - Three-dimensional video streaming media transmission method and visualization method - Google Patents

Three-dimensional video streaming media transmission method and visualization method

Info

Publication number: CN112995134B
Authority: CN (China)
Prior art keywords: data, elementary stream, media, frame, encoding
Legal status: Active
Application number: CN202110147341.1A
Other languages: Chinese (zh)
Other versions: CN112995134A (en)
Inventors: 范冲, 王凤瑞, 房骥, 莫东霖, 郭士祥
Current Assignee: Central South University
Original Assignee: Central South University
Application filed by Central South University
Priority to CN202110147341.1A
Publication of CN112995134A
Application granted
Publication of CN112995134B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/10 Architectures or entities
    • H04L65/1059 End-user terminal functionalities specially adapted for real-time communication
    • H04L65/1063 Application servers providing network services
    • H04L65/60 Network streaming of media packets
    • H04L65/65 Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]

Abstract

The invention provides a three-dimensional video streaming media transmission method comprising the following steps. S1, based on a single-board computer running a Linux operating system, collect the raw data of the video sensor and convert the raw data into a media elementary stream. S2, based on the single-board computer running the Linux operating system, compress and encode the media elementary stream to obtain an encoded media elementary stream. S3, based on the web server Nginx deployed on the Linux operating system, send the encoded media elementary stream to the client terminal over the network transmission protocol RTMP; and/or, based on the streaming media server rtsp-simple-server deployed on the Linux operating system, send the encoded media elementary stream to the client terminal over the network transmission protocol RTSP. The method can effectively reduce resource occupation, lower latency, and improve transmission efficiency. The invention further provides a three-dimensional video streaming media visualization method.

Description

Three-dimensional video streaming media transmission method and visualization method
Technical Field
The invention relates to the technical field of streaming media playback, and in particular to a three-dimensional video streaming media transmission method and a visualization method.
Background
With the wide application of medium- and close-range remote sensing platforms, typified by unmanned aerial vehicles and mobile surveying vehicles, remote sensing data has developed from static images into dynamic image streams, data expression has expanded from two-dimensional into three-dimensional space, and data transmission has moved from after-the-fact return to real-time return. For this new type of three-dimensional image data, current data return and visualization schemes suffer from problems such as limited data throughput, high latency, poor display performance, and low generality.
Traditional two-dimensional video streaming transmission mainly divides the video into a number of temporal segments stored on a server in multiple resolution versions, with data delivered to the client according to the network environment and the actual request; traditional video therefore often adopts a strategy of buffered playback combined with lowering the overall resolution of the video.
Among existing three-dimensional image transmission schemes, most 360-degree video approaches are based on user viewpoint prediction and transmit only the data corresponding to the user's current and predicted viewpoints rather than the complete panorama; the quality of viewpoint prediction is therefore critical, and viewpoint-prediction-based 360-degree video transmission generally suffers from problems such as prediction deviation. Another class of schemes, pyramid-based full-view transmission, encodes the user's field of view at high quality while lowering the image quality of the other regions; this places great pressure on the server side, and since the data held for all view angles far exceed the original data, the server-side data volume becomes excessive.
In conclusion, the traditional two-dimensional video transmission method mainly follows a three-step acquire, buffer, and transmit pattern whose core idea of 'delayed processing' fundamentally contradicts real-time visualization. Viewpoint-prediction-based 360-degree video transmission generally suffers from prediction deviation, while full-view panoramic transmission wastes resources on the one hand and burdens the server with massive data on the other. Furthermore, a massive image stream of 30 frames per second places a very heavy load on the client terminal's CPU; the traditional CPU-based fast visualization approach is not suited to this new type of dynamic three-dimensional image data, which is expressed in three-dimensional space and returned in real time, and cannot keep up with it.
Therefore, a method for transmitting three-dimensional video streaming media and a method for visualizing the same are needed.
Disclosure of Invention
Technical problem to be solved
In view of the above problems in the art, the present invention aims to at least partially address them. Accordingly, one object of the present invention is to provide a three-dimensional video streaming media transmission method that can effectively reduce resource occupation, lower latency, and improve transmission efficiency.
A second object of the present invention is to provide a three-dimensional video streaming media visualization method.
(II) technical scheme
In order to achieve the above object, an aspect of the present invention provides a three-dimensional video streaming media transmission method, including:
S1, based on a single-board computer running a Linux operating system, a data acquisition module acquires the raw data of the video sensor and converts the raw data into a media elementary stream.
S2, based on the single-board computer running the Linux operating system, an encoding module compresses and encodes the media elementary stream to obtain an encoded media elementary stream.
S3, based on the web server Nginx deployed on the Linux operating system, the encoded media elementary stream is sent to the client terminal over the network transmission protocol RTMP; and/or, based on the streaming media server rtsp-simple-server deployed on the Linux operating system, the encoded media elementary stream is sent to the client terminal over the network transmission protocol RTSP.
Further, the video sensor includes an ordinary surveillance camera, a panoramic camera, and a depth camera.
Further, in S1, the data acquisition module acquires the raw data of a UVC video sensor through the universal driver video4linux, a UVC video sensor being a video sensor whose data transmission conforms to the USB Video Class (UVC) protocol.
The data acquisition module acquires data from the Intel RealSense series D435i depth sensor through a dedicated driver.
Further, in S1, converting the raw data into a media elementary stream includes: separating the raw data by frame, and placing the raw frame data and the corresponding frame attribute description data into a first-in first-out frame buffer queue.
Further, in S2, the encoding module compression-encoding the media elementary stream includes: the encoding module cyclically taking raw frame data out of the first-in first-out frame buffer queue and feeding it, frame by frame, into an encoding and packaging pipeline based on the open-source computer program FFmpeg for compression encoding.
Further, in S2, the encoding module compression-encoding the media elementary stream also includes: the encoding module cyclically taking frame attribute description data out of the first-in first-out frame buffer queue, encoding it into JSON format, and transmitting it synchronously in the form of subtitles.
Further, encoding the frame attribute description data into JSON format includes: for frame attribute description data in binary form, encoding it into an ASCII string with a base64 encoder, and then encoding the ASCII string into JSON format.
Further, the data acquisition module and the encoding module run asynchronously with respect to each other.
Another aspect of the present invention provides a three-dimensional video streaming media visualization method, including:
A1, the client terminal receives the encoded media elementary stream transmitted according to the above method and decodes the encoded media elementary stream into a bitmap sequence.
A2, based on a GPU, the bitmap sequence is processed pixel by pixel with a displacement mapping method to recover the true position of each pixel of the mapped object in three-dimensional space, enhancing the visualization effect and yielding a preliminary visualization result; texture correction is then applied to the preliminary visualization result to obtain the final visualization result, which is output to the user's screen.
Further, in A1, decoding the encoded media elementary stream into a bitmap sequence includes: based on the Unity platform, decoding the encoded media elementary stream into a bitmap sequence by calling the open-source computer program FFmpeg through a C-language middleware.
(III) advantageous effects
The invention has the beneficial effects that:
1. The three-dimensional video streaming media transmission method provided by the invention uses a low-cost small single-board computer as the hardware platform, receives the raw data of each sensor, encodes it in real time by means of FFmpeg, and pushes a real-time remote sensing data stream outward. The invention realizes real-time network transmission of the video stream on an embedded platform; general streaming media systems are built on PCs, and the embedded approach is more flexible by comparison. Experiments show that real-time encoding, asynchronous operation of the data acquisition module and the encoding module, the light weight of the embedded transmission system, and synchronous transmission of metadata together reduce latency to a certain extent, reduce resource occupation, and improve transmission efficiency.
2. The three-dimensional video streaming media visualization method provided by the invention calls FFmpeg within the Unity platform through a C middleware, with FFmpeg's custom data structures isolated inside the intermediate layer and the receiving/decoding module decoupled from the control module; the task of three-dimensional data visualization is migrated from the CPU to the GPU, whose judicious use greatly improves operating efficiency; and displacement mapping and texture correction are realized on the GPU, ensuring stability when processing three-dimensional vertex primitives. Latency and resource occupation are thereby reduced, and operating efficiency is improved.
Drawings
The invention is described with the aid of the following figures:
fig. 1 is a schematic diagram of hardware components for implementing a three-dimensional video streaming media transmission method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a three-dimensional video streaming media transmission method according to an embodiment of the invention;
FIG. 3 is a diagram illustrating the display of metadata in a generic player according to one embodiment of the present invention;
FIG. 4 is a flow chart of a method for visualizing three-dimensional video streaming according to an embodiment of the invention;
FIG. 5 is a schematic diagram illustrating real-time display effects of panoramic video in Unity according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating the effect of visualizing the depth image in Unity according to an embodiment of the present invention.
[ description of reference ]
11: a surveillance camera; 12: a panoramic camera; 13: a depth camera;
2: an embedded development board;
3: an IP link;
4: and a client terminal.
Detailed Description
To better explain the present invention and facilitate understanding, the present invention is described in detail below by way of specific embodiments with reference to the accompanying drawings.
The three-dimensional video streaming media transmission method provided by the invention is implemented on the hardware composition shown in fig. 1. As shown in fig. 2, the method includes the following steps:
Step S1, based on a single-board computer running a Linux operating system, the data acquisition module acquires the raw data of the video sensor and converts the raw data into a media elementary stream.
Specifically, the single-board computer running the Linux operating system is a Raspberry Pi 4B small single-board computer with 2 GB of memory, whose full-load power is only 7.6 W. The video sensor includes an ordinary surveillance camera, a panoramic camera, and a depth camera.
Specifically, the data acquisition module acquires the raw data of a UVC video sensor through the universal driver video4linux; a UVC video sensor is a video sensor whose data transmission conforms to the USB Video Class (UVC) protocol. Data transmission of an ordinary USB-interface video sensor generally follows the UVC protocol, and the Linux operating system provides a generic driver, video4linux (v4l), for UVC devices. The data acquisition module acquires data from the Intel RealSense series D435i depth sensor through a dedicated driver, because existing depth sensors each ship with their own dedicated driver and SDK.
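For illustration, the sketch below shows one way a capture module might open a UVC device through the v4l2 interface. The device path, pixel format, and buffer count are assumptions rather than details fixed by the patent, and error handling is abbreviated.

```c
/* Minimal v4l2 capture setup sketch. Before reading frame data, each buffer
 * would be mapped with VIDIOC_QUERYBUF + mmap; frames are then fetched with
 * VIDIOC_DQBUF, copied into the FIFO frame buffer queue, and re-queued
 * with VIDIOC_QBUF. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int open_uvc_capture(const char *dev /* e.g. "/dev/video0" */, int w, int h)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0) return -1;

    struct v4l2_format fmt = {0};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = w;
    fmt.fmt.pix.height = h;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;   /* common uvc raw format */
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) return -1;

    struct v4l2_requestbuffers req = {0};
    req.count = 4;                                 /* illustrative count */
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) return -1;

    for (unsigned i = 0; i < req.count; i++) {     /* hand buffers to driver */
        struct v4l2_buffer buf = {0};
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) return -1;
    }

    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) return -1;
    return fd;
}
```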
Specifically, converting the raw data into a media elementary stream includes: separating the raw data by frame, and placing the raw frame data and the corresponding frame attribute description data into a first-in first-out frame buffer queue.
Step S2, based on the single-board computer running the Linux operating system, the encoding module compresses and encodes the media elementary stream to obtain an encoded media elementary stream.
Specifically, the encoding module compression-encoding the media elementary stream includes: the encoding module cyclically takes raw frame data out of the first-in first-out frame buffer queue and feeds it, frame by frame, into an encoding and packaging pipeline based on the open-source computer program FFmpeg for compression encoding.
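For illustration, the sketch below configures an H.264 encoder with FFmpeg's libavcodec and compresses a single raw frame pulled from the queue. The codec name "libx264" and the "zerolatency" tuning are assumptions chosen for a low-delay live stream, not settings stated in the patent, and error handling is abbreviated.

```c
/* Condensed sketch of the encode stage of the FFmpeg-based pipeline. */
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

AVCodecContext *make_h264_encoder(int w, int h, int fps)
{
    const AVCodec *codec = avcodec_find_encoder_by_name("libx264");
    AVCodecContext *enc = avcodec_alloc_context3(codec);
    enc->width     = w;
    enc->height    = h;
    enc->time_base = (AVRational){1, fps};
    enc->pix_fmt   = AV_PIX_FMT_YUV420P;
    av_opt_set(enc->priv_data, "tune", "zerolatency", 0); /* favor low delay */
    avcodec_open2(enc, codec, NULL);
    return enc;
}

/* Encode one frame; the resulting packet then goes to the packaging
 * (muxing) stage of the pipeline. Returns 0 on success. */
int encode_one(AVCodecContext *enc, AVFrame *raw, AVPacket *pkt)
{
    if (avcodec_send_frame(enc, raw) < 0)
        return -1;
    return avcodec_receive_packet(enc, pkt); /* AVERROR(EAGAIN): needs input */
}
```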
Image data in the form of a video stream also carries a new kind of extension metadata. The most typical example is a high-precision timestamp for each frame, which can help synchronize multiple sensors to create a joint multi-sensor observation. Positioning information from a sensor-integrated IMU and GPS can likewise be sent synchronously with each frame, which greatly helps the speed and quality of subsequent processing such as SLAM. Existing encoding formats, transmission protocols, and the corresponding tool chains, designed for ordinary multimedia video streams, have paid little attention to the synchronous transmission of metadata. No custom metadata section is reserved in the H.261, H.262, and H.263 standards, and although the H.264 definition reserves a custom metadata section named SEI, neither the commonly used encoding tool chain x264 nor FFmpeg implements support for it.
To solve this problem, the inventors propose encoding the metadata into JSON format and sending it into the subtitle track for transmission. Since the audio, video, subtitle, and other track data in an ordinary multimedia video stream must also be played synchronously, video frames can be associated with the metadata they belong to by means of the existing frame-by-frame synchronization mechanism, satisfying the requirement of synchronous metadata transmission. Because subtitles are a common data type of ordinary multimedia video streams, and plain-text subtitle streams are defined by the RFC 4103 standard, existing tool chains provide fairly complete support. Even at a playback end that has not been specially designed, the metadata can be presented as subtitles in an ordinary player, so the metadata information can still be obtained, as shown in fig. 3.
Therefore, the encoding module compression-encoding the media elementary stream further includes: the encoding module cyclically takes frame attribute description data out of the first-in first-out frame buffer queue, encodes it into JSON format, and transmits it synchronously in the form of subtitles. The metadata is used to establish joint multi-sensor observation. Further, encoding the frame attribute description data into JSON format includes: for frame attribute description data in binary form, encoding it into an ASCII string with a base64 encoder, and then encoding the ASCII string into JSON format.
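A minimal sketch of this metadata path follows: a binary frame-attribute record is base64-encoded into ASCII and wrapped in a JSON object ready to be written into the subtitle track. The JSON field names ("ts", "imu") are illustrative assumptions.

```c
/* base64 + JSON packaging sketch for the per-frame metadata. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static const char B64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Classic base64: every 3 input bytes become 4 output characters;
 * out must hold at least 4 * ((n + 2) / 3) + 1 bytes. */
void base64_encode(const uint8_t *in, size_t n, char *out)
{
    size_t i, o = 0;
    for (i = 0; i + 2 < n; i += 3) {
        uint32_t v = (in[i] << 16) | (in[i + 1] << 8) | in[i + 2];
        out[o++] = B64[(v >> 18) & 63];
        out[o++] = B64[(v >> 12) & 63];
        out[o++] = B64[(v >> 6) & 63];
        out[o++] = B64[v & 63];
    }
    if (n - i == 1) {                       /* one trailing byte */
        uint32_t v = in[i] << 16;
        out[o++] = B64[(v >> 18) & 63];
        out[o++] = B64[(v >> 12) & 63];
        out[o++] = '='; out[o++] = '=';
    } else if (n - i == 2) {                /* two trailing bytes */
        uint32_t v = (in[i] << 16) | (in[i + 1] << 8);
        out[o++] = B64[(v >> 18) & 63];
        out[o++] = B64[(v >> 12) & 63];
        out[o++] = B64[(v >> 6) & 63];
        out[o++] = '=';
    }
    out[o] = '\0';
}

/* Wrap the encoded payload in a JSON line destined for the subtitle track. */
int format_metadata_json(double timestamp, const char *b64, char *dst, size_t cap)
{
    return snprintf(dst, cap, "{\"ts\":%.6f,\"imu\":\"%s\"}", timestamp, b64);
}
```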
Further, the data acquisition module and the encoding module run asynchronously with respect to each other. The two are placed in different threads, which makes full use of multiple computing cores and prevents synchronous I/O waits during raw-data acquisition from degrading encoding and sending performance.
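The sketch below shows one way this asynchronous hand-off could be structured with POSIX threads: the acquisition thread pushes frame entries into a mutex-protected first-in first-out queue, the encoding thread pops them, and neither blocks on the other's I/O. The queue capacity and entry fields are illustrative assumptions.

```c
/* Producer/consumer FIFO frame buffer queue shared by the acquisition and
 * encoding threads. Initialize the mutex and condition variables with
 * PTHREAD_MUTEX_INITIALIZER / PTHREAD_COND_INITIALIZER or pthread_*_init. */
#include <pthread.h>
#include <stdint.h>

#define QUEUE_CAP 8   /* illustrative capacity */

typedef struct {
    uint8_t *frame;      /* raw frame bytes */
    char    *meta_json;  /* frame attribute description, already JSON */
} FrameEntry;

typedef struct {
    FrameEntry      slot[QUEUE_CAP];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty, not_full;
} FrameQueue;

void queue_push(FrameQueue *q, FrameEntry e)   /* called by capture thread */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QUEUE_CAP)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->slot[q->tail] = e;
    q->tail = (q->tail + 1) % QUEUE_CAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

FrameEntry queue_pop(FrameQueue *q)            /* called by encoder thread */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    FrameEntry e = q->slot[q->head];
    q->head = (q->head + 1) % QUEUE_CAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return e;
}
```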
Step S3, based on the web server Nginx deployed on the Linux operating system, the encoded media elementary stream is sent to the client terminal over the network transmission protocol RTMP; and/or, based on the streaming media server rtsp-simple-server deployed on the Linux operating system, the encoded media elementary stream is sent to the client terminal over the network transmission protocol RTSP. Transmission of the encoded media elementary stream to the client terminal over the real-time protocol RTMP or RTSP is thereby achieved.
Nginx is a lightweight open-source web and proxy server that runs on the Linux platform and is highly extensible; combining Nginx with FFmpeg allows an RTMP server to be built quickly, with FFmpeg pushing the encoded stream to Nginx, which then provides the streaming media to client terminals. The data pushing end and the receiving end can also be connected through rtsp-simple-server, which needs no dependencies, is compatible with Linux and Windows, allows multiple streams to be published at once and read by multiple users, and is also responsible for transmitting the metadata.
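To illustrate the push side, the sketch below opens an RTMP output with FFmpeg's libavformat; an RTMP ingest such as the Nginx RTMP module expects an FLV container. The URL and the application/stream name are placeholder assumptions, and stream setup and error handling are abbreviated.

```c
/* Open an RTMP output for pushing the encoded elementary stream. */
#include <libavformat/avformat.h>

AVFormatContext *open_rtmp_output(const char *url /* e.g. "rtmp://<board-ip>/live/cam0" */)
{
    AVFormatContext *mux = NULL;
    /* RTMP ingest expects the FLV container format. */
    if (avformat_alloc_output_context2(&mux, NULL, "flv", url) < 0)
        return NULL;
    if (!(mux->oformat->flags & AVFMT_NOFILE) &&
        avio_open(&mux->pb, url, AVIO_FLAG_WRITE) < 0)
        return NULL;
    /* avformat_new_stream() and avformat_write_header() would follow; each
     * encoded packet is then sent with av_interleaved_write_frame(). */
    return mux;
}
```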
In summary, the three-dimensional video streaming media transmission method provided by the invention uses a low-cost small single-board computer as the hardware platform, receives the raw data of each sensor, encodes it in real time by means of FFmpeg, and pushes a real-time remote sensing data stream outward. The invention realizes real-time network transmission of the video stream on an embedded platform; general streaming media systems are built on PCs, and the embedded approach is more flexible by comparison. Experiments show that real-time encoding, asynchronous operation of the data acquisition module and the encoding module, the light weight of the embedded transmission system, and synchronous transmission of metadata together reduce latency to a certain extent, reduce resource occupation, and improve transmission efficiency.
Based on the three-dimensional video streaming media transmission method provided by the invention, the invention also provides a three-dimensional video streaming media visualization method, as shown in fig. 4, comprising the following steps:
Step A1, the client terminal receives the encoded media elementary stream transmitted according to the three-dimensional video streaming media transmission method, and decodes the encoded media elementary stream into a bitmap sequence.
Specifically, decoding the encoded media elementary stream into a bitmap sequence includes: based on the Unity platform, decoding the encoded media elementary stream into a bitmap sequence by calling the open-source computer program FFmpeg through a C-language middleware.
The encoded media elementary stream is decoded using FFmpeg. Although FFmpeg's exported functions are C-style and can in theory be called directly from C# scripts on the Unity platform, their parameters use many custom structures, and redefining all of them in C# scripts would be very tedious. Moreover, the memory layout of a C# struct is not always fully consistent with that of C, which could jeopardize stable operation of the program. Since only FFmpeg's receiving and decoding functionality is needed, the invention creates an intermediate isolation layer in the C language to solve this problem. The middle layer only has to expose a small number of interfaces, such as open, close, and get-frame, to the control module using standard data types; the actual calls into FFmpeg are implemented inside the middle layer. FFmpeg's custom data structures are thus isolated within the middle layer, decoupling the decoding module from the control module.
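A sketch of what such an isolation-layer interface could look like is given below; the function names and signatures are illustrative assumptions. The point is that only standard C types cross the boundary, while every FFmpeg structure stays hidden inside the layer's implementation file.

```c
/* Isolation-layer header (illustrative): the Unity side sees only plain C
 * types; all FFmpeg contexts live behind the opaque handle. */
#include <stdint.h>

#ifdef __cplusplus
extern "C" {
#endif

/* Open an RTSP/RTMP source; returns an opaque handle, or 0 on failure. */
int64_t stream_open(const char *url, int *out_width, int *out_height);

/* Copy the next decoded frame into a caller-owned RGBA32 buffer.
 * Returns 1 if a new frame was written, 0 if none is ready yet. */
int stream_get_frame(int64_t handle, uint8_t *rgba, int buffer_size);

/* Release the decoder and all FFmpeg state held by the handle. */
void stream_close(int64_t handle);

#ifdef __cplusplus
}
#endif
```

On the Unity side, such entry points would be bound with [DllImport] attributes in a C# script; because the signatures use only standard types, none of FFmpeg's structures need to be redefined in C#.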
Step A2, based on a GPU, the bitmap sequence is processed pixel by pixel with a displacement mapping method to recover the true position of each pixel of the mapped object in three-dimensional space, enhancing the visualization effect and yielding a preliminary visualization result; texture correction is then applied to the preliminary visualization result to obtain the final visualization result, which is output to the user's screen.
The true position of a pixel of the mapped object in three-dimensional space comprises the displacement in the depth direction together with the displacements in the X and Y directions induced by inverting the central projection.
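Concretely, under a pinhole camera model with intrinsics (fx, fy, cx, cy), inverting the central projection maps a pixel (u, v) with depth d to its position in camera space. The patent performs this per pixel on the GPU; a C transcription of the arithmetic is sketched below for clarity, with the intrinsics assumed known.

```c
/* Recover the 3D camera-space position of a pixel from its image
 * coordinates and depth by inverting the central (pinhole) projection. */
typedef struct { float x, y, z; } Vec3;

Vec3 unproject_pixel(float u, float v, float depth,
                     float fx, float fy, float cx, float cy)
{
    Vec3 p;
    p.z = depth;                   /* displacement along the depth axis   */
    p.x = (u - cx) * depth / fx;   /* X displacement from the inverse     */
    p.y = (v - cy) * depth / fy;   /* Y displacement  central projection  */
    return p;
}
```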
The three-dimensional visualization processing mainly comprises a control stage and a rendering stage. The control stage contains the main logic of the program; it is responsible for invoking the decoding module and feeding the decoded bitmap sequence, in order, into the rendering stage. The rendering stage mainly restores the positions of points, lines, and surfaces in three-dimensional space from the input bitmap data, applies post-processing algorithms to enhance the rendering effect or correct rendering errors, and finally outputs the visualization result to the user's screen.
In a typical three-dimensional rendering task, preparation of the elements of the three-dimensional scene is generally done by the CPU, and the GPU is usually only responsible for projecting objects from three-dimensional space onto the two-dimensional screen. For a task with rapidly changing three-dimensional elements such as vertices, a very large data volume, and a strict latency requirement, continuing to use the traditional rendering workflow would overload the CPU. The invention therefore introduces the displacement mapping method from computer graphics and migrates the construction of the three-dimensional scene from the CPU to the GPU. The GPU is a parallel executor, very well suited to per-pixel and per-vertex processing, and its judicious use greatly improves the operating efficiency of the system.
Because the Y axes of texture coordinates and image coordinates run in opposite directions, copying the image data directly into the texture would leave the final display upside down, so a texture correction step must be added here.
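A minimal sketch of this correction on the CPU side follows; in a shader, the equivalent fix is remapping the V texture coordinate to 1 - v. The RGBA32 pixel size in the comment is an illustrative assumption.

```c
/* Flip the bitmap vertically while copying it into the texture upload
 * buffer, compensating for the opposite Y axes of image and texture
 * coordinates. bpp is bytes per pixel (e.g. 4 for RGBA32). */
#include <stdint.h>
#include <string.h>

void flip_rows(const uint8_t *src, uint8_t *dst, int width, int height, int bpp)
{
    int stride = width * bpp;                       /* bytes per row */
    for (int y = 0; y < height; y++)
        memcpy(dst + (size_t)(height - 1 - y) * stride,
               src + (size_t)y * stride, stride);
}
```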
To sum up, the three-dimensional video streaming media visualization method provided by the invention receives the encoded media elementary stream transmitted by the embedded system; based on the Unity platform, it decodes the encoded media elementary stream into a bitmap sequence by calling the open-source computer program FFmpeg through a C-language middleware, with FFmpeg's custom data structures isolated inside the middle layer and the receiving/decoding module decoupled from the control module; and it migrates the task of three-dimensional data visualization from the CPU to the GPU, whose judicious use greatly improves operating efficiency.
The generality of the method is verified with 360-degree panoramic video and depth images, as shown in figs. 5 and 6. Performance is evaluated in three respects: delay, frame rate, and CPU occupancy. The delay of the whole acquisition, encoding, transmission, decoding, and visualization loop is measured by filming a stopwatch on the receiving-end screen; the CPU occupancy is read from the task manager; and the frame rate is computed from the time difference between two adjacent frames. The maximum rendering frame rate is set to 60 fps, because an excessively high frame rate contributes almost nothing to the experience and merely wastes system resources.
The usability of the method is shown with the two examples of the panoramic image and the depth image. Compared with the panoramic visualization pipeline, the biggest difference in the depth-image case is an added computation step on the GPU side that transforms vertex positions; the test results show that the GPU occupancy of both examples stays below 35%, the GPU does not form a bottleneck, and the two examples perform similarly.
The method is compared with FFplay, the lightweight player component shipped with FFmpeg, and the experimental results are shown in Table 1. They show that the delay is highest under FFplay's default settings; the delay of the FFplay player with the no-buffer setting is greatly reduced but still higher than that of the proposed method, which has the lowest delay and whose overall loop delay of 310 ms can satisfy the requirements of most application scenarios. FFplay has the lowest CPU occupancy because it does not handle the three-dimensional spatial transformation. As for frame rate, an excessively high frame rate easily wastes resources, and the 60 fps rendering frame rate used here provides a good experience.
TABLE 1. Comparison of test results for the method of the present invention, the FFplay player with default settings, and the FFplay player with the no-buffer setting
It should be understood that the above description of specific embodiments is intended only to illustrate the technical route and features of the present invention, so that those skilled in the art may understand and implement it; the present invention is not limited to the specific embodiments above. All changes and modifications falling within the scope of the appended claims are intended to be embraced therein.

Claims (8)

1. A three-dimensional video streaming media transmission method is characterized by comprising the following steps:
S1, based on a single-board computer running a Linux operating system, a data acquisition module acquires the raw data of the video sensor and converts the raw data into a media elementary stream;
S2, based on the single-board computer running the Linux operating system, an encoding module compresses and encodes the media elementary stream to obtain an encoded media elementary stream;
S3, based on a web server Nginx deployed on the Linux operating system, the encoded media elementary stream is sent to the client terminal over the network transmission protocol RTMP; and/or, based on a streaming media server rtsp-simple-server deployed on the Linux operating system, the encoded media elementary stream is sent to the client terminal over the network transmission protocol RTSP;
in S1, the converting of the raw data into the media elementary stream includes: separating the raw data by frame, and placing the raw frame data and the corresponding frame attribute description data into a first-in first-out frame buffer queue;
in S2, the encoding module compression-encoding the media elementary stream includes: the encoding module cyclically taking the frame attribute description data out of the first-in first-out frame buffer queue, encoding it into JSON format, sending it into a subtitle track, and transmitting it synchronously in the form of subtitles.
2. The method of claim 1, wherein the video sensor comprises an ordinary surveillance camera, a panoramic camera, and a depth camera.
3. The method according to claim 1, wherein, in S1,
the data acquisition module acquires the raw data of a UVC video sensor through the universal driver video4linux, a UVC video sensor being a video sensor whose data transmission conforms to the USB Video Class (UVC) protocol;
the data acquisition module acquires data from the Intel RealSense series D435i depth sensor through a dedicated driver.
4. The method according to claim 1, wherein in S2, the encoding module compression-encoding the media elementary stream comprises:
the encoding module cyclically taking raw frame data out of the first-in first-out frame buffer queue and feeding it, frame by frame, into an encoding and packaging pipeline based on the open-source computer program FFmpeg for compression encoding.
5. The method of claim 1, wherein the encoding of the frame attribute description data into JSON format comprises:
for frame attribute description data in binary form, encoding it into an ASCII string with a base64 encoder, and encoding the ASCII string into JSON format.
6. The method of claim 1, wherein the data acquisition module and the encoding module run asynchronously with respect to each other.
7. A three-dimensional video streaming media visualization method is characterized by comprising the following steps:
A1, the client terminal receiving the encoded media elementary stream transmitted according to the method of any one of claims 1 to 6 and decoding said encoded media elementary stream into a bitmap sequence;
A2, based on a GPU, processing the bitmap sequence pixel by pixel with a displacement mapping method to recover the true position of each pixel of the mapped object in three-dimensional space, enhancing the visualization effect and obtaining a preliminary visualization result; and applying texture correction to the preliminary visualization result to obtain a final visualization result and outputting it to a user screen.
8. The method of claim 7, wherein said decoding of the encoded media elementary stream into a bitmap sequence in A1 comprises:
based on the Unity platform, decoding the encoded media elementary stream into a bitmap sequence by calling the open-source computer program FFmpeg through a C-language middleware.
CN202110147341.1A 2021-02-03 2021-02-03 Three-dimensional video streaming media transmission method and visualization method Active CN112995134B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110147341.1A CN112995134B (en) 2021-02-03 2021-02-03 Three-dimensional video streaming media transmission method and visualization method


Publications (2)

Publication Number Publication Date
CN112995134A (en) 2021-06-18
CN112995134B (en) 2022-03-18

Family

ID=76346297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110147341.1A Active CN112995134B (en) 2021-02-03 2021-02-03 Three-dimensional video streaming media transmission method and visualization method

Country Status (1)

Country Link
CN (1) CN112995134B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114326764A (en) * 2021-11-29 2022-04-12 上海岩易科技有限公司 Rtmp transmission-based smart forestry unmanned aerial vehicle fixed-point live broadcast method and unmanned aerial vehicle system

Citations (1)

Publication number Priority date Publication date Assignee Title
CN105025327A (en) * 2015-07-14 2015-11-04 福建富士通信息软件有限公司 Method and system for live broadcast of mobile terminal

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
CN100581257C (en) * 2006-12-19 2010-01-13 浙江工业大学 Method and system for transmitting real time flow media based on video frequency frame splitting
CA2728326A1 (en) * 2008-06-30 2010-01-07 Thomson Licensing Method and apparatus for dynamic displays for digital cinema
CN101478669B (en) * 2008-08-29 2012-06-27 百视通网络电视技术发展有限责任公司 Media playing control method based on browser on IPTV system
CN102724561A (en) * 2012-05-16 2012-10-10 昆山日通电脑科技办公设备有限公司 Embedded real time streaming media network transmission method and implementation system thereof
WO2014172654A1 (en) * 2013-04-19 2014-10-23 Huawei Technologies Co., Ltd. Media quality information signaling in dynamic adaptive video streaming over hypertext transfer protocol
CN103646395B (en) * 2013-11-28 2016-06-01 中南大学 A kind of High-precision image method for registering based on grid method
CN104954812A (en) * 2014-03-27 2015-09-30 腾讯科技(深圳)有限公司 Video synchronized playing method, device and system
US9681111B1 (en) * 2015-10-22 2017-06-13 Gopro, Inc. Apparatus and methods for embedding metadata into video stream
CN106210451A (en) * 2016-08-02 2016-12-07 成都索贝数码科技股份有限公司 A kind of method and system of multi-track video editing based on html5
CN108881202A (en) * 2018-06-08 2018-11-23 北京联合众为科技发展有限公司 A kind of video monitoring system and method
CN109165186A (en) * 2018-07-09 2019-01-08 广州梦映动漫网络科技有限公司 A kind of reading method and electronic equipment of unrestrained shadow
CN110022297B (en) * 2019-03-01 2021-09-24 广东工业大学 High-definition video live broadcast system
CA3037908A1 (en) * 2019-03-25 2020-09-25 Kemal S. Ahmed Beat tracking visualization through textual medium
CN111064972A (en) * 2019-11-28 2020-04-24 湖北工业大学 Live video control method based on IPV9


Also Published As

Publication number Publication date
CN112995134A (en) 2021-06-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant