CN117596442A - Converged communication method and platform - Google Patents

Converged communication method and platform

Info

Publication number: CN117596442A
Authority: CN (China)
Prior art keywords: audio, video data, video, terminal, module
Legal status: Pending
Application number: CN202410060669.3A
Other languages: Chinese (zh)
Inventors: 章海新 (Zhang Haixin), 杨华 (Yang Hua), 李一帆 (Li Yifan), 薛晨 (Xue Chen)
Current Assignee: Shenzhen Star Network Communication Technology Co., Ltd.
Original Assignee: Shenzhen Star Network Communication Technology Co., Ltd.
Application filed by Shenzhen Star Network Communication Technology Co., Ltd.
Priority to CN202410060669.3A
Publication of CN117596442A (legal status: pending)


Abstract

The invention discloses a converged communication method and a converged communication platform, belonging to the technical field of communications. The converged communication method comprises the steps of: acquiring the network bandwidth of an opposite terminal device, and pulling a plurality of corresponding first audio/video data from a plurality of main stream communication terminals; converting the audio data and the video data in the plurality of first audio/video data into corresponding preset formats, respectively, to obtain a plurality of second audio/video data; encoding the second audio/video data based on encoding parameters to obtain third audio/video data; fusing the plurality of third audio/video data to obtain fourth audio/video data; adjusting the code stream of the fourth audio/video data based on the network bandwidth to obtain fifth audio/video data; and sending the fifth audio/video data to the opposite terminal device. Because the delay of the entire audio/video transmission process is reduced as a whole, the invention significantly reduces the picture delay at the opposite terminal device.

Description

Converged communication method and platform
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a converged communication method and platform.
Background
In converged communication applications, different audio/video terminal devices (such as intelligent law enforcement recorders, national-standard monitoring equipment, unmanned aerial vehicles, deployable ball cameras, and mobile phones) must be converged onto a converged communication platform so that audio/video consultation can be carried out when necessary.
However, the audio/video files of different devices usually use different audio/video coding formats and different communication protocols, and the differing audio/video data formats must be converted into a uniform format. As a result, the pictures received by devices accessing the converged communication platform exhibit obvious picture delay.
Disclosure of Invention
The invention mainly aims to provide a converged communication method and a converged communication platform, so as to solve the technical problem in the related art that pictures received by devices accessing a converged communication platform exhibit obvious picture delay.
In order to achieve the above object, the present invention provides a converged communication method, which includes the steps of:
acquiring network bandwidth of opposite terminal equipment, and pulling a plurality of corresponding first audio/video data from a plurality of main stream communication terminals; the main stream communication terminal comprises a national standard equipment terminal, an unmanned equipment terminal, an intelligent equipment terminal or an emergency command terminal;
respectively converting the audio data and the video data in the plurality of first audio-video data into corresponding preset formats to obtain a plurality of second audio-video data;
encoding the plurality of second audio/video data based on the encoding parameters to obtain a plurality of third audio/video data; the coding parameters comprise a video coding slice value, an audio sampling rate and a video coding buffer zone value, wherein the video coding slice value is smaller than a reference video coding slice value, the audio sampling rate is smaller than the reference audio sampling rate, and the video coding buffer zone value is smaller than the reference video coding buffer zone value;
Fusing the plurality of third audio and video data to obtain fourth audio and video data;
based on network bandwidth, performing code stream adjustment on the fourth audio/video data to obtain fifth audio/video data;
and transmitting the fifth audio and video data to the opposite terminal equipment.
Optionally, fusing the plurality of third audio/video data to obtain fourth audio/video data includes the steps of:
acquiring current equipment information of all main stream communication terminals;
based on the current equipment information, creating a corresponding video soft terminal for each main stream communication terminal;
binding the third audio and video data with the corresponding video soft terminal;
and based on the video soft terminal, fusing the bound third audio and video data to obtain fourth audio and video data.
Optionally, the step of binding the third audio/video data with the corresponding video soft terminal includes:
binding audio data in the plurality of third audio-video data with the audio channels of the corresponding video soft terminals;
binding video data in the third audio and video data with the video channels of the corresponding video soft terminals.
Optionally, the opposite terminal equipment comprises a third party live broadcast platform or a high-definition wall terminal;
The step of sending fifth audio/video data to the opposite terminal device includes:
transmitting fifth audio and video data to a third party live broadcast platform; and/or
The step of sending fifth audio/video data to the opposite terminal device includes:
protocol packaging is carried out on the fifth audio and video data based on a first preset communication protocol; the first preset communication protocol comprises any one of a real-time message transmission RTMP protocol, a real-time streaming RTSP protocol, a network real-time communication WebRTC protocol or an HTTPS-based FLV video streaming HTTPS-FLV protocol;
based on the packaged fifth audio and video data and the fixed network address of the high-definition wall terminal, generating a unique hypertext markup language (HTML) page and sending the HTML page to the high-definition wall terminal.
Optionally, before the step of fusing the plurality of third audio/video data to obtain the fourth audio/video data, the method further includes:
pulling a corresponding plurality of sixth audio/video data from the plurality of high-speed wireless communication VoLTE terminals;
respectively carrying out protocol encapsulation on the plurality of sixth audio/video data based on a second preset communication protocol to obtain a plurality of seventh audio/video data; the second preset communication protocol is an audio and video universal transmission protocol;
after the step of fusing the plurality of third audio/video data to obtain fourth audio/video data, the method further includes:
And fusing the seventh audio and video data to obtain eighth audio and video data and sending the eighth audio and video data to the video terminal.
Optionally, before the step of fusing the plurality of third audio/video data to obtain the fourth audio/video data, the method further includes:
and pulling a plurality of seventh audio/video data from the accessed gateway device.
Optionally, after the step of performing code stream adjustment on the fourth audio/video data to obtain the fifth audio/video data based on the network bandwidth, the method further includes:
acquiring the current network bandwidths of all main stream communication terminals;
based on the current network bandwidth of the main stream communication terminal, performing code stream adjustment on the fourth audio/video data to obtain ninth audio/video data;
converting the format of the ninth audio/video data into an audio/video format corresponding to the main stream communication terminal to obtain tenth audio/video data;
and carrying out protocol encapsulation on the tenth audio/video data based on the communication protocol corresponding to the main stream communication terminal, and sending it back to all the corresponding main stream communication terminals.
In addition, to achieve the above object, the present invention further provides a converged communication platform, the platform comprising: the device comprises a primary conversion module, an input coding module, an output coding module and an audio/video fusion module;
The primary conversion module is used for acquiring network bandwidth of opposite terminal equipment and pulling a plurality of corresponding first audio/video data from a plurality of main stream communication terminals; respectively converting the audio data and the video data in the plurality of first audio-video data into corresponding preset formats to obtain a plurality of second audio-video data; based on a second preset communication protocol, carrying out protocol encapsulation on a plurality of second audio/video data and sending the second audio/video data to an input coding module;
the input encoding module is used for encoding the plurality of second audio/video data based on the encoding parameters to obtain a plurality of third audio/video data; based on a second preset communication protocol, carrying out protocol encapsulation on a plurality of third audio/video data and sending the third audio/video data to an audio/video fusion module;
the audio/video fusion module is used for fusing the plurality of third audio/video data to obtain fourth audio/video data; the fourth audio and video data are subjected to protocol encapsulation based on a second preset communication protocol and are sent to an output coding module;
the output coding module is used for carrying out code stream adjustment on the fourth audio/video data based on the network bandwidth to obtain fifth audio/video data; and transmitting the fifth audio and video data to the opposite terminal equipment.
Optionally, the platform further comprises: a streaming media service module and a streaming media playing module;
The input coding module is specifically used for acquiring current equipment information of all main stream communication terminals; based on the current equipment information, creating a corresponding video soft terminal for each main stream communication terminal; binding the third audio and video data with the corresponding video soft terminal;
the audio and video fusion module is specifically used for fusing the bound third audio and video data based on the video soft terminal to obtain fourth audio and video data; and/or
The input encoding module is specifically configured to bind audio data in the plurality of third audio/video data with an audio channel of the corresponding video soft terminal; binding video data in the third audio and video data with the video channels of the corresponding video soft terminals.
The output coding module is specifically used for sending fifth audio and video data to the third-party live broadcast platform; and/or
The streaming media service module is used for carrying out protocol encapsulation on the fifth audio/video data based on a first preset communication protocol; sending the packaged fifth audio and video data to a streaming media playing module;
the streaming media playing module is used for generating a unique hypertext markup language (HTML) page based on the packaged fifth audio and video data and the fixed network address of the high-definition wall terminal, and sending the unique HTML page to the high-definition wall terminal.
Optionally, the platform further comprises a program-controlled exchange module;
the program control exchange module is used for pulling a plurality of corresponding sixth audio/video data from the plurality of high-speed wireless communication VoLTE terminals; respectively carrying out protocol encapsulation on the plurality of sixth audio/video data based on a second preset communication protocol to obtain a plurality of seventh audio/video data; transmitting the seventh audio and video data to an audio and video fusion module;
the audio/video fusion module is further used for fusing a plurality of seventh audio/video data to obtain eighth audio/video data and sending the eighth audio/video data to the video terminal; and/or
The audio and video fusion module is also used for pulling a plurality of seventh audio and video data from the accessed gateway equipment; and/or
The input coding module is also used for acquiring the current network bandwidths of all the main stream communication terminals;
the output coding module is also used for carrying out code stream adjustment on the fourth audio/video data based on the current network bandwidth of the main stream communication terminal to obtain ninth audio/video data; transmitting the ninth audio and video data to a preliminary conversion module;
the primary conversion module is further used for converting the format of the ninth audio/video data into an audio/video format corresponding to the main stream communication terminal to obtain tenth audio/video data; and carrying out protocol encapsulation on the tenth audio/video data based on the communication protocol corresponding to the main stream communication terminal, and sending it back to all the corresponding main stream communication terminals.
On the basis of uniformly converting the formats of the first audio and video data pulled from different main stream communication terminals, the invention encodes the converted second audio and video data based on encoding parameters such as the video coding slice value, the audio sampling rate and the video coding buffer value, so that the video coding slice value, the audio sampling rate and the video coding buffer value of the encoded third audio and video data are reduced below their reference values. The code stream of the fused fourth audio and video data is further adjusted according to the network bandwidth. Together, these measures reduce the delay of the whole audio and video transmission process and thereby significantly reduce the picture delay at the opposite terminal device.
Drawings
FIG. 1 is a flow chart of a first embodiment of a converged communication method of the present invention;
FIG. 2 is a flow chart of a second embodiment of the converged communication method of the present invention;
FIG. 3 is a flow chart of a third embodiment of the converged communication method of the present invention;
FIG. 4 is a schematic architecture diagram of the converged communication platform of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Analysis of the related art reveals the following: in conventional converged communication applications, different audio/video terminal devices (such as intelligent law enforcement recorders, national-standard monitoring equipment, unmanned aerial vehicles, deployable ball cameras, and mobile phones) must be converged onto a converged communication platform so that audio/video consultation can be carried out when necessary.
However, the audio/video files of different devices usually use different audio/video coding formats and different communication protocols, and the differing audio/video data formats must be converted into a uniform format. As a result, the pictures received by devices accessing the converged communication platform exhibit obvious picture delay.
Therefore, the invention encodes the converted second audio/video data based on encoding parameters such as the video coding slice value, the audio sampling rate and the video coding buffer value, so that the video coding slice value, the audio sampling rate and the video coding buffer value of the encoded third audio/video data are reduced below their reference values, and additionally adjusts the code stream of the fused fourth audio/video data according to the network bandwidth. This reduces the delay of the whole audio/video transmission process and thereby significantly reduces the picture delay at the opposite terminal device.
The inventive concept of the present invention will be described in detail below by means of specific examples.
The embodiment of the invention provides a converged communication method, referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of the converged communication method.
In this embodiment, the converged communication method includes:
step S100: and acquiring network bandwidth of the opposite terminal equipment, and pulling a plurality of corresponding first audio/video data from the plurality of main stream communication terminals.
The main stream communication terminal comprises a national standard equipment terminal, an unmanned equipment terminal, an intelligent equipment terminal or an emergency command terminal;
specifically, in general, the convergence communication device mainly functions to merge audio and video data of the main stream communication terminal and send the merged audio and video data to the opposite terminal device. In this embodiment, the converged communication device first needs to pull a plurality of corresponding first audio/video data from a plurality of main stream communication terminals, and acquire network bandwidths of the peer devices.
The main stream communication terminal comprises a national standard equipment terminal (such as a national standard monitoring device or a spherical monitoring camera), an unmanned equipment terminal (such as an unmanned aerial vehicle, an unmanned ship or an unmanned vehicle), an intelligent equipment terminal (such as an intelligent law enforcement recorder or intelligent glasses) or an emergency command terminal (such as an emergency command vehicle). In this embodiment, the converged communication device pulls the first audio/video data from the plurality of device terminals simultaneously.
The first audio and video data is original audio and video data.
Step S200: and respectively converting the audio data and the video data in the plurality of first audio-video data into corresponding preset formats to obtain a plurality of second audio-video data.
Step S300: and encoding the plurality of second audio/video data based on the encoding parameters to obtain a plurality of third audio/video data.
The encoding parameters include a video encoding slice value, an audio sampling rate, and a video encoding buffer value, and the video encoding slice value is less than the reference video encoding slice value, the audio sampling rate is less than the reference audio sampling rate, and the video encoding buffer value is less than the reference video encoding buffer value.
Specifically, the plurality of first audio/video data pulled from the different equipment terminals come in a variety of formats, so format conversion must be performed on each first audio/video data to convert them uniformly into the preset format. The preset format may be any format that unifies the audio and video formats. In this embodiment, to facilitate rapid encoding of the subsequent audio/video data, the video format is preferably H.264 (a video compression standard) and the audio format is preferably PCMA (a G.711 A-law audio codec format). H.264 can compress video data to a smaller file size by using a more efficient compression algorithm while maintaining high video quality; compared with other video coding formats, H.264 provides better video quality at the same code rate, or uses a lower code rate at the same video quality. PCMA compresses the signal with the A-law algorithm, retains relatively high audio quality, and is suitable for scenes with high sound-quality requirements, such as telephone communication.
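As a minimal sketch of this unified conversion (an illustration only: the patent names no tool, and the ffmpeg invocation, URLs and parameter values below are assumptions), one transcode per pulled stream suffices:

```python
import subprocess

def to_unified_format(input_url: str, output_url: str) -> None:
    """Step S200 sketch: convert one pulled first audio/video stream
    into the preset H.264 (video) + PCMA/G.711 A-law (audio) format."""
    subprocess.run([
        "ffmpeg",
        "-i", input_url,           # e.g. an RTSP pull from one terminal
        "-c:v", "libx264",         # preset video format: H.264
        "-c:a", "pcm_alaw",        # preset audio format: PCMA (A-law)
        "-ar", "8000",             # G.711 conventionally uses 8 kHz
        "-f", "rtsp", output_url,  # hand the second a/v data downstream
    ], check=True)
```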
Second audio/video data is obtained after format conversion of the first audio/video data; this preliminary format conversion allows the pulled audio/video data to meet the basic conditions for audio/video fusion. On this basis, the second audio/video data is re-encoded according to the encoding parameters to obtain low-delay third audio/video data. The encoding parameters include the video coding slice value, the audio sampling rate and the video coding buffer value, of which the video coding slice value and the video coding buffer value relate to video data encoding.
It will be appreciated that video coding slicing partitions a video stream into a series of small data blocks so that it can be processed and delivered more efficiently during transmission. In this embodiment, the video coding slice value is set smaller than the reference video coding slice value during encoding, so the transmission time of each slice is correspondingly shorter, which effectively reduces the transmission delay of the video data.
In addition, the video buffer is a temporary storage area for storing received video data. The larger buffer area can store more video data, so that smooth playing of the receiving end is ensured. However, a larger buffer also increases the playback delay, i.e. the time from the user requesting playback to the actual start of playback. In this embodiment, the video buffer value is controlled below the reference video buffer value, so that delay of video playing can be effectively reduced while smooth video playing is ensured.
Wherein the audio sampling rate is a parameter related to audio data encoding.
It is understood that the audio sampling rate refers to the number of times audio data is acquired and recorded per unit time, typically expressed in hertz (Hz). The sampling rate determines the accuracy and quality of the digital audio and also affects the audio file size and transmission bandwidth. A higher sampling rate means that more sampling points need to be transmitted and thus increases the transmission delay. In addition, if audio needs to be played in synchronization with other media (e.g., video), high sampling rate audio requires longer time for synchronization processing, thereby increasing synchronization delay. In this embodiment, the audio sampling rate of the audio data is controlled below the reference audio sampling rate, so that the transmission delay and the synchronization delay of the audio data can be effectively reduced while the audio quality is ensured.
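The three latency-related parameters map naturally onto common H.264 encoder options. The sketch below is one possible realisation, not the patent's stated implementation: the reference values are placeholders, and ffmpeg with libx264 is assumed.

```python
import subprocess

# Illustrative reference values; the patent publishes no concrete numbers.
REF_SLICE_BYTES = 1500  # reference video coding slice value
REF_AUDIO_HZ = 16000    # reference audio sampling rate
REF_BUFSIZE = "1000k"   # reference video coding buffer value

def encode_low_latency(input_url: str, output_url: str) -> None:
    """Step S300 sketch: keep slice size, sampling rate and coder
    buffer below their reference values to trim end-to-end delay."""
    subprocess.run([
        "ffmpeg", "-i", input_url,
        "-c:v", "libx264",
        "-tune", "zerolatency",                 # drop look-ahead buffering
        "-x264-params", "slice-max-size=1200",  # slice value < REF_SLICE_BYTES
        "-maxrate", "1M", "-bufsize", "500k",   # buffer value < REF_BUFSIZE
        "-c:a", "pcm_alaw", "-ar", "8000",      # sampling rate < REF_AUDIO_HZ
        "-f", "rtsp", output_url,
    ], check=True)
```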
Step S400: and fusing the plurality of third audio and video data to obtain fourth audio and video data.
Specifically, in the implementation process of converged communication, audio and video data from a plurality of communication terminals need to be converged and then pushed to the opposite terminal device, so that a user can see video pictures from the plurality of communication terminals from the opposite terminal device and hear audio from the plurality of terminals.
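The patent does not specify a mixing mechanism, but as one concrete illustration (assumed, not prescribed), an ffmpeg filter graph can tile the video pictures side by side and mix the audio tracks into a single fused stream:

```python
import subprocess

def fuse_two_streams(url_a: str, url_b: str, out_url: str) -> None:
    """Step S400 sketch for two sources: stack the pictures and mix
    the audio so the opposite terminal sees and hears both terminals
    (inputs are assumed to share a frame height)."""
    subprocess.run([
        "ffmpeg", "-i", url_a, "-i", url_b,
        "-filter_complex",
        "[0:v][1:v]hstack=inputs=2[v];"   # side-by-side video mosaic
        "[0:a][1:a]amix=inputs=2[a]",     # single mixed audio track
        "-map", "[v]", "-map", "[a]",
        "-c:v", "libx264", "-tune", "zerolatency",
        "-f", "rtsp", out_url,
    ], check=True)
```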
Further, in the implementation process, step S400 specifically includes:
step S410: acquiring current equipment information of all main stream communication terminals;
step S420: based on the current equipment information, creating a corresponding video soft terminal for each main stream communication terminal;
step S430: binding the third audio and video data with the corresponding video soft terminal;
step S440: and based on the video soft terminal, fusing the bound third audio and video data to obtain fourth audio and video data.
Specifically, the current device information corresponding to the plurality of main stream communication terminals is acquired first. The device information includes information such as the device model, user name, or network interface address of the main stream communication terminal. Then, according to this information, a corresponding video soft terminal is created for each main stream communication terminal, serving inside the converged communication device as the virtual counterpart of that communication terminal.
Further, binding the obtained third audio/video data with the corresponding video soft terminal. In a specific implementation process, step S430 includes:
step S431: binding audio data in the plurality of third audio-video data with the audio channels of the corresponding video soft terminals;
Step S432: binding video data in the third audio and video data with the video channels of the corresponding video soft terminals.
Specifically, in this embodiment, each video soft terminal has its own audio channel and video channel. In the process of binding the third audio/video data to the corresponding video soft terminal, the audio data in the third audio/video data is bound to the audio channel of the corresponding video soft terminal, and the video data in the third audio/video data is bound to the video channel of the corresponding video soft terminal.
It can be appreciated that in audio-video communication, a plurality of audio-video data need to be bound according to a certain rule to ensure that they can be synchronously transmitted and played. The binding process comprises the steps of binding the audio data with the corresponding audio channels and binding the video data with the corresponding video channels.
Specifically, in a video soft terminal, a user may receive multiple audio/video data streams simultaneously; these data streams must be decoded, processed, and output to the corresponding audio and video channels for viewing and listening. To ensure that the different data streams are bound to the correct channels, the audio data contained in each data stream is bound to the corresponding audio channel, and the video data contained in each data stream is bound to the corresponding video channel. In both cases, the binding can be completed by matching identifiers, time stamps, and the like of the data streams, which ensures that the different audio and video data are played correctly.
The video soft terminal comprises corresponding terminal information, and the terminal information is associated with equipment information of a corresponding main stream communication terminal. And according to the terminal information of the video soft terminal, the bound third audio and video data can be fused to obtain fourth audio and video data. That is, when the opposite terminal device performs audio/video playing according to the fourth audio/video data, the pictures and the audios from the plurality of main stream communication terminals can be simultaneously displayed in the playing content.
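A minimal data-structure sketch of the soft-terminal binding follows; all class and field names here are hypothetical, since the patent defines no such API:

```python
from dataclasses import dataclass, field

@dataclass
class VideoSoftTerminal:
    """Virtual counterpart of one main stream communication terminal."""
    device_model: str
    user_name: str
    net_address: str                                   # network interface address
    audio_channel: list = field(default_factory=list)  # bound audio streams
    video_channel: list = field(default_factory=list)  # bound video streams

def bind_third_av(terminal: VideoSoftTerminal, stream_id: str,
                  audio_frames: list, video_frames: list) -> None:
    """Steps S431/S432: route each stream's audio and video to the
    matching channel; real code would match identifiers/timestamps."""
    terminal.audio_channel.append((stream_id, audio_frames))
    terminal.video_channel.append((stream_id, video_frames))
```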
Step S500: and based on the network bandwidth, performing code stream adjustment on the fourth audio/video data to obtain fifth audio/video data.
Step S600: and transmitting the fifth audio and video data to the opposite terminal equipment.
Specifically, after the fourth audio/video data is obtained, according to the obtained network bandwidth of the opposite terminal device, the code stream adjustment is performed on the fourth audio/video data to obtain fifth audio/video data, so as to ensure that the opposite terminal device can smoothly play.
It will be appreciated that in the audio/video field, the code stream generally refers to the data volume of an audio/video data stream, expressed as a bit rate (Bitrate), i.e. the amount of data transmitted or stored per second. The higher the bit rate, the more data is transferred or stored per unit time, so the bit rate is also commonly used as a measure of the quality and clarity of audio/video data. In the implementation process, when the network bandwidth of the opposite terminal device is small, the code stream of the fourth audio/video data can be reduced step by step according to the network bandwidth until the opposite terminal device can play smoothly.
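One simple realisation of this step-down adjustment is a descending bit-rate ladder; the sketch below uses assumed, illustrative ladder values:

```python
def pick_code_stream(bandwidth_kbps: float,
                     ladder=(4000, 2000, 1000, 500, 250)) -> int:
    """Step S500 sketch: walk a descending bit-rate ladder until the
    stream fits the peer's measured bandwidth, keeping ~25% headroom
    so playback stays smooth; the ladder values are assumptions."""
    for bitrate in ladder:
        if bitrate <= bandwidth_kbps * 0.75:
            return bitrate
    return ladder[-1]  # floor: keep sending at the lowest rung

# e.g. a 1.5 Mbit/s link gets the 1000 kbit/s rung
assert pick_code_stream(1500) == 1000
```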
And finally, carrying out protocol encapsulation on the obtained fifth audio/video data according to the protocol requirement of the opposite terminal equipment, and sending the fifth audio/video data to the opposite terminal equipment so that the opposite terminal equipment plays according to the fifth audio/video data.
Further, in this embodiment, the peer device includes a third party live broadcast platform or a high definition wall terminal.
If the peer device is a third party live platform, step S600 includes:
step S610: and sending the fifth audio and video data to a third party live broadcast platform.
If the opposite terminal device is a high definition wall terminal, step S600 includes:
step S620: and carrying out protocol encapsulation on the fifth audio and video data based on the first preset communication protocol.
The first preset communication protocol comprises any one of a real-time message transmission RTMP protocol, a real-time streaming RTSP protocol, a network real-time communication WebRTC protocol or an HTTPS-based FLV video streaming HTTPS-FLV protocol;
step S630: based on the packaged fifth audio and video data and the fixed network address of the high-definition wall terminal, generating a unique hypertext markup language (HTML) page and sending the HTML page to the high-definition wall terminal.
Specifically, if the opposite terminal device is a third-party live broadcast platform, the converged communication device directly sends the fifth audio and video data to the third-party live broadcast platform.
If the opposite terminal device is a high-definition wall terminal, the converged communication device first performs protocol encapsulation on the fifth audio/video data based on the first preset communication protocol, and then generates a unique HTML page according to the fixed network address corresponding to the high-definition wall terminal. Finally, the HTML page is sent to the high-definition wall terminal, so that the high-definition wall terminal plays the audio and video content of the fifth audio/video data through the HTML page. Generating a unique HTML page keyed to the fixed network address prevents the high-definition wall terminal from requesting the same resource multiple times.
The first preset communication protocol may be any one of a real-time messaging RTMP protocol, a real-time streaming RTSP protocol, a web real-time communication WebRTC protocol, or an HTTPS-based FLV video streaming HTTPS-FLV protocol.
The high-definition wall terminal can be a smart-city-service high-definition wall terminal, an emergency-command-service high-definition wall terminal, or a "hundred million project" high-definition wall terminal.
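A sketch of the unique-page generation follows. The page-naming scheme and player wiring are assumptions; a production page would embed a proper H5 player for the chosen protocol (e.g. WebRTC) rather than a bare video tag:

```python
import hashlib

H5_PAGE = """<!DOCTYPE html>
<html><body style="margin:0">
  <video id="player" autoplay muted controls width="100%"
         src="{stream_url}"></video>
</body></html>"""

def build_wall_page(stream_url: str, fixed_address: str) -> tuple[str, str]:
    """Derive one page per wall terminal from its fixed network
    address, so the same terminal always receives the same unique
    resource instead of re-requesting new ones."""
    page_id = hashlib.sha1(fixed_address.encode()).hexdigest()[:12]
    return f"wall_{page_id}.html", H5_PAGE.format(stream_url=stream_url)
```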
In this embodiment, on the basis of uniform format conversion of the first audio/video data pulled from the different main stream communication terminals, the converted second audio/video data is encoded based on encoding parameters such as the video coding slice value, the audio sampling rate and the video coding buffer value, so that the video coding slice value, the audio sampling rate and the video coding buffer value of the encoded third audio/video data are reduced below their reference values. At the same time, the code stream of the fused fourth audio/video data is adjusted according to the network bandwidth. Together, these measures reduce the delay of the whole audio/video transmission process and significantly reduce the picture delay at the opposite terminal device.
Further, a second embodiment is proposed based on the first embodiment, referring to fig. 2, fig. 2 is a schematic flow chart of a second embodiment of the converged communication method of the present invention.
In this embodiment, before step S400, the method includes:
step a1: and pulling a corresponding plurality of sixth audio/video data from the plurality of high-speed wireless communication VoLTE terminals.
Step a2: and respectively carrying out protocol encapsulation on the plurality of sixth audio/video data based on a second preset communication protocol to obtain a plurality of seventh audio/video data.
The second preset communication protocol is an audio-video universal transmission protocol.
After step S400, the method includes:
step b1: and fusing the seventh audio and video data to obtain eighth audio and video data and sending the eighth audio and video data to the video terminal.
Further, before step S400, the method further includes:
step a3: and pulling a plurality of seventh audio/video data from the accessed gateway device.
Specifically, in this embodiment, the device accessing the converged communication device may also be a high-speed wireless communication VoLTE terminal, a wireless trunking terminal, or a fixed telephone. The high-speed wireless communication VoLTE terminal generally interacts with the converged communication device directly, while the wireless trunking terminal and the fixed telephone interact with the converged communication device through their own gateway devices on the terminal side.
For the high-speed wireless communication VoLTE terminal, the converged communication device first needs to pull a plurality of sixth audio/video data from the plurality of high-speed wireless communication VoLTE terminals, and uniformly packages the plurality of sixth audio/video data according to a second preset communication protocol to obtain seventh audio/video data. And then fusing the obtained seventh audio and video data to obtain eighth audio and video data, and sending the eighth audio and video data to the video terminal.
And for the wireless trunking terminal and the fixed telephone, the convergence communication equipment directly pulls a plurality of seventh audio/video data from the corresponding gateway equipment and directly fuses the pulled seventh audio/video data.
The second preset communication protocol is an audio and video universal transmission protocol. In a specific implementation, the second preset communication protocol may be any one of an RTP (Real-time Transport Protocol, a network transmission protocol) protocol, an RTCP (Real-time Transport Control Protocol, a complementary protocol to RTP), an SIP (Session Initiation Protocol, a text-based application layer protocol), or a WebRTC (Web Real-Time Communication, a Real-time communication) universal transmission protocol.
It can be understood that the audio/video data sent by the high-speed wireless communication VoLTE terminal has not undergone communication protocol conversion; its protocol is not unified with the audio/video data protocol inside the converged communication device and does not meet the fusion requirement, so unified protocol encapsulation must be performed first. The audio/video data output by the gateway device, by contrast, already uses the audio/video universal transmission protocol, meets the fusion requirement, and needs no unified protocol encapsulation.
In this embodiment, the corresponding sixth audio/video data is pulled from the high-speed wireless communication VoLTE terminals, and unified protocol encapsulation is performed on all the sixth audio/video data based on the second preset communication protocol, so that the sixth audio/video data meets the audio/video fusion requirement of the converged communication device. Meanwhile, the seventh audio/video data is pulled directly from the gateway devices corresponding to the wireless trunking terminals and fixed telephones and fused. In this way, data docking between the converged communication device and every high-speed wireless communication VoLTE terminal, wireless trunking terminal and fixed telephone can proceed at the same time, allowing more audio/video communication scenes to be supported.
Further, a third embodiment is proposed based on the first embodiment, referring to fig. 3, and fig. 3 is a schematic flow chart of a third embodiment of the converged communication method of the present invention.
In this embodiment, after performing fusion and low-delay processing on the audio/video data from each main stream communication terminal, the converged communication device needs to send the result back to each main stream communication terminal. Thus, after step S500, the method further comprises:
step c1: acquiring the current network bandwidths of all main stream communication terminals;
step c2: based on the current network bandwidth of the main stream communication terminal, performing code stream adjustment on the fourth audio/video data to obtain ninth audio/video data;
step c3: converting the format of the ninth audio/video data into an audio/video format corresponding to the main stream communication terminal to obtain tenth audio/video data;
step c4: carrying out protocol encapsulation on the tenth audio/video data based on the communication protocol corresponding to the main stream communication terminal, and sending it back to all the corresponding main stream communication terminals.
Specifically, the current network bandwidth of each main stream communication terminal needs to be acquired first, and the code stream adjustment is performed on the fourth audio/video data obtained through fusion according to the current network bandwidth to acquire ninth audio/video data. Specifically, when the network bandwidth is smaller, the code stream of the fourth audio/video data is reduced, so that smooth playing of the main stream communication terminal is ensured. And then converting the format of the ninth audio/video data into a format corresponding to the main stream communication terminal to obtain tenth audio/video data. And finally, carrying out protocol encapsulation according to a communication protocol corresponding to the main stream communication terminal, and sending the encapsulated tenth audio/video data back to each main stream communication terminal so that each main stream communication terminal can play the audio/video content in the tenth audio/video data.
In this embodiment, according to the network bandwidth of the main stream communication terminal, the code stream adjustment is performed on the fourth audio/video data obtained by fusion, so as to reduce the delay of the fourth audio/video data. On the basis, according to the audio and video format and the communication protocol corresponding to the main stream communication terminal, format conversion and protocol encapsulation are carried out on the ninth audio and video data, and the fused audio and video data can be sent back to each main stream communication terminal, so that the effect of audio and video feedback is realized, and each main stream communication terminal can play the fused audio and video content.
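Steps c1-c4 can be pictured as one re-encode per terminal. The sketch below is illustrative only: the function and parameter names and the RTSP push are assumptions, and the actual codec and protocol depend on each terminal.

```python
def feedback_cmd(fused_url: str, bandwidth_kbps: float,
                 vcodec: str, push_url: str) -> list[str]:
    """Steps c1-c4 sketch: rate-adapt the fused stream (ninth a/v
    data), convert it to the terminal's own format (tenth), and push
    it back over the terminal's protocol."""
    rate = f"{int(bandwidth_kbps * 0.75)}k"  # stay below current bandwidth
    return ["ffmpeg", "-i", fused_url,
            "-c:v", vcodec,                   # e.g. "libx265" for an H.265 terminal
            "-b:v", rate, "-maxrate", rate, "-bufsize", rate,
            "-f", "rtsp", push_url]           # per-terminal mux/protocol (assumed)
```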
Further, the present invention also provides a converged communication platform, which includes:
the primary conversion module is used for acquiring network bandwidth of opposite terminal equipment and pulling a plurality of corresponding first audio/video data from a plurality of main stream communication terminals; respectively converting the audio data and the video data in the plurality of first audio-video data into corresponding preset formats to obtain a plurality of second audio-video data; based on a second preset communication protocol, carrying out protocol encapsulation on a plurality of second audio/video data and sending the second audio/video data to an input coding module;
the input encoding module is used for encoding the plurality of second audio/video data based on the encoding parameters to obtain a plurality of third audio/video data; based on a second preset communication protocol, carrying out protocol encapsulation on a plurality of third audio/video data and sending the third audio/video data to an audio/video fusion module;
The audio/video fusion module is used for fusing the plurality of third audio/video data to obtain fourth audio/video data; the fourth audio and video data are subjected to protocol encapsulation based on a second preset communication protocol and are sent to an output coding module;
the output coding module is used for carrying out code stream adjustment on the fourth audio/video data based on the network bandwidth to obtain fifth audio/video data; and transmitting the fifth audio and video data to the opposite terminal equipment.
Specifically, referring to fig. 4, fig. 4 is a schematic architecture diagram of the converged communication platform. As shown in fig. 4, in this embodiment, the primary conversion module is constructed as an iSwitch module, the input encoding module is constructed as an iFCP-I module, and the output encoding module is constructed as an iFCP-II module.
The iSwitch module is used for docking with the GB28181 platform, the unmanned equipment platform and intelligent broadband equipment, and for converting the communication protocol of each platform into the SIP communication protocol, i.e., the second preset communication protocol. In addition, it converts the audio coding format in the first audio/video data into the PCMA format and the video coding format into H.264, so as to dock with the iFCP-I module.
In addition, the iSwitch module is also used for acquiring the device information of each main stream communication terminal at fixed intervals, and for updating the coding information of the first audio/video data corresponding to each main stream communication terminal to the converged communication platform through the iFCP-I module.
During real-time communication, the iSwitch module pulls the first audio and video data from each main stream communication terminal, decodes it, and pushes it to the iFCP-I module for the relevant encoding. Before decoding, parameters such as low delay, no buffer, audio encoder and video encoder can be set through ffmpeg (an open-source multimedia processing tool) to reduce the decoding time.
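The ffmpeg settings alluded to here might look as follows. This is a sketch: the flags are standard ffmpeg options, but their selection and values are assumptions, not quoted from the patent.

```python
# Input-side flags that minimise decode latency.
LOW_DELAY_FLAGS = [
    "-fflags", "nobuffer",    # "no buffer": do not queue input packets
    "-flags", "low_delay",    # "low delay": skip decoder-side buffering
    "-probesize", "32",       # probe as little input as possible
    "-analyzeduration", "0",  # skip the up-front stream analysis
]

def iswitch_cmd(input_url: str, output_url: str) -> list[str]:
    """Pull first a/v data, decode with minimal latency, and push it
    on for re-encoding; encoder choices mirror the preset formats."""
    return (["ffmpeg"] + LOW_DELAY_FLAGS +
            ["-i", input_url,
             "-c:v", "libx264", "-c:a", "pcm_alaw",  # video/audio encoder selection
             "-f", "rtsp", output_url])
```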
The iFCP-I module is used for encoding the second audio/video data pushed by the iSwitch module according to the SIP communication protocol and the encoding parameters, so as to meet the fusion requirement. The encoding parameters mainly include the video coding slice value, the audio sampling rate and the video coding buffer value.
It should be noted that in some specific implementations, the encoding parameters may further include one or more of Codec (video encoder selection), b:v (video bit rate), tune:v (video encoding optimization), Profile (video encoding profile), Level (video encoding level), Preset (video encoding speed/quality trade-off), pix_fmt (video pixel format) or Colorspace (video color space).
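For concreteness, one possible mapping of these optional parameters onto ffmpeg/libx264 options is shown below; every value is an illustrative assumption, not a figure from the patent:

```python
# Hypothetical defaults; tune per deployment.
EXTRA_ENCODE_OPTS = {
    "codec":      ("-c:v", "libx264"),        # video encoder selection
    "b:v":        ("-b:v", "1M"),             # video bit rate
    "tune:v":     ("-tune", "zerolatency"),   # encoding optimisation
    "profile":    ("-profile:v", "baseline"), # H.264 profile
    "level":      ("-level", "3.1"),          # H.264 level
    "preset":     ("-preset", "veryfast"),    # speed/quality trade-off
    "pix_fmt":    ("-pix_fmt", "yuv420p"),    # pixel format
    "colorspace": ("-colorspace", "bt709"),   # colour space
}

def as_args(opts: dict) -> list[str]:
    """Flatten the option table into an ffmpeg argument list."""
    return [tok for pair in opts.values() for tok in pair]
```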
The iFCP-II module is used for receiving the fourth audio and video data sent by the audio/video fusion module, decoding it, and then adjusting its code stream according to the network bandwidth. In the implementation process, the fourth audio/video data can be encoded into any one of a local-area-network transmission-rate code stream, an Internet transmission-rate code stream or a 4G transmission-rate code stream, so as to achieve low-delay transmission under different bandwidths.
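The three tiers could be captured as simple rate profiles; the figures below are assumptions, since the patent names the tiers but gives no numbers:

```python
# Assumed per-link transmission-rate profiles, in kbit/s.
RATE_PROFILES_KBPS = {"lan": 8000, "internet": 2000, "4g": 800}

def profile_for(link_type: str) -> int:
    """Pick the iFCP-II re-encode bit rate for a peer's link type,
    falling back to the most conservative (4G) tier."""
    return RATE_PROFILES_KBPS.get(link_type, RATE_PROFILES_KBPS["4g"])
```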
Further, the platform also comprises a streaming media service module and a streaming media playing module;
the input coding module is specifically used for acquiring current equipment information of all main stream communication terminals; based on the current equipment information, creating a corresponding video soft terminal for each main stream communication terminal; binding the third audio and video data with the corresponding video soft terminal;
the audio and video fusion module is specifically used for fusing the bound third audio and video data based on the video soft terminal to obtain fourth audio and video data; and/or
The input encoding module is specifically configured to bind audio data in the plurality of third audio/video data with an audio channel of the corresponding video soft terminal; binding video data in the third audio and video data with the video channels of the corresponding video soft terminals.
The output coding module is specifically used for sending fifth audio and video data to the third-party live broadcast platform; and/or
The streaming media service module is used for carrying out protocol encapsulation on the fifth audio/video data based on a first preset communication protocol; sending the packaged fifth audio and video data to a streaming media playing module;
the streaming media playing module is used for generating a unique hypertext markup language (HTML) page based on the packaged fifth audio and video data and the fixed network address of the high-definition wall terminal, and sending the unique HTML page to the high-definition wall terminal.
Specifically, reference is continued to FIG. 4. In this embodiment, the streaming media service module is constructed as an iMediaServer module, and the streaming media playing module is constructed as an iHDV-HTML5 module.
The iMediaServer module is used for receiving the audio and video data sent by the iFCP-II module and for performing protocol encapsulation on that data according to the protocol requirement of the high-definition wall terminal. For example, the encapsulation protocol may be RTMP, RTSP, WebRTC or HTTPS-FLV; to ensure low latency, the WebRTC protocol is preferred in this embodiment.
In addition, in some implementation processes, the iMediaServer module is further configured to record audio and video data according to a user requirement for subsequent playback.
The iHDV-HTML5 module is used for pulling the protocol-encapsulated audio and video data from the iMediaServer module, generating a unique HTML page according to the pulled audio/video data and the fixed network address of the high-definition wall terminal, and sending the unique HTML page to the high-definition wall terminal. The HTML page contains an H5 video player for playing the audio and video content in the audio/video data.
Further, in a specific embodiment, the iFCP-I module further includes an iSMDG sub-module. The iSMDG sub-module is used for creating a plurality of video soft terminals of the second preset communication protocol according to actual needs, and for binding the third audio/video data obtained after encoding by the iFCP-I module to the audio channel (AudioChannel) and video channel (VideoChannel) of the corresponding video soft terminal.
In addition, in this embodiment, the converged communication platform further includes a converged communication middle-platform module, which is used for scheduling control of the audio and video data among the modules. The iSMDG sub-module is also used for issuing the terminal information of the video soft terminals (whether they are online and whether audio/video data transmission is normal) to the converged communication middle platform for its scheduling control.
In this embodiment, the high-definition wall terminal includes a smart-city-service high-definition wall terminal, an emergency-command-service high-definition wall terminal, and a "hundred million project" high-definition wall terminal.
Further, the iSMDG sub-module is further configured to push the audio and video data in the converged communication platform to the iSwitch module, so that the iSwitch module returns the converged audio and video data to each access device of the converged communication platform.
Further, the platform also comprises a program control exchange module;
the program control exchange module is used for pulling a plurality of corresponding sixth audio/video data from the plurality of high-speed wireless communication VoLTE terminals; respectively carrying out protocol encapsulation on the plurality of sixth audio/video data based on a second preset communication protocol to obtain a plurality of seventh audio/video data; transmitting the seventh audio and video data to an audio and video fusion module;
the audio/video fusion module is further used for fusing a plurality of seventh audio/video data to obtain eighth audio/video data and sending the eighth audio/video data to the video terminal; and/or
The audio and video fusion module is also used for pulling a plurality of seventh audio and video data from the accessed gateway equipment; and/or
The input coding module is also used for acquiring the current network bandwidths of all the main stream communication terminals;
The output coding module is also used for carrying out code stream adjustment on the fourth audio/video data based on the current network bandwidth of the main stream communication terminal to obtain ninth audio/video data; transmitting the ninth audio and video data to a preliminary conversion module;
the primary conversion module is further used for converting the format of the ninth audio/video data into an audio/video format corresponding to the main stream communication terminal to obtain tenth audio/video data; and carrying out protocol encapsulation on the tenth audio/video data based on the communication protocol corresponding to the main stream communication terminal, and sending it back to all the corresponding main stream communication terminals.
Specifically, reference is continued to FIG. 4. In this embodiment, the program-controlled switching module is constructed as an iPBX module.
The iPBX module is used for rewriting the SDP into the format required by the second preset communication protocol. SDP (Session Description Protocol) is used to describe the media stream information required for a multimedia session, such as audio, video and data. SDP is commonly used in multimedia application scenarios such as VoIP (Voice over Internet Protocol) and video conferencing, and can describe the type, format, coding scheme, transmission protocol and network address of the media streams. In this embodiment, the contents to be rewritten in the SDP mainly include the audio format number, audio format name, audio code rate, number of audio channels, video format number, video format name, video code rate, video frame rate, video GOP, and the like.
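As a narrow illustration of such rewriting (audio only; regex-based and hypothetical — a real iPBX would also rewrite the video lines, code rates, frame rate and GOP), pinning an SDP offer to the platform's unified PCMA format could look like this:

```python
import re

PCMA_PT = 8  # static RTP payload type assigned to G.711 A-law (PCMA)

def rewrite_audio_sdp(sdp: str) -> str:
    """Force the audio m-line and its rtpmap to PCMA/8000."""
    sdp = re.sub(r"^(m=audio \d+ RTP/AVP)[ \d]+$",
                 rf"\1 {PCMA_PT}", sdp, flags=re.M)   # audio format number
    sdp = re.sub(r"^a=rtpmap:\d+ [\w-]+/\d+.*$",
                 f"a=rtpmap:{PCMA_PT} PCMA/8000", sdp,
                 count=1, flags=re.M)                 # audio format name/rate
    return sdp
```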
In addition, the iPBX module also docks with the audio/video fusion module to receive and send communication instructions and audio/video data. The communication instructions include a communication hang-up instruction or a communication connection request instruction.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or system. Without further limitation, an element preceded by "comprising a/an ..." does not exclude the presence of additional identical elements in the process, method, article or system that comprises that element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is preferred. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium (e.g. ROM/RAM, magnetic disk, optical disk), including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the embodiments of the present invention.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit its scope; any equivalent structure or equivalent process transformation made using the contents of this specification, and any direct or indirect application in other related technical fields, are likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A converged communication method, characterized in that the converged communication method comprises the steps of:
acquiring network bandwidth of opposite terminal equipment, and pulling a plurality of corresponding first audio/video data from a plurality of main stream communication terminals; the main stream communication terminal comprises a national standard equipment terminal, an unmanned equipment terminal, an intelligent equipment terminal or an emergency command terminal;
respectively converting the audio data and the video data in the plurality of first audio-video data into corresponding preset formats to obtain a plurality of second audio-video data;
encoding the second audio and video data based on the encoding parameters to obtain third audio and video data; the coding parameters comprise a video coding slice value, an audio sampling rate and a video coding buffer zone value, wherein the video coding slice value is smaller than a reference video coding slice value, the audio sampling rate is smaller than a reference audio sampling rate, and the video coding buffer zone value is smaller than the reference video coding buffer zone value;
Fusing the plurality of third audio and video data to obtain fourth audio and video data;
based on the network bandwidth, performing code stream adjustment on the fourth audio/video data to obtain fifth audio/video data;
and sending the fifth audio and video data to the opposite terminal equipment.
2. The converged communication method according to claim 1, wherein the step of fusing a plurality of the third audio-video data to obtain fourth audio-video data includes:
acquiring current equipment information of all the main stream communication terminals;
creating a corresponding video soft terminal for each of the mainstream communication terminals based on the current device information;
binding the third audio and video data with the corresponding video soft terminal;
and based on the video soft terminal, fusing the bound third audio and video data to obtain the fourth audio and video data.
3. The converged communication method of claim 2, wherein the step of binding the third audio/video data with the corresponding video soft terminal includes:
binding the audio data in the third audio-video data with the corresponding audio channels of the video soft terminal;
Binding the video data in the third audio and video data with the video channels of the corresponding video soft terminals.
4. The converged communication method of claim 1, wherein the opposite terminal equipment comprises a third-party live broadcast platform or a high-definition wall terminal;
the step of sending the fifth audio/video data to the opposite terminal equipment includes:
transmitting the fifth audio/video data to the third-party live broadcast platform; and/or
the step of sending the fifth audio/video data to the opposite terminal equipment includes:
carrying out protocol encapsulation on the fifth audio/video data based on a first preset communication protocol; the first preset communication protocol comprises any one of the Real-Time Messaging Protocol (RTMP), the Real-Time Streaming Protocol (RTSP), the Web Real-Time Communication (WebRTC) protocol, or FLV video streaming over HTTPS (HTTPS-FLV);
and generating a unique hypertext markup language (HTML) page based on the encapsulated fifth audio/video data and the fixed network address of the high-definition wall terminal, and sending the HTML page to the high-definition wall terminal.
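A sketch of the HTML page generation for the HTTPS-FLV case: the page embeds a player pointed at a stream URL derived from the wall terminal's fixed network address. flv.js is assumed here as the player library; the patent names the protocol but not a player, and the URL below is a placeholder.

    def build_wall_page(stream_url: str) -> str:
        """Render a minimal HTML page that plays an HTTPS-FLV stream
        with flv.js (an assumed choice, not the patent's)."""
        return f"""<!DOCTYPE html>
    <html><body>
      <video id="wall" muted autoplay></video>
      <script src="https://cdn.jsdelivr.net/npm/flv.js/dist/flv.min.js"></script>
      <script>
        if (flvjs.isSupported()) {{
          var player = flvjs.createPlayer({{ type: 'flv', url: '{stream_url}' }});
          player.attachMediaElement(document.getElementById('wall'));
          player.load();
          player.play();
        }}
      </script>
    </body></html>"""

    # e.g. build_wall_page("https://platform.example/live/fused.flv")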
5. The converged communication method of claim 1, wherein before the step of fusing the plurality of third audio/video data to obtain fourth audio/video data, the method further comprises:
pulling a plurality of corresponding sixth audio/video data from a plurality of Voice over LTE (VoLTE) terminals;
respectively carrying out protocol encapsulation on the plurality of sixth audio/video data based on a second preset communication protocol to obtain a plurality of seventh audio/video data; the second preset communication protocol is a universal audio/video transmission protocol;
and after the step of fusing the plurality of third audio/video data to obtain fourth audio/video data, the method further comprises:
fusing the seventh audio/video data to obtain eighth audio/video data, and sending the eighth audio/video data to the video terminal.
6. The converged communication method of claim 5, wherein before the step of fusing the plurality of third audio/video data to obtain fourth audio/video data, the method further comprises:
and pulling a plurality of seventh audio/video data from the accessed gateway equipment.
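Claims 5 and 6 only require that the VoLTE and gateway legs be re-wrapped in one common transport envelope before fusion; the patent does not name that universal protocol. The toy framing below stands in for it purely to show the step ordering, and every name in it is hypothetical.

    def encapsulate_universal(payload: bytes, stream_id: int) -> bytes:
        """Toy fixed-header framing standing in for the unspecified
        universal audio/video transmission protocol."""
        header = (b"UNIV"
                  + stream_id.to_bytes(4, "big")
                  + len(payload).to_bytes(4, "big"))
        return header + payload

    sixth = [b"volte-frame-a", b"volte-frame-b"]   # pulled from VoLTE terminals
    seventh = [encapsulate_universal(p, i) for i, p in enumerate(sixth)]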
7. The converged communication method of claim 1, wherein after the step of performing code stream adjustment on the fourth audio/video data based on the network bandwidth to obtain fifth audio/video data, the method further comprises:
acquiring the current network bandwidths of all the mainstream communication terminals;
performing code stream adjustment on the fourth audio/video data based on the current network bandwidth of each mainstream communication terminal to obtain ninth audio/video data;
converting the format of the ninth audio/video data into the audio/video format corresponding to the mainstream communication terminal to obtain tenth audio/video data;
and carrying out protocol encapsulation on the tenth audio/video data based on the communication protocol corresponding to the mainstream communication terminal, and sending the encapsulated tenth audio/video data back to all the corresponding mainstream communication terminals.
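The code stream adjustment in claims 1 and 7 reduces, in effect, to picking a target bitrate from a measured bandwidth. A minimal version follows; the safety margin and bounds are invented for illustration, since the claims only require adjusting to the current bandwidth.

    def adjust_bitrate(kbps_available: int, safety: float = 0.8,
                       floor: int = 256, ceiling: int = 8000) -> int:
        """Clamp a bandwidth-derived target bitrate; margin and bounds
        are illustrative assumptions."""
        return max(floor, min(int(kbps_available * safety), ceiling))

    # A terminal reporting a 3000 kbit/s downlink gets a 2400 kbit/s stream.
    assert adjust_bitrate(3000) == 2400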
8. A converged communication platform, the platform comprising: a preliminary conversion module, an input encoding module, an output encoding module and an audio/video fusion module;
the preliminary conversion module is used for acquiring network bandwidth of opposite terminal equipment and pulling a plurality of corresponding first audio/video data from a plurality of mainstream communication terminals; respectively converting the audio data and the video data in the plurality of first audio/video data into corresponding preset formats to obtain a plurality of second audio/video data; and carrying out protocol encapsulation on the plurality of second audio/video data based on a second preset communication protocol and sending them to the input encoding module;
the input encoding module is used for encoding the plurality of second audio/video data based on encoding parameters to obtain a plurality of third audio/video data; and carrying out protocol encapsulation on the plurality of third audio/video data based on the second preset communication protocol and sending them to the audio/video fusion module;
the audio/video fusion module is used for fusing the plurality of third audio/video data to obtain fourth audio/video data; and carrying out protocol encapsulation on the fourth audio/video data based on the second preset communication protocol and sending it to the output encoding module;
the output encoding module is used for carrying out code stream adjustment on the fourth audio/video data based on the network bandwidth to obtain fifth audio/video data; and sending the fifth audio/video data to the opposite terminal equipment.
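The four modules of claim 8 form a linear dataflow: preliminary conversion, then input encoding, then fusion, then output encoding, with each hop carried over the second preset protocol. A structural sketch of that wiring follows; the class and method names are invented, not the patent's.

    class Stage:
        """Base stage: transform a batch of streams and pass it on."""
        def process(self, streams, peer_bandwidth):
            raise NotImplementedError

    class Pipeline:
        """Chain of claim-8 modules (names and interface assumed)."""
        def __init__(self, *stages: Stage):
            self.stages = stages

        def run(self, first_av, peer_bandwidth):
            data = first_av
            for stage in self.stages:  # preliminary -> input enc -> fusion -> output enc
                data = stage.process(data, peer_bandwidth)
            return data                # fifth audio/video data, ready for the peer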
9. The converged communication platform of claim 8, wherein the platform further comprises: a streaming media service module and a streaming media playing module;
the opposite terminal equipment comprises a third-party live broadcast platform or a high-definition wall terminal;
the input encoding module is specifically used for acquiring current equipment information of all the mainstream communication terminals; creating a corresponding video soft terminal for each mainstream communication terminal based on the current equipment information; and binding the third audio/video data with the corresponding video soft terminals;
the audio/video fusion module is specifically configured to fuse the bound third audio/video data based on the video soft terminals, so as to obtain the fourth audio/video data; and/or
the input encoding module is specifically configured to bind the audio data in the plurality of third audio/video data with the audio channels of the corresponding video soft terminals, and to bind the video data in the plurality of third audio/video data with the video channels of the corresponding video soft terminals;
the output encoding module is specifically configured to send the fifth audio/video data to the third-party live broadcast platform; and/or
the streaming media service module is used for carrying out protocol encapsulation on the fifth audio/video data based on a first preset communication protocol, and sending the encapsulated fifth audio/video data to the streaming media playing module;
the streaming media playing module is used for generating a unique hypertext markup language (HTML) page based on the encapsulated fifth audio/video data and the fixed network address of the high-definition wall terminal, and sending the HTML page to the high-definition wall terminal.
10. The converged communication platform of claim 8, wherein the platform further comprises a program-controlled switching module;
the program-controlled switching module is used for pulling a plurality of corresponding sixth audio/video data from a plurality of Voice over LTE (VoLTE) terminals; respectively carrying out protocol encapsulation on the plurality of sixth audio/video data based on the second preset communication protocol to obtain a plurality of seventh audio/video data; and transmitting the seventh audio/video data to the audio/video fusion module;
the audio/video fusion module is further used for fusing the seventh audio/video data to obtain eighth audio/video data and sending the eighth audio/video data to the video terminal; and/or
the audio/video fusion module is further used for pulling a plurality of seventh audio/video data from the accessed gateway equipment; and/or
the input encoding module is further used for acquiring the current network bandwidths of all the mainstream communication terminals;
the output encoding module is further used for performing code stream adjustment on the fourth audio/video data based on the current network bandwidth of each mainstream communication terminal to obtain ninth audio/video data, and transmitting the ninth audio/video data to the preliminary conversion module;
the preliminary conversion module is further configured to convert the format of the ninth audio/video data into the audio/video format corresponding to the mainstream communication terminal to obtain tenth audio/video data, and to carry out protocol encapsulation on the tenth audio/video data based on the communication protocol corresponding to the mainstream communication terminal and send the encapsulated tenth audio/video data back to all the corresponding mainstream communication terminals.
CN202410060669.3A 2024-01-16 2024-01-16 Converged communication method and platform Pending CN117596442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410060669.3A CN117596442A (en) 2024-01-16 2024-01-16 Converged communication method and platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410060669.3A CN117596442A (en) 2024-01-16 2024-01-16 Converged communication method and platform

Publications (1)

Publication Number Publication Date
CN117596442A (en) 2024-02-23

Family

ID=89920406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410060669.3A Pending CN117596442A (en) 2024-01-16 2024-01-16 Converged communication method and platform

Country Status (1)

Country Link
CN (1) CN117596442A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006452A (en) * 2010-11-24 2011-04-06 中兴通讯股份有限公司 Method for monitoring terminal through IP network and MCU
CN102447875A (en) * 2010-09-30 2012-05-09 中兴通讯股份有限公司 Method and system for centralized monitoring of video session terminals and relevant devices
CN104349178A (en) * 2014-11-21 2015-02-11 赛特斯信息科技股份有限公司 System and method for required real-time transcoding and self-adaptive code rate stream media playing
CN104506883A (en) * 2014-12-11 2015-04-08 成都德芯数字科技有限公司 Audio and video encoder based on wide area network live broadcast and working method thereof
CN104967872A (en) * 2015-06-08 2015-10-07 青岛海信移动通信技术股份有限公司 Live broadcasting method and server based on dynamic self-adaptive code rate transport protocol HLS streaming media
CN105357591A (en) * 2015-11-16 2016-02-24 北京理工大学 QoE monitoring and optimization method for adaptive code rate video direct broadcast
CN111726651A (en) * 2020-07-03 2020-09-29 浪潮云信息技术股份公司 Audio and video stream live broadcasting method and system based on HILS protocol
CN115766348A (en) * 2022-12-26 2023-03-07 河钢数字技术股份有限公司 Multi-protocol video fusion gateway based on Internet of things

Similar Documents

Publication Title
JP2018186524A (en) Content transmitting device and content reproduction device
WO2009128528A1 (en) Server device, content distribution method, and program
US20110138018A1 (en) Mobile media server
CN102104762B (en) Media recording method, equipment and system of IMS (Internet Management Specification) video conference
US20100161716A1 (en) Method and apparatus for streaming multiple scalable coded video content to client devices at different encoding rates
WO2017138387A1 (en) Information processing device and information processing method
US20140297804A1 (en) Control of multimedia content streaming through client-server interactions
CN101999234A (en) Gateway device, method, and program
CN112752115B (en) Live broadcast data transmission method, device, equipment and medium
JP2005176352A (en) Wireless moving picture streaming file, method and system for moving picture streaming service of mobile communication terminal
CN112532945B (en) Multi-type media service fusion system
CN111741248B (en) Data transmission method, device, terminal equipment and storage medium
KR20080086262A (en) Method and apparatus for sharing digital contents, and digital contents sharing system using the method
CN111327580A (en) Message transmission method and device
CN113727144A (en) High-definition live broadcast system and streaming media method based on mixed cloud
KR102137858B1 (en) Transmission device, transmission method, reception device, reception method, and program
WO2010114092A1 (en) Distribution system and method, conversion device, and program
JP2005094769A (en) Apparatus and method for providing high speed download service of multimedia contents
CN117596442A (en) Converged communication method and platform
KR100502186B1 (en) HDTV internet broadcast service system
KR100820350B1 (en) Multi contaniner format integration streaming server and streaming method
Lohan et al. Content delivery and management in networked MPEG-4 system
JP2001359071A (en) Data distributor and method, and data distribution system
WO2008145679A2 (en) Method to convert a sequence of electronic documents and relative apparatus
KR101568317B1 (en) System for supporting hls protocol in ip cameras and the method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination