CN110611639A - Audio data processing method and device for streaming media conference - Google Patents


Info

Publication number
CN110611639A
CN110611639A (application CN201810613653.5A)
Authority
CN
China
Prior art keywords
audio data
terminal
streaming media
format
media server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810613653.5A
Other languages
Chinese (zh)
Inventor
张新博
李云鹏
谢文龙
付立友
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201810613653.5A
Publication of CN110611639A
Legal status: Pending

Classifications

    • H04L65/70: Media network packetisation (under H: Electricity; H04L: Transmission of digital information, e.g. telegraphic communication; H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication; H04L65/60: Network streaming of media packets)
    • H04L65/75: Media network packet handling (under the same H04L65/60 hierarchy)
    • H04N21/440218: Reformatting of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 (under H04N: Pictorial communication, e.g. television; H04N21/00: Selective content distribution; H04N21/43: Processing of content or additional data; H04N21/44: Processing of video elementary streams; H04N21/4402: Reformatting operations of video signals)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An embodiment of the invention provides an audio data processing method and device for a streaming media conference, applied to a streaming media server. The method comprises the following steps: the streaming media server receives first audio data sent by a first terminal; if the streaming media server determines that the first audio data is based on the internet protocol and its coding format is a first format, it sends the first audio data to a transcoder; the streaming media server acquires second audio data, whose coding format is a second format, returned by the transcoder (the transcoder obtains the second audio data by transcoding the first audio data) and takes it as the audio data to be sent; and the streaming media server sends the audio data to be sent to a second terminal. The process is simple and efficient, requires no large-scale changes to the original code, and offers good stability.

Description

Audio data processing method and device for streaming media conference
Technical Field
The present invention relates to the field of video networking technologies, and in particular, to an audio data processing method and an audio data processing apparatus for a streaming media conference.
Background
Streaming media refers to continuous, time-based media transmitted over networks using streaming technology, such as audio, video, or multimedia files. The rapid development and popularization of networks have provided strong market momentum for streaming media services, which are becoming increasingly popular. Streaming media technology is widely applied in internet information services such as multimedia news release, online live broadcast, network advertising, e-commerce, video on demand, distance education, telemedicine, network radio stations, and real-time video conferencing.
In the prior art, in a streaming media conference such as an audio conference, a video conference, or another multimedia conference, audio data received from a participant terminal is decoded into PCM (Pulse Code Modulation) data, and the PCM data is then processed; when audio data is sent to a participant terminal, the PCM data is encoded into a format supported by that terminal before being sent. All audio data transmitted in the streaming media conference is therefore decoded and re-encoded, making the process complex and inefficient.
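The unconditional decode/re-encode pipeline described above can be sketched in a few lines; this is a minimal illustration of the prior-art behavior, not the patent's implementation, and all function names are hypothetical placeholders.

```python
def decode_to_pcm(data: bytes, fmt: str) -> bytes:
    """Stand-in decoder: a real one would decode e.g. G711 or AAC frames to PCM."""
    return data  # placeholder body

def encode_from_pcm(pcm: bytes, fmt: str) -> bytes:
    """Stand-in encoder for the destination terminal's supported format."""
    return pcm  # placeholder body

def prior_art_forward(data: bytes, src_fmt: str, dst_fmt: str) -> bytes:
    # Decode and re-encode unconditionally, even when src_fmt == dst_fmt;
    # this is exactly the inefficiency the patent aims to remove.
    pcm = decode_to_pcm(data, src_fmt)
    return encode_from_pcm(pcm, dst_fmt)
```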
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide an audio data processing method for a streaming media conference, and a corresponding audio data processing apparatus, that overcome or at least partially solve these problems.
To solve the above problem, an embodiment of the present invention discloses an audio data processing method for a streaming media conference. The method is applied to a streaming media server that is connected to a transcoder, a first terminal, and a second terminal, where each of the first and second terminals is either a video networking terminal or an internet terminal. The method comprises:
the streaming media server receives first audio data sent by the first terminal;
if the streaming media server determines that the first audio data is based on the internet protocol and its coding format is a first format, the streaming media server sends the first audio data to the transcoder;
the streaming media server acquires second audio data, whose coding format is a second format, returned by the transcoder, and takes the second audio data as the audio data to be sent; the second audio data is obtained by the transcoder transcoding the first audio data;
and the streaming media server sends the audio data to be sent to the second terminal.
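The receive-side cases the method describes can be sketched as a single decision function. Everything here is a hedged illustration: the constant names, the `AudioData` container, and the stand-in `transcode` helper are assumptions, not from the patent.

```python
from dataclasses import dataclass

# Hypothetical constants; the preferred embodiment names G711 as the
# "first format" and AAC as the "second format".
INTERNET, VIDEO_NETWORK = "internet", "video_network"
G711, AAC = "g711", "aac"

@dataclass
class AudioData:
    protocol: str   # protocol the audio data is based on
    fmt: str        # audio coding format
    payload: bytes

def transcode(audio: AudioData) -> AudioData:
    """Stand-in for the external transcoder: first format -> second format."""
    return AudioData(INTERNET, AAC, audio.payload)

def prepare_outgoing(first: AudioData) -> AudioData:
    """Decide the 'audio data to be sent' for the three receive-side cases."""
    if first.protocol == INTERNET and first.fmt == G711:
        # Internet protocol + first format: hand off to the transcoder.
        return transcode(first)
    if first.protocol == VIDEO_NETWORK and first.fmt == AAC:
        # Video networking protocol + second format: re-wrap as internet data.
        return AudioData(INTERNET, AAC, first.payload)
    if first.protocol == INTERNET and first.fmt == AAC:
        # Internet protocol + second format: use as-is, no codec work at all.
        return first
    raise ValueError("combination not described by the method")
```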
Preferably, after the step of the streaming media server receiving the first audio data sent by the first terminal, the method further includes: if the streaming media server determines that the first audio data is based on a video networking protocol and its coding format is the second format, converting the first audio data into third audio data based on the internet protocol, and taking the third audio data as the audio data to be sent.
Preferably, after the step of the streaming media server receiving the first audio data sent by the first terminal, the method further includes: if the streaming media server determines that the first audio data is based on the internet protocol and its coding format is the second format, taking the first audio data as the audio data to be sent.
Preferably, the step of the streaming media server sending the audio data to be sent to the second terminal includes: if the streaming media server determines that the second terminal is an internet terminal supporting audio coding in the first format, sending the audio data to be sent to the transcoder; acquiring fourth audio data, whose coding format is the first format, returned by the transcoder; and sending the fourth audio data to the second terminal, where the fourth audio data is obtained by the transcoder transcoding the audio data to be sent.
Preferably, the step of the streaming media server sending the audio data to be sent to the second terminal includes: if the streaming media server determines that the second terminal is a video networking terminal, converting the audio data to be sent into fifth audio data based on the video networking protocol, and sending the fifth audio data to the second terminal.
Preferably, the step of the streaming media server sending the audio data to be sent to the second terminal includes: if the streaming media server determines that the second terminal is an internet terminal supporting audio coding in the second format, sending the audio data to be sent directly to the second terminal.
Preferably, the first format is the G711 format, and the second format is the AAC format.
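On the send side, the method chooses among three actions depending on the second terminal. A minimal sketch follows, assuming the G711/AAC mapping of the preferred embodiment; the function name and the action strings it returns are illustrative only, not the patent's terminology.

```python
G711, AAC = "g711", "aac"

def send_side_action(terminal_kind: str, supported_fmt: str) -> str:
    """Pick the send-side action for the audio data to be sent, which at this
    point is internet-protocol data coded in the second format (AAC)."""
    if terminal_kind == "internet" and supported_fmt == G711:
        # Hand the data to the transcoder (AAC -> G711), then send the result.
        return "transcode_to_g711_then_send"
    if terminal_kind == "video_network":
        # Convert the internet-protocol data into video-networking packets.
        return "convert_to_video_network_then_send"
    if terminal_kind == "internet" and supported_fmt == AAC:
        # The terminal already supports the second format: send as-is.
        return "send_as_is"
    raise ValueError("terminal description not covered by the method")
```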
In another aspect, an embodiment of the present invention further discloses an audio data processing device for a streaming media conference. The device is applied to a streaming media server that is connected to a transcoder, a first terminal, and a second terminal, where each of the first and second terminals is either a video networking terminal or an internet terminal. The streaming media server includes:
a receiving module, configured to receive first audio data sent by the first terminal;
a first sending module, configured to send the first audio data to the transcoder if it is determined that the first audio data is based on the internet protocol and its coding format is the first format;
an acquisition module, configured to acquire second audio data, whose coding format is the second format, returned by the transcoder, and to take the second audio data as the audio data to be sent; the second audio data is obtained by the transcoder transcoding the first audio data;
and a second sending module, configured to send the audio data to be sent to the second terminal.
Preferably, the streaming media server further comprises: a conversion module, configured to convert the first audio data into third audio data based on the internet protocol and take the third audio data as the audio data to be sent, if it is determined that the first audio data is based on a video networking protocol and its coding format is the second format.
Preferably, the streaming media server further comprises: a determining module, configured to take the first audio data as the audio data to be sent if it is determined that the first audio data is based on the internet protocol and its coding format is the second format.
In the embodiment of the invention, the streaming media server receives first audio data sent by the first terminal and, if it determines that the first audio data is based on the internet protocol and its coding format is the first format, sends the first audio data to the transcoder; the transcoder transcodes the first audio data into second audio data in the second format; the streaming media server acquires the second audio data returned by the transcoder, takes it as the audio data to be sent, and sends it to the second terminal. Thus, in the embodiment of the invention, not all audio data is encoded and decoded; encoding and decoding are performed only when transcoding is determined to be needed. The process is simple and efficient, requires no large-scale changes to the original code, and offers good stability.
Drawings
FIG. 1 is a schematic networking diagram of a video network of the present invention;
FIG. 2 is a schematic diagram of a hardware structure of a node server of the present invention;
FIG. 3 is a schematic diagram of a hardware structure of an access switch of the present invention;
FIG. 4 is a schematic diagram of a hardware structure of an Ethernet protocol conversion gateway of the present invention;
FIG. 5 is a flowchart of the steps of an audio data processing method for a streaming media conference according to a first embodiment of the present invention;
FIG. 6 is a flowchart of the steps of an audio data processing method for a streaming media conference according to a second embodiment of the present invention;
FIG. 7 is a flowchart of the steps of an audio data processing method for a streaming media conference according to a third embodiment of the present invention;
FIG. 8 is a block diagram of an audio data processing apparatus for a streaming media conference according to a fourth embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The video network is an important milestone in network development. It is a real-time network capable of transmitting high-definition video in real time, pushing many internet applications toward high-definition, face-to-face interaction.
The video network uses real-time high-definition video switching technology to integrate, on one network platform, dozens of required services such as video, voice, pictures, text, communication, and data. Examples include high-definition video conferencing, video surveillance, intelligent monitoring and analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, video on demand (VOD), television mail, Personal Video Recorder (PVR), intranet (self-operated) channels, intelligent video playout control, and information distribution, delivering high-definition video through a television or a computer.
To better understand the embodiments of the present invention, the video network is described below.
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
The network technology of the video network improves on traditional Ethernet to cope with the potentially enormous video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, the video networking technology adopts Packet Switching while satisfying the requirements of streaming. Video networking retains the flexibility, simplicity, and low cost of packet switching while providing the quality and security guarantees of circuit switching, achieving seamless, network-wide switched virtual circuits and seamless connection of data formats.
Switching Technology (Switching Technology)
The video network adopts the two advantages of Ethernet, asynchrony and packet switching, while eliminating Ethernet's defects on the premise of full compatibility. It provides end-to-end seamless connection across the whole network, connects directly to user terminals, and directly carries IP data packets. User data requires no format conversion anywhere on the network. The video network is a higher-level form of Ethernet: a real-time switching platform that can achieve network-wide, large-scale, real-time transmission of high-definition video, which the existing internet cannot, pushing many network video applications toward high definition and unification.
Server Technology (Server Technology)
The server technology of the video network and the unified video platform differs from that of traditional servers: its streaming media transmission is built on a connection-oriented basis, its data processing capability is independent of traffic and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video network and unified video platform is much simpler than general data processing, and efficiency is improved by more than a hundred times compared with a traditional server.
Storage Technology (Storage Technology)
To handle media content of very large capacity and very large traffic, the ultra-high-speed storage technology of the unified video platform adopts an advanced real-time operating system. Program information in a server instruction is mapped to specific hard-disk space, and media content no longer passes through the server but is sent instantly and directly to the user terminal, with a typical user waiting time of less than 0.2 seconds. Optimized sector distribution greatly reduces mechanical head seeking on the hard disk. Resource consumption is only 20% of an IP internet system of the same grade, while concurrent throughput is 3 times that of a traditional hard-disk array, improving overall efficiency by more than 10 times.
Network Security Technology (Network Security Technology)
The structural design of the video network eliminates, at the structural level, the network security problems that trouble the internet, through per-session independent service permission control and complete isolation of device and user data. It generally needs no antivirus programs or firewalls, avoids hacker and virus attacks, and provides users with a structurally worry-free secure network.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services with transmission: whether for a single user, a private-network user, or a network aggregate, the connection is established automatically in one step. User terminals, set-top boxes, or PCs connect directly to the unified video platform to obtain a variety of multimedia video services. The unified video platform replaces traditional, complex application programming with a menu-style configuration table, so complex applications can be realized with very little code, enabling essentially unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in FIG. 1, the video network is divided into an access network and a metropolitan area network.
The devices of the access network part can be mainly classified into 3 types: node servers, access switches, and terminals (including various set-top boxes, coding boards, memories, etc.). A node server is connected to access switches; an access switch may be connected to multiple terminals and may also be connected to an Ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, the devices of the metropolitan area network part can also be classified into 3 types: metropolitan area servers, node switches, and node servers. A metropolitan area server is connected to node switches, and a node switch may be connected to multiple node servers.
The node server is a node server of the access network part, namely the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
Video networking device classification
1.1 The devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, coding boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or a national network, a global network, etc.) and an access network.
1.2 The devices of the access network part can be mainly classified into 3 types: node servers, access switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, coding boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's steering information; if the queue of the packet buffer 206 is nearly full, it is discarded; the switching engine module 202 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 3, the network interface module mainly includes a network interface module (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303 and a CPU module 304;
wherein, the packet (uplink data) coming from the downlink network interface module 301 enters the packet detection module 305; the packet detection module 305 detects whether the Destination Address (DA), the Source Address (SA), the packet type, and the packet length of the packet meet the requirements, and if so, allocates a corresponding stream identifier (stream-id) and enters the switching engine module 303, otherwise, discards the stream identifier; the packet (downstream data) coming from the upstream network interface module 302 enters the switching engine module 303; the data packet coming from the CPU module 204 enters the switching engine module 303; the switching engine module 303 performs an operation of looking up the address table 306 on the incoming packet, thereby obtaining the direction information of the packet; if the packet entering the switching engine module 303 is from the downstream network interface to the upstream network interface, the packet is stored in the queue of the corresponding packet buffer 307 in association with the stream-id; if the queue of the packet buffer 307 is nearly full, it is discarded; if the packet entering the switching engine module 303 is not from the downlink network interface to the uplink network interface, the data packet is stored in the queue of the corresponding packet buffer 307 according to the guiding information of the packet; if the queue of the packet buffer 307 is nearly full, it is discarded.
The switching engine module 303 polls all packet-buffer queues; in this embodiment of the present invention, two cases are distinguished:
If the queue is from a downlink network interface to an uplink network interface, forwarding requires: 1) the port send buffer is not full; 2) the queue's packet counter is greater than zero; 3) a token generated by the rate control module has been obtained.
If the queue is not from a downlink network interface to an uplink network interface, forwarding requires: 1) the port send buffer is not full; 2) the queue's packet counter is greater than zero.
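The two polling cases reduce to one predicate. A hedged sketch with invented parameter names:

```python
def may_forward(send_buffer_full: bool, queued_packets: int,
                uplink_direction: bool, tokens: int) -> bool:
    """Forwarding test the access switch applies when polling one queue.
    Queues running from a downlink interface to an uplink interface need a
    rate-control token in addition to the two common conditions."""
    if send_buffer_full or queued_packets == 0:
        return False
    if uplink_direction:
        return tokens > 0  # token generated by the rate control module
    return True
```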
The rate control module 308 is configured by the CPU module 304 and, at programmable intervals, generates tokens for all packet-buffer queues going from downlink network interfaces to uplink network interfaces, so as to control the rate of uplink forwarding.
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
Ethernet protocol conversion gateway
As shown in FIG. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
A data packet arriving from the downlink network interface module 401 enters the packet detection module 405. The packet detection module 405 checks whether the packet's Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video-network destination address DA, video-network source address SA, video-network packet type, and packet length meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), and the MAC deletion module 410 strips the MAC DA, MAC SA, and length or frame type (2 bytes) before the packet enters the corresponding receive buffer; otherwise, the packet is discarded.
The downlink network interface module 401 monitors the send buffer of its port; if a packet is present, it obtains the Ethernet MAC DA of the corresponding terminal from the packet's video-network destination address DA, prepends the terminal's Ethernet MAC DA, the Ethernet protocol conversion gateway's MAC SA, and the Ethernet length or frame type, and sends the packet.
The other modules of the Ethernet protocol conversion gateway function similarly to those of the access switch.
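The MAC deletion module 410 and MAC adding module 409 essentially strip and prepend a 14-byte Ethernet header (MAC DA, MAC SA, and the 2-byte length/frame type). A sketch under that assumption; the default length/frame-type value is illustrative only.

```python
ETH_HEADER_LEN = 14  # MAC DA (6) + MAC SA (6) + length/frame type (2)

def strip_ethernet_header(frame: bytes) -> bytes:
    """Uplink path (MAC deletion module 410): drop the Ethernet header and
    keep the inner video-network packet."""
    return frame[ETH_HEADER_LEN:]

def add_ethernet_header(packet: bytes, terminal_mac_da: bytes,
                        gateway_mac_sa: bytes,
                        length_or_type: bytes = b"\x08\x00") -> bytes:
    """Downlink path (MAC adding module 409): prepend the terminal's MAC DA,
    the gateway's MAC SA, and the Ethernet length or frame type."""
    assert len(terminal_mac_da) == 6 and len(gateway_mac_sa) == 6
    assert len(length_or_type) == 2
    return terminal_mac_da + gateway_mac_sa + length_or_type + packet
```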
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 3 types: node servers, node switches, and metropolitan area servers. A node switch mainly comprises a network interface module, a switching engine module, and a CPU module; a metropolitan area server mainly comprises a network interface module, a switching engine module, and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly includes the following parts: Destination Address (DA), Source Address (SA), reserved bytes, payload (PDU), and CRC, laid out as follows:

DA | SA | Reserved | Payload | CRC
wherein:
The Destination Address (DA) consists of 8 bytes: the first byte indicates the packet type (e.g. protocol packet, multicast data packet, unicast data packet), allowing at most 256 types; the second through sixth bytes are the metropolitan area network address; and the seventh and eighth bytes are the access network address.
The Source Address (SA) also consists of 8 bytes and is defined the same way as the Destination Address (DA).
The reserved field consists of 2 bytes.
The payload has different lengths depending on the packet type: 64 bytes for the various protocol packets, and 32 + 1024 = 1056 bytes for unicast data packets; of course, the length is not limited to these 2 cases.
The CRC consists of 4 bytes and is calculated according to the standard Ethernet CRC algorithm.
2.2 metropolitan area network packet definition
The topology of the metropolitan area network is a graph, and there may be 2 or even more connections between two devices; that is, there may be more than 2 connections between a node switch and a node server, or between two node switches. However, the metropolitan area network address of each device is unique. To describe the connection relationships between devices accurately, an additional parameter is introduced in the embodiment of the present invention: a label, which uniquely describes one connection of a metropolitan area network device.
In this specification, the definition of a label is similar to that of an MPLS (Multi-Protocol Label Switching) label. Suppose there are two connections between device A and device B; then a packet from device A to device B has 2 possible labels, and a packet from device B to device A also has 2 possible labels. Labels are divided into incoming labels and outgoing labels: assuming a packet's label on entering device A (its incoming label) is 0x0000, its label on leaving device A (its outgoing label) may become 0x0001. The network-access process of the metropolitan area network is centrally controlled: both address allocation and label allocation are directed by the metropolitan area server, and the node switches and node servers execute passively. This differs from MPLS label allocation, which is the result of mutual negotiation between switch and server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA | SA | Reserved | Label | Payload (PDU) | CRC
Namely: Destination Address (DA), Source Address (SA), reserved bytes (Reserved), label, payload (PDU), and CRC. The format of the label may be defined as follows: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used; it is positioned between the reserved bytes and the payload of the packet.
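The packet layout above can be sketched as a packing routine. The 32-bit label with only the lower 16 bits used follows the text; the DA/SA/Reserved field widths (8/8/2 bytes) are illustrative assumptions not stated in this section.

```python
# Sketch of assembling a metro-network packet:
# DA | SA | Reserved | Label | Payload | CRC.
# Field widths for DA/SA/Reserved are assumed for illustration.
import struct
import zlib

def build_metro_packet(da: bytes, sa: bytes, label: int, payload: bytes) -> bytes:
    """Assemble DA + SA + Reserved + 32-bit Label + Payload, then append CRC-32."""
    label_field = label & 0xFFFF  # upper 16 bits reserved, only lower 16 used
    body = da + sa + b"\x00\x00" + struct.pack(">I", label_field) + payload
    return body + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)
```

Any bits set in the upper half of the supplied label are masked off, matching the "upper 16 bits reserved" rule.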
Based on the above characteristics of the video network, the audio data processing scheme for a streaming media conference provided by the embodiments of the present invention follows both the Internet protocol and the video networking protocol, and processes the audio data in a streaming media conference more conveniently.
The audio data processing scheme of the streaming media conference is applied to a streaming media server. The streaming media server in the embodiment of the present invention uses two network cards: one connected to the video network and the other connected to the Internet. The video network may include devices such as a video networking server (which may be the node server described above) and video networking terminals; the Internet may include devices such as a transcoder and Internet terminals.
Example one
Referring to fig. 5, a flowchart illustrating steps of an audio data processing method for a streaming media conference according to a first embodiment of the present invention is shown.
The audio data processing method for the streaming media conference of the embodiment of the invention can comprise the following steps:
step 501, a streaming media server receives first audio data sent by a first terminal.
The streaming media server in the embodiment of the present invention may be connected to the transcoder, the first terminal and the second terminal, respectively, where the first terminal is a video networking terminal or an internet terminal, and the second terminal is a video networking terminal or an internet terminal. In the specific implementation, the streaming media server and the video network terminal can be connected through the video network server, that is, the streaming media server is connected with the video network server, and the video network server is connected with the video network terminal.
The first terminal and the second terminal are terminals participating in the streaming media conference, and the first terminal and the second terminal can transmit data to the streaming media server and can also receive data transmitted by the streaming media server. In the embodiment of the present invention, a first terminal is taken as a party (i.e., a speaking party) that sends data to a streaming media server, and a second terminal is taken as a party (i.e., a participating party) that receives the data sent by the streaming media server.
In step 502, if the streaming media server determines that the first audio data is based on the internet protocol and the encoding format is the first format, the streaming media server sends the first audio data to the transcoder.
When a user of the first terminal speaks, the first terminal collects first audio data and sends it to the streaming media server. Different types of first terminals send first audio data in different formats. In the embodiment of the present invention, the streaming media server determines, according to the type of the first terminal (that is, according to the format of the first audio data), the audio data to be subsequently sent to the second terminal.
The internet terminal in the embodiment of the invention can comprise an internet terminal supporting the audio coding of the first format and an internet terminal supporting the audio coding of the second format, and the video network terminal can comprise a video network terminal supporting the audio coding of the second format. The first audio data sent by the Internet terminal supporting the first format audio coding is data which is based on an Internet protocol and has a coding format of the first format; the first audio data sent by the Internet terminal supporting the second format audio coding is data which is based on the Internet protocol and has a coding format of the second format; the first audio data sent by the video networking terminal supporting the second format audio coding is data which is based on the video networking protocol and has the coding format of the second format.
Step 503, the streaming media server obtains the second audio data with the coding format being the second format returned by the transcoder, and takes the second audio data as the audio data to be sent.
If the streaming media server determines that the first audio data is based on the Internet protocol and the coding format is the first format, it can determine that the first audio data should undergo format conversion, and therefore sends the first audio data to the transcoder. The transcoder receives the first audio data and transcodes it into second audio data whose coding format is the second format; the second audio data remains based on the Internet protocol, because no protocol conversion is performed at this stage. The streaming media server acquires the second audio data returned by the transcoder and takes it as the audio data to be sent.
And if the streaming media server judges that the first audio data is not the data which is based on the Internet protocol and has the encoding format of the first format, the streaming media server does not send the first audio data to a transcoder for transcoding.
Step 504, the streaming media server sends the audio data to be sent to the second terminal.
The streaming media server acquires the audio data to be sent and sends it to the second terminal; the second terminal performs the relevant processing and then plays the audio data, so that the user of the second terminal can listen to the speech of the user of the first terminal, thereby implementing the streaming media conference.
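The decision in steps 501-504 can be condensed into a small sketch. The string tags and the `transcode` callback are illustrative assumptions standing in for the server's actual packet inspection and the round trip through the external transcoder.

```python
# Minimal sketch of the embodiment-one flow (steps 501-504), assuming the
# protocol and coding format have already been identified as plain strings.
# `transcode` is a hypothetical stand-in for the external transcoder.
def handle_first_audio(protocol: str, fmt: str, data: bytes, transcode) -> bytes:
    """Return the audio data to be sent on to the second terminal."""
    if protocol == "internet" and fmt == "G711":  # first format over the Internet
        return transcode(data)                    # second audio data returned by transcoder
    return data                                   # otherwise: not sent for transcoding
```

Only the Internet-protocol/first-format case touches the transcoder, which is the efficiency point the following paragraph makes.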
In the embodiment of the present invention, not all audio data is encoded and decoded; encoding and decoding occur only when transcoding is judged to be necessary. The process is therefore simple and efficient, the original code is not changed on a large scale, and stability is better.
Example two
Referring to fig. 6, a flowchart illustrating steps of an audio data processing method for a streaming media conference according to a second embodiment of the present invention is shown.
The audio data processing method for the streaming media conference of the embodiment of the invention can comprise the following steps:
step 601, the streaming media server receives first audio data sent by the first terminal.
The first terminal may be an internet terminal or a video network terminal. The internet terminals can comprise internet terminals supporting audio coding of a first format and internet terminals supporting audio coding of a second format, and the video network terminals can comprise video network terminals supporting audio coding of the second format.
In the embodiment of the present invention, the first format may be the G711 format; G711 is an audio coding method specified by the International Telecommunication Union (ITU-T), also called ITU-T G.711. The second format may be the AAC (Advanced Audio Coding) format; AAC is a compression format designed specifically for audio data, and unlike MP3, it uses a newer encoding algorithm that is more efficient and offers a better performance/price ratio. The Internet terminal supporting G711-format audio coding may include an individual-soldier device; the Internet terminal supporting AAC-format audio coding may include a PCTV (personal computer television), a mobile APP, and the like; and the video networking terminal supporting AAC-format audio coding may include a set-top box (STB) and the like.
The first terminal collects first audio data when its user speaks and sends the first audio data to the streaming media server. If the first terminal is an Internet terminal, it sends the first audio data to the streaming media server through the Internet protocol, accessing the streaming media server through the network card connected to the Internet. If the first terminal is a video networking terminal, it sends the first audio data to the streaming media server through the video networking protocol; it should be noted that the video networking terminal first sends the first audio data to the video networking server through the video networking protocol, and the video networking server then sends it on to the streaming media server, which is accessed through the network card connected to the video network.
In step 602, the streaming media server determines the format of the first audio data.
In the embodiment of the present invention, the streaming media server determines, according to the type of the first terminal, that is, according to the format of the first audio data, audio data to be subsequently sent to the second terminal.
If the first terminal is an Internet terminal supporting audio coding in the first format, the first audio data is data based on the Internet protocol with the coding format being the first format; in this case, step 603 is performed subsequently. It should be noted that if the first terminal is the above-mentioned individual-soldier device, since the individual-soldier device follows a national standard protocol, the first audio data may first be converted by a protocol conversion server into data based on the corresponding Internet protocol.
If the first terminal is a video networking terminal supporting audio coding in the second format, the first audio data is data based on the video networking protocol with the coding format being the second format; in this case, step 604 is performed subsequently.
If the first terminal is an Internet terminal supporting audio coding in the second format, the first audio data is data based on the Internet protocol with the coding format being the second format; in this case, step 605 is performed subsequently.
Step 603, if the streaming media server determines that the first audio data is based on the internet protocol and the encoding format is the first format, the streaming media server sends the first audio data to the transcoder; and acquiring second audio data with a coding format of a second format returned by the transcoder, and taking the second audio data as audio data to be sent.
If the streaming media server determines that the first audio data is based on the Internet protocol and the coding format is the first format, it can determine that the first audio data needs format conversion. And since the first audio data is already based on the Internet protocol, it can determine that no protocol conversion is needed.
In a specific implementation, a transcoding process is preferably established. If the streaming media server determines that the first audio data is based on the Internet protocol and the coding format is the first format, it enters the transcoding process and sends the first audio data to the transcoder. The transcoder receives the first audio data and, upon detecting that its coding format is the first format, transcodes it into second audio data whose coding format is the second format. The streaming media server may establish an internal transcoding receiving thread, which acquires the second audio data returned by the transcoder. In this case, the second audio data is data based on the Internet protocol with the coding format being the second format, and the streaming media server takes it as the audio data to be sent.
In step 604, if the streaming media server determines that the first audio data is based on the video networking protocol and the encoding format is the second format, the streaming media server converts the first audio data into third audio data based on the internet protocol, and uses the third audio data as audio data to be sent.
If the streaming media server determines that the first audio data is based on the video networking protocol and the coding format is the second format, it can determine that the first audio data needs no format conversion. However, considering that the first audio data may subsequently be transmitted over the Internet, it can determine that the first audio data does need protocol conversion. The streaming media server may therefore convert the first audio data into third audio data based on the Internet protocol. In this case, the third audio data is data based on the Internet protocol with the coding format being the second format, and the streaming media server takes it as the audio data to be sent.
Step 605, if the streaming media server determines that the first audio data is based on the internet protocol and the encoding format is the second format, the streaming media server takes the first audio data as the audio data to be sent.
If the streaming media server determines that the first audio data is based on the Internet protocol and the coding format is the second format, it can determine that the first audio data needs no format conversion. And since the first audio data is already based on the Internet protocol, no protocol conversion is needed either. In this case, the streaming media server may use the first audio data directly as the audio data to be sent.
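The receive-side dispatch of steps 603-605 can be sketched as follows: whatever arrives, the audio data to be sent ends up as Internet-protocol data in the second (AAC) format. The string tags and the two callbacks are hypothetical stand-ins for the real transcoder interface and protocol converter.

```python
# Sketch of the receive-side dispatch (steps 603-605). Callback names
# (transcode_g711_to_aac, vnet_to_ip) are assumptions for illustration.
def normalize_incoming(protocol: str, fmt: str, data: bytes,
                       transcode_g711_to_aac, vnet_to_ip) -> bytes:
    if protocol == "internet" and fmt == "G711":
        return transcode_g711_to_aac(data)  # step 603: transcoding only
    if protocol == "vnet" and fmt == "AAC":
        return vnet_to_ip(data)             # step 604: protocol conversion only
    if protocol == "internet" and fmt == "AAC":
        return data                         # step 605: pass through unchanged
    raise ValueError("unsupported protocol/format combination")
```

Each branch changes exactly one of the two properties (format or protocol), or neither, which is why at most one conversion step is ever performed on receipt.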
In step 606, the streaming media server determines the type of the second terminal.
From steps 603 to 605 above, it can be seen that the audio data to be sent is always data based on the Internet protocol with the coding format being the second format. After obtaining the audio data to be sent, the streaming media server performs the corresponding processing and sends it to the second terminal. The second terminal may be an Internet terminal or a video networking terminal; Internet terminals may include those supporting audio coding in the first format and those supporting audio coding in the second format, and video networking terminals may include those supporting audio coding in the second format. The streaming media server may determine the type of the second terminal and send the audio data to be sent accordingly; that is, different second-terminal types lead to different sending procedures.
If the second terminal is an internet terminal supporting the first format audio coding, in this case, step 607 is subsequently performed. If the second terminal is a video network terminal, in this case, step 608 is performed subsequently. If the second terminal is an internet terminal supporting audio encoding in the second format, step 609 is subsequently performed in this case.
Step 607, if the streaming media server determines that the second terminal is an internet terminal supporting the first format audio coding, the streaming media server sends the audio data to be sent to the transcoder; and acquiring fourth audio data with the coding format of the first format returned by the transcoder, and sending the fourth audio data to the second terminal.
If the streaming media server determines that the second terminal is an Internet terminal supporting audio coding in the first format, then since the audio data to be sent is data based on the Internet protocol with the coding format being the second format, it can determine that the audio data to be sent needs format conversion but no protocol conversion.
In a specific implementation, preferably, if the streaming media server determines that the second terminal is an Internet terminal supporting audio coding in the first format, it enters the transcoding process and sends the audio data to be sent to the transcoder. The transcoder receives the audio data and, upon detecting that its coding format is the second format, transcodes it into fourth audio data whose coding format is the first format. The streaming media server acquires the returned fourth audio data through its internal transcoding receiving thread. In this case, the fourth audio data is data based on the Internet protocol with the coding format being the first format, and the streaming media server sends it to the second terminal.
In step 608, if the streaming media server determines that the second terminal is a video networking terminal, the streaming media server converts the audio data to be sent into fifth audio data based on a video networking protocol, and sends the fifth audio data to the second terminal.
If the streaming media server determines that the second terminal is a video networking terminal, i.e., a video networking terminal supporting audio coding in the second format, then since the audio data to be sent is data based on the Internet protocol with the coding format being the second format, it can determine that the audio data to be sent needs no format conversion but does need protocol conversion. The streaming media server may convert the audio data to be sent into fifth audio data based on the video networking protocol. In this case, the fifth audio data is data based on the video networking protocol with the coding format being the second format, and the streaming media server sends it to the second terminal. It should be noted that the streaming media server first sends the fifth audio data to the video networking server through the network card connected to the video network, and the video networking server then sends it to the video networking terminal through the video networking protocol.
Step 609, if the streaming media server determines that the second terminal is an internet terminal supporting the second format audio coding, the streaming media server sends the audio data to be sent to the second terminal.
If the streaming media server determines that the second terminal is an Internet terminal supporting audio coding in the second format, then since the audio data to be sent is data based on the Internet protocol with the coding format being the second format, it needs neither format conversion nor protocol conversion. In this case, the streaming media server may send the audio data to be sent to the second terminal directly.
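The send-side dispatch of steps 607-609 mirrors the receive side: the audio data to be sent is Internet-protocol AAC, and the second terminal's type decides what must happen to it. The type tags and callbacks below are illustrative assumptions.

```python
# Sketch of the send-side dispatch (steps 607-609). Callback names
# (transcode_aac_to_g711, ip_to_vnet) are assumptions for illustration.
def dispatch_outgoing(terminal_type: str, data: bytes,
                      transcode_aac_to_g711, ip_to_vnet) -> bytes:
    if terminal_type == "internet_g711":
        return transcode_aac_to_g711(data)  # step 607: transcoding only
    if terminal_type == "vnet":
        return ip_to_vnet(data)             # step 608: protocol conversion only
    if terminal_type == "internet_aac":
        return data                         # step 609: send as-is
    raise ValueError("unknown terminal type")
```

As on the receive side, at most one conversion (format or protocol) is performed per terminal, never both.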
After receiving the audio data sent by the streaming media server, the second terminal performs related processing, such as decoding, on the audio data, and the audio data can be played after the processing, so that a user of the second terminal can listen to the speech of the user of the first terminal, thereby implementing a streaming media conference.
In the embodiment of the present invention, when the streaming media server receives and sends audio data, the audio data is decoded into PCM only when special processing is needed, and the PCM audio data is then encoded into the corresponding coding format after processing. If no special processing is needed, no encoding or decoding is performed. The process is therefore simple and efficient, the original code is not changed on a large scale, and stability is better.
EXAMPLE III
Referring to fig. 7, a flowchart of an audio data processing method for a streaming media conference according to a third embodiment of the present invention is shown.
As shown in fig. 7, the embodiment of the present invention is described by taking as an example a streaming media server connected to one individual-soldier device and two AAC audio terminals. The individual-soldier device represents an Internet terminal supporting G711-format audio coding, and an AAC audio terminal represents an Internet terminal supporting AAC-format audio coding.
If the individual-soldier device is the first terminal and the two AAC audio terminals are second terminals, the audio data processing method of the streaming media conference in the embodiment of the present invention may include the following steps:
(1) The individual-soldier device collects audio data and sends the G711-format audio data to the streaming media server.
(2) The streaming media server enters the audio transcoding process and writes the G711-format audio data into a preset memory map.
(3) The transcoder reads the audio data from the memory map in real time, transcodes the G711-format audio data into AAC-format audio data, and writes the AAC-format audio data back into the memory map.
(4) A transcoding receiving thread started in the streaming media server reads the transcoded AAC-format audio data from the memory map in real time and sends it to the video conference module for processing.
(5) The video conference module of the streaming media server sends the AAC-format audio data to the two AAC audio terminals respectively.
If one of the AAC audio terminals is the first terminal, and the individual-soldier device and the other AAC audio terminal are second terminals, the audio data processing method for the streaming media conference according to the embodiment of the present invention may include the following steps:
(1) The AAC audio terminal serving as the first terminal collects audio data and sends the AAC-format audio data to the streaming media server.
(2) The streaming media server sends the AAC-format audio data to the video conference module for processing.
(3) The video conference module of the streaming media server sends the AAC-format audio data to the AAC audio terminal serving as the second terminal.
(4) The video conference module of the streaming media server sends the AAC-format audio data into the audio transcoding process, and the streaming media server writes the AAC-format audio data into the preset memory map.
(5) The transcoder reads the audio data from the memory map in real time, transcodes the AAC-format audio data into G711-format audio data, and writes the G711-format audio data back into the memory map.
(6) The transcoding receiving thread started in the streaming media server reads the transcoded G711-format audio data from the memory map in real time, and the streaming media server sends the G711-format audio data to the individual-soldier device.
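The memory-mapped handoff between the streaming media server and the transcoding process can be illustrated with a minimal sketch. For a self-contained demo, both roles run in one process over a temporary file; the length-prefixed framing, buffer size, and file are assumptions, not details from this patent.

```python
# Sketch of the memory-map exchange: the server writes a length-prefixed
# frame into a shared mapping, the transcoder side reads it, and writes the
# transcoded frame back for the receiving thread. Framing and sizes are
# illustrative assumptions; both roles run in one process here.
import mmap
import struct
import tempfile

FRAME_AREA = 64 * 1024  # size of the shared mapping (assumed)

def write_frame(mm: mmap.mmap, data: bytes) -> None:
    """Write a 4-byte length prefix followed by the frame."""
    mm.seek(0)
    mm.write(struct.pack(">I", len(data)) + data)

def read_frame(mm: mmap.mmap) -> bytes:
    """Read back the length-prefixed frame."""
    mm.seek(0)
    (length,) = struct.unpack(">I", mm.read(4))
    return mm.read(length)

# Demo: round-trip one fake frame through the mapping.
with tempfile.TemporaryFile() as f:
    f.truncate(FRAME_AREA)
    mm = mmap.mmap(f.fileno(), FRAME_AREA)
    write_frame(mm, b"\x55" * 160)       # one 20 ms G.711 frame is 160 bytes
    g711 = read_frame(mm)                # "transcoder" side picks it up
    write_frame(mm, b"AAC:" + g711)      # stand-in for the real transcoding
    assert read_frame(mm).startswith(b"AAC:")  # receiving thread gets result
    mm.close()
```

A real deployment would add synchronization (e.g. a semaphore or ring indices) between the two processes, which this sketch omits.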
The above process can be applied to a mobile command vehicle project. An individual-soldier device whose audio coding format is G711 can join video conference services after accessing the streaming media, but because the sound of the other participants in the streaming media conference is in the AAC coding format, without transcoding the other participants cannot hear the individual-soldier device and the individual-soldier device cannot hear the other participants. The embodiment of the present invention therefore enables an individual-soldier device supporting the G711 coding format to join a streaming media video conference by the above method. In a conference, G711 audio data from a participant is not directly forwarded by the business service but is sent to the transcoding process; the transcoding receiving thread continuously obtains the transcoded AAC audio data from the transcoding process, and that data undergoes conference service processing together with the AAC audio data of the other participants. When the video conference sends audio data to the individual-soldier device, the audio data is likewise sent to the transcoding process, and the transcoding receiving thread obtains the transcoded G711 audio data from the transcoding process and sends it to the corresponding individual-soldier device.
The embodiment of the present invention realizes audio transcoding: through cross-process conversion between the AAC and G711 audio coding formats, the streaming media conference can support conferencing with G711-coded terminals. This provides technical support for the mobile command vehicle project and a technical framework for supporting other audio coding formats in streaming media. Running the transcoder in a separate process also improves the fault tolerance of the streaming media: audio transcoding does not affect the stability of the streaming media.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Example four
Referring to fig. 8, a block diagram of an audio data processing apparatus for a streaming media conference according to a fourth embodiment of the present invention is shown. The device can be applied to a streaming media server, the streaming media server is respectively connected with a transcoder, a first terminal and a second terminal, the first terminal is a video networking terminal or an internet terminal, and the second terminal is a video networking terminal or an internet terminal.
The audio data processing device of the embodiment of the invention can comprise the following modules in the streaming media server:
a receiving module 801, configured to receive first audio data sent by the first terminal;
a first sending module 802, configured to send the first audio data to the transcoder if it is determined that the first audio data is based on an internet protocol and the encoding format is a first format;
an obtaining module 803, configured to obtain second audio data with a second coding format returned by the transcoder, and use the second audio data as audio data to be sent; the second audio data is obtained by transcoding the first audio data through the transcoder;
a second sending module 804, configured to send the audio data to be sent to the second terminal.
Preferably, the streaming media server further comprises: and the conversion module is used for converting the first audio data into third audio data based on an internet protocol and taking the third audio data as audio data to be sent if the first audio data is judged to be based on a video networking protocol and the coding format is judged to be the second format.
Preferably, the streaming media server further comprises: and the determining module is used for taking the first audio data as the audio data to be sent if the first audio data is judged to be based on the Internet protocol and the coding format is the second format.
Preferably, the second sending module includes: the transcoding sending unit is used for sending the audio data to be sent to the transcoder if the second terminal is judged to be the internet terminal supporting the first format audio coding; the data acquisition unit is used for acquiring fourth audio data which is returned by the transcoder and has the coding format of the first format, and sending the fourth audio data to the second terminal; and the fourth audio data is obtained by transcoding the audio data to be sent by the transcoder.
Preferably, the second sending module includes: and the protocol conversion unit is used for converting the audio data to be sent into fifth audio data based on a video networking protocol and sending the fifth audio data to the second terminal if the second terminal is judged to be the video networking terminal.
Preferably, the second sending module includes: and the data sending unit is used for sending the audio data to be sent to the second terminal if the second terminal is judged to be the internet terminal supporting the second format audio coding.
Preferably, the first format is a G711 format, and the second format is an AAC format.
In the embodiment of the present invention, not all audio data is encoded and decoded; encoding and decoding occur only when transcoding is judged to be necessary. The process is therefore simple and efficient, the original code is not changed on a large scale, and stability is better.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, those skilled in the art, once apprised of the basic inventive concepts, may make additional variations and modifications to these embodiments. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The audio data processing method and the audio data processing apparatus for a streaming media conference provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the invention, and the description of the above embodiments is intended only to aid in understanding the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. An audio data processing method for a streaming media conference, applied to a streaming media server, wherein the streaming media server is connected to a transcoder, a first terminal, and a second terminal, respectively, the first terminal being a video networking terminal or an internet terminal and the second terminal being a video networking terminal or an internet terminal, the method comprising:
the streaming media server receives first audio data sent by the first terminal;
if the streaming media server determines that the first audio data is based on the internet protocol and the coding format is the first format, the streaming media server sends the first audio data to the transcoder;
the streaming media server acquires second audio data in the second coding format returned by the transcoder and takes the second audio data as the audio data to be sent, wherein the second audio data is obtained by the transcoder transcoding the first audio data;
and the streaming media server sends the audio data to be sent to the second terminal.
2. The method according to claim 1, further comprising, after the step of the streaming media server receiving the first audio data sent by the first terminal:
if the streaming media server determines that the first audio data is based on a video networking protocol and the coding format is the second format, converting the first audio data into third audio data based on an internet protocol, and taking the third audio data as the audio data to be sent.
3. The method according to claim 1, further comprising, after the step of the streaming media server receiving the first audio data sent by the first terminal:
if the streaming media server determines that the first audio data is based on the internet protocol and the coding format is the second format, the streaming media server takes the first audio data as the audio data to be sent.
4. The method according to claim 1, wherein the step of the streaming media server sending the audio data to be sent to the second terminal comprises:
if the streaming media server determines that the second terminal is an internet terminal supporting first-format audio coding, the streaming media server sends the audio data to be sent to the transcoder;
the streaming media server acquires fourth audio data in the first coding format returned by the transcoder and sends the fourth audio data to the second terminal, wherein the fourth audio data is obtained by the transcoder transcoding the audio data to be sent.
5. The method according to claim 1, wherein the step of the streaming media server sending the audio data to be sent to the second terminal comprises:
if the streaming media server determines that the second terminal is a video networking terminal, converting the audio data to be sent into fifth audio data based on a video networking protocol, and sending the fifth audio data to the second terminal.
6. The method according to claim 1, wherein the step of the streaming media server sending the audio data to be sent to the second terminal comprises:
if the streaming media server determines that the second terminal is an internet terminal supporting second-format audio coding, the streaming media server sends the audio data to be sent to the second terminal.
7. The method of claim 1, wherein the first format is the G711 format and the second format is the AAC format.
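The decision flow of claims 1 through 6 can be sketched in code. This is a minimal illustrative sketch, not the patent's implementation: all type names, the `Transcoder` interface, and the payload handling are assumptions, and actual G711/AAC conversion is elided.

```python
# Illustrative sketch of the routing logic in claims 1-6.
# All names and interfaces are hypothetical; payload conversion is elided.
from dataclasses import dataclass
from enum import Enum

class Protocol(Enum):
    INTERNET = "internet"
    VIDEO_NETWORK = "video_network"  # the "video networking" protocol of the claims

class Codec(Enum):
    G711 = "g711"  # the "first format" (claim 7)
    AAC = "aac"    # the "second format" (claim 7)

@dataclass
class AudioData:
    protocol: Protocol
    codec: Codec
    payload: bytes

class Transcoder:
    """Stand-in for the external transcoder connected to the server."""
    def transcode(self, data: AudioData, target: Codec) -> AudioData:
        # Real codec conversion elided; only the format tag changes here.
        return AudioData(data.protocol, target, data.payload)

def prepare_outbound(data: AudioData, transcoder: Transcoder) -> AudioData:
    """Normalize inbound first audio data into the audio data to be sent (claims 1-3)."""
    if data.protocol is Protocol.INTERNET and data.codec is Codec.G711:
        return transcoder.transcode(data, Codec.AAC)                  # claim 1
    if data.protocol is Protocol.VIDEO_NETWORK and data.codec is Codec.AAC:
        return AudioData(Protocol.INTERNET, Codec.AAC, data.payload)  # claim 2
    return data                                                       # claim 3

def adapt_for_terminal(data: AudioData, terminal_protocol: Protocol,
                       supports_g711: bool, transcoder: Transcoder) -> AudioData:
    """Adapt the audio data to be sent to the second terminal (claims 4-6)."""
    if terminal_protocol is Protocol.VIDEO_NETWORK:
        return AudioData(Protocol.VIDEO_NETWORK, data.codec, data.payload)  # claim 5
    if supports_g711:
        return transcoder.transcode(data, Codec.G711)                 # claim 4
    return data                                                       # claim 6
```

Under these assumptions, every outbound path first normalizes to internet-protocol AAC, so only one transcoding direction per leg is ever needed.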
8. An audio data processing apparatus for a streaming media conference, applied to a streaming media server, wherein the streaming media server is connected to a transcoder, a first terminal, and a second terminal, respectively, the first terminal being a video networking terminal or an internet terminal and the second terminal being a video networking terminal or an internet terminal, the streaming media server comprising:
a receiving module, configured to receive first audio data sent by the first terminal;
a first sending module, configured to send the first audio data to the transcoder if it is determined that the first audio data is based on an internet protocol and the coding format is the first format;
an acquisition module, configured to acquire second audio data in the second coding format returned by the transcoder and to take the second audio data as the audio data to be sent, wherein the second audio data is obtained by the transcoder transcoding the first audio data; and
a second sending module, configured to send the audio data to be sent to the second terminal.
9. The apparatus of claim 8, wherein the streaming media server further comprises:
a conversion module, configured to convert the first audio data into third audio data based on an internet protocol and to take the third audio data as the audio data to be sent, if it is determined that the first audio data is based on a video networking protocol and the coding format is the second format.
10. The apparatus of claim 8, wherein the streaming media server further comprises:
a determining module, configured to take the first audio data as the audio data to be sent if it is determined that the first audio data is based on the internet protocol and the coding format is the second format.
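The module decomposition of claims 8 through 10 can be wired together as shown below. This is a hypothetical sketch: the class, the dictionary-based audio representation, and the injected `transcode` callable are all illustrative assumptions, not the patent's actual interfaces.

```python
# Illustrative wiring of the modules named in claims 8-10.
# All interfaces are assumptions for the sketch.
from typing import Callable

class StreamingMediaServer:
    def __init__(self, transcode: Callable[[bytes, str], bytes]):
        self.transcode = transcode  # stand-in for the connected transcoder

    # receiving module (claim 8): accept first audio data from the first terminal
    def receive(self, audio: dict) -> dict:
        return audio

    # first sending module (claim 8): internet-protocol G711 goes to the transcoder
    def needs_transcoding(self, audio: dict) -> bool:
        return audio["protocol"] == "internet" and audio["codec"] == "g711"

    # acquisition module (claim 8): AAC data returned by the transcoder
    # becomes the audio data to be sent
    def acquire(self, audio: dict) -> dict:
        payload = self.transcode(audio["payload"], "aac")
        return {"protocol": audio["protocol"], "codec": "aac", "payload": payload}

    # Dispatch covering the conversion module (claim 9) and
    # the determining module (claim 10).
    def to_outbound(self, audio: dict) -> dict:
        audio = self.receive(audio)
        if self.needs_transcoding(audio):
            return self.acquire(audio)                    # claim 8 path
        if audio["protocol"] == "video_network" and audio["codec"] == "aac":
            return {**audio, "protocol": "internet"}      # claim 9: conversion module
        return audio                                      # claim 10: determining module
```

Keeping each claim's module as a separate method mirrors the apparatus claims: the dispatch in `to_outbound` is the only place the three branches meet.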
CN201810613653.5A 2018-06-14 2018-06-14 Audio data processing method and device for streaming media conference Pending CN110611639A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810613653.5A CN110611639A (en) 2018-06-14 2018-06-14 Audio data processing method and device for streaming media conference

Publications (1)

Publication Number Publication Date
CN110611639A true CN110611639A (en) 2019-12-24

Family

ID=68887868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810613653.5A Pending CN110611639A (en) 2018-06-14 2018-06-14 Audio data processing method and device for streaming media conference

Country Status (1)

Country Link
CN (1) CN110611639A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1668109A (en) * 2004-03-10 2005-09-14 浙江大学 Adaptive video transcoding gateway having multiple transforming functions
CN103905834A (en) * 2014-03-13 2014-07-02 深圳创维-Rgb电子有限公司 Voice data coded format conversion method and device
CN106162040A (en) * 2015-03-30 2016-11-23 北京视联动力国际信息技术有限公司 The method and apparatus that video conference accesses in many ways
CN106331581A (en) * 2015-07-06 2017-01-11 北京视联动力国际信息技术有限公司 Method and device for communication between mobile terminal and video networking terminal
CN106550282A (en) * 2015-09-17 2017-03-29 北京视联动力国际信息技术有限公司 A kind of player method and system of video data
US20170272483A1 (en) * 2016-03-15 2017-09-21 Adobe Systems Incorporated Digital Content Streaming to Loss Intolerant Streaming Clients
WO2017169890A1 (en) * 2016-03-31 2017-10-05 ソニー株式会社 Information processing device and method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111182324A (en) * 2020-01-13 2020-05-19 张益兰 Video data processing method and server
CN111755017A (en) * 2020-07-06 2020-10-09 全时云商务服务股份有限公司 Audio recording method and device for cloud conference, server and storage medium
CN111755017B (en) * 2020-07-06 2021-01-26 全时云商务服务股份有限公司 Audio recording method and device for cloud conference, server and storage medium
CN113645485A (en) * 2021-07-29 2021-11-12 长沙千视电子科技有限公司 Method and device for realizing conversion from any streaming media protocol to NDI (network data interface)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191224