WO2023231723A1 - Streaming media data processing method and system - Google Patents

Streaming media data processing method and system

Info

Publication number
WO2023231723A1
Authority
WO
WIPO (PCT)
Prior art keywords
code stream
streaming media
stream data
data
thread
Prior art date
Application number
PCT/CN2023/093058
Other languages
English (en)
French (fr)
Inventor
陈俊江
涂英哲
张胜文
陈勇
卢建
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2023231723A1 publication Critical patent/WO2023231723A1/zh

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/222 Secondary servers, e.g. proxy server, cable television Head-end
    • H04N21/2225 Local VOD servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

Definitions

  • Embodiments of the present invention relate to the field of streaming media data processing, and specifically to a streaming media data processing method and system.
  • At present, fifth-generation mobile communication technology (5G) is gradually entering the commercial era; this has brought profound impact and change to the global audio and video industry and has made streaming media systems serving products and services such as video conferencing, video IoT, and video middle platforms an important mechanism for information dissemination. The quantitative change in speed brought by 5G will drive qualitative changes across the entire industry and will also help accelerate the development of streaming media systems.
  • the streaming media system in the industry has live broadcast, recording, playback, etc. as its main functions, and its receiving and forwarding performance has become an important indicator to measure the quality of the streaming media system.
  • the receiving and forwarding speed in the industry can currently reach 2 Gbps, but raising it further is very difficult.
  • under high-concurrency, heavy-traffic conditions, its processing capacity is greatly limited: if data packets are not received fast enough, packets are lost at the system level; if data packets are not forwarded and processed fast enough, packets are lost at the application level. The receiving and forwarding performance of the streaming media system has therefore become a major bottleneck restricting its development.
  • Embodiments of the present invention provide a streaming media data processing method and system to at least solve the problem in related technologies of packet loss in receiving and processing forwarding link data of streaming media data under large traffic conditions.
  • a streaming media data processing method including: integrating code stream data and message signaling into the same thread task channel according to different descriptors, and caching them in the system buffer area; a packet receiving thread obtaining the code stream data from the system buffer area and storing it in the streaming media internal buffer area; and different types of business threads reading the code stream data from the streaming media internal buffer area.
  • a streaming media data processing system including: a data processing module configured to integrate code stream data and message signaling into the same thread task channel according to different descriptors and cache them in the system buffer area; a data acquisition module configured to obtain the code stream data from the system buffer area and store it in the streaming media internal buffer area; and a data reading module configured to read the code stream data from the streaming media internal buffer area according to the different types of business threads.
  • a computer-readable storage medium is also provided.
  • a computer program is stored in the computer-readable storage medium, wherein the computer program is configured to execute the steps in any of the above method embodiments when run.
  • an electronic device including a memory and a processor.
  • a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
  • Figure 1 is a hardware structure block diagram of a mobile terminal of a streaming media data processing method according to an embodiment of the present invention
  • Figure 2 is an operating network architecture diagram of a streaming media data processing method according to an embodiment of the present invention
  • Figure 3 is a flow chart of a streaming media data processing method according to an embodiment of the present invention.
  • Figure 4 is a flow chart of a business thread reading code stream data according to an embodiment of the present invention.
  • Figure 5 is a flow chart of a streaming media data processing method according to an embodiment of the present invention.
  • FIG. 6 is a structural block diagram of a streaming media data processing system according to an embodiment of the present invention.
  • Figure 7 is a structural block diagram of a data processing module according to an embodiment of the present invention.
  • Figure 8 is a structural block diagram of a data reading module according to an embodiment of the present invention.
  • FIG. 9 is a structural block diagram of a streaming media data processing system according to an embodiment of the present invention.
  • Figure 10 is a flow chart of streaming media data processing according to a scenario embodiment of the present invention.
  • Figure 11 is a schematic diagram of a thread management mechanism according to a scenario embodiment of the present invention.
  • Figure 12 is a schematic diagram of the principle of vertical and horizontal packet retrieval technology used by the packet collection thread according to the scenario embodiment of the present invention.
  • Figure 13 is a schematic diagram of the technical principle of business thread integration with producers and consumers according to a scenario embodiment of the present invention.
  • FIG. 1 is a hardware structure block diagram of a mobile terminal of a streaming media data processing method according to an embodiment of the present invention.
  • the mobile terminal may include one or more (only one is shown in Figure 1) processors 102 (the processor 102 may include but is not limited to a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, wherein the above-mentioned mobile terminal may also include a transmission device 106 and an input and output device 108 for communication functions.
  • the structure shown in Figure 1 is only illustrative, and it does not limit the structure of the above-mentioned mobile terminal.
  • the mobile terminal may also include more or fewer components than shown in FIG. 1 , or have a different configuration than shown in FIG. 1 .
  • the memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the streaming media data processing method in the embodiment of the present invention.
  • the processor 102 runs the computer program stored in the memory 104, thereby executing various functional applications and data processing, that is, implementing the above methods.
  • Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include memory located remotely relative to the processor 102, and these remote memories may be connected to the mobile terminal through a network. Examples of the above-mentioned networks include but are not limited to the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
  • the transmission device 106 is used to receive or send data via a network.
  • Specific examples of the above-mentioned network may include a wireless network provided by a communication provider of the mobile terminal.
  • the transmission device 106 includes a network adapter (Network Interface Controller, NIC for short), which can be connected to other network devices through a base station to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (Radio Frequency, RF for short) module, which is used to communicate with the Internet wirelessly.
  • the embodiment of the present application can run on the network architecture shown in Figure 2.
  • the network architecture runs on a server.
  • the server can be a physical machine, a virtual machine, or a Docker container environment.
  • the network architecture is divided into two parts: kernel space and user space.
  • the kernel space refers to the environment in which the operating system and drivers run. After the code stream data enters the server, it will first enter the kernel space.
  • User space refers to the environment in which streaming media runs. Streaming media will get the stream data from the kernel space to the user space.
  • Between kernel space and user space is the Socket Buffer system buffer area.
  • the system cache area is used to implement temporary caching of code stream data when it transitions from kernel space to user space.
  • the user space includes various functional modules of streaming media: packet collection thread RECEIVE, streaming media internal buffer rBuffer, recording thread RECORD, storage IO buffer IO Buffer, and live broadcast thread PLAY.
  • FIG. 3 is a flow chart of the streaming media data processing method according to an embodiment of the present invention. As shown in Figure 3, the flow includes the following steps:
  • Step S302 integrate the code stream data and message signaling into the same thread task channel according to different descriptors, and cache them in the system cache area;
  • Step S304 The packet collection thread obtains the code stream data from the system buffer area and stores it in the streaming media internal buffer area;
  • Step S306 Different types of service threads read the code stream data from the internal buffer area of the streaming media.
  • through the above steps, the code stream data and message signaling are integrated into the same thread task channel according to different descriptors and cached in the system buffer area, which improves the performance of streaming media data reception; the packet receiving thread obtains the code stream data from the system buffer area and stores it in the streaming media internal buffer area, and different types of business threads read the code stream data from the streaming media internal buffer area, which improves the processing and forwarding performance of streaming media data and thus increases the receiving and forwarding speed of streaming media data.
  • the execution subject of the above steps may be a base station, a terminal, etc., but is not limited thereto.
  • the code stream data and the message signaling are distinguished by socket descriptors and named pipe descriptors respectively in the same thread task channel.
  • the packet receiving thread obtaining the code stream data from the system buffer area includes: the data packet structure of the code stream data is passed by pointer, and the packet receiving thread uses an asynchronous IO event triggering mechanism to obtain the code stream data from the system buffer area, adjusting the number of packets fetched and the maximum number of events according to the traffic volume of the code stream data.
  • the packet receiving thread obtaining the code stream data from the system buffer area further includes: using multiple packet receiving threads to obtain the code stream data from the system buffer area, sending it to a specified CPU among the multi-core CPUs for processing, and raising the priority of the packet receiving threads.
  • the different types of business threads reading the code stream data from the streaming media internal buffer area includes: the packet receiving thread and the business threads, centered on the streaming media internal buffer area, deliver the data packets of the code stream data in producer/consumer mode, where the packet receiving thread is the producer and the business threads are the consumers.
  • Figure 4 is a flow chart of business threads reading code stream data according to an embodiment of the present invention. As shown in Figure 4, the different types of business threads reading the code stream data from the streaming media internal buffer area further includes the following steps:
  • Step S402 Merge the different types of business threads into the same packet receiving thread and CPU for processing;
  • Step S404 According to the number of tasks of the different types of business threads, select the packet receiving thread with the smallest load for processing.
  • the streaming media internal buffer area is a circular resource pool.
  • Figure 5 is a flow chart of a streaming media data processing method according to an embodiment of the present invention. As shown in Figure 5, after the different types of business threads read the code stream data from the streaming media internal buffer area, the method further includes: the different types of business threads store the code stream data into a storage device for persistent data storage; that is, the flow includes the following steps:
  • Step S502 integrate the code stream data and message signaling into the same thread task channel according to different descriptors, and cache them in the system cache area;
  • Step S504 The packet receiving thread obtains the code stream data from the system buffer area and stores it in the streaming media internal buffer area;
  • Step S506 Different types of business threads read the code stream data from the internal buffer area of the streaming media
  • Step S508 The different types of business threads store the code stream data into a storage device for persistent data storage.
  • the method according to the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform; of course, it can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium (such as ROM/RAM, disk, CD), including several instructions to cause a terminal device (which can be a mobile phone, a computer, a server, or a network device, etc.) to execute the methods described in various embodiments of the present invention.
  • the term "module" may refer to a combination of software and/or hardware that implements a predetermined function.
  • although the apparatus described in the following embodiments is preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
  • FIG. 6 is a structural block diagram of a streaming media data processing system according to an embodiment of the present invention.
  • the streaming media data processing system 60 includes: a data processing module 610, configured to integrate code stream data and message signaling into the same thread task channel according to different descriptors and cache them in the system buffer area; a data acquisition module 620, configured to obtain the code stream data from the system buffer area and store it in the streaming media internal buffer area; and a data reading module 630, configured to read the code stream data from the streaming media internal buffer area according to the different types of business threads.
  • FIG. 7 is a structural block diagram of a data processing module according to an embodiment of the present invention.
  • the data processing module 70 divides the data processing module 610 of the streaming media data processing system 60 shown in FIG. 6 into two parts, namely: a first data processing unit 710, configured to integrate the code stream data into the thread task channel according to the socket descriptor; and a second data processing unit 720, configured to integrate the message signaling into the thread task channel according to the named pipe descriptor.
  • the data processing module can be further subdivided into multiple data processing units, which is not limited here.
  • FIG. 8 is a structural block diagram of a data reading module according to an embodiment of the present invention.
  • the data reading module 80 divides the data reading module 630 of the streaming media data processing system 60 shown in FIG. 6 into three parts, namely: a first reading unit 810, configured to, after the code stream data is stored in the streaming media internal buffer area, immediately read the code stream data from the streaming media internal buffer area according to the different types of business threads; a second reading unit 820, configured to, after the code stream data is stored in the streaming media internal buffer area, simultaneously read the code stream data from the streaming media internal buffer area according to the different types of business threads; and a third reading unit 830, configured to, after the code stream data is stored in the streaming media internal buffer area, read the code stream data from the streaming media internal buffer area at any time as required by the different types of business threads.
  • the data reading module can be further subdivided into multiple data reading units, which is not limited here.
  • Figure 9 is a structural block diagram of a streaming media data processing system according to an embodiment of the present invention.
  • in addition to the modules shown in Figure 6, the streaming media data processing system 90 further includes: a data storage module 910, configured to store the code stream data read by the data reading module into a storage device.
  • each of the above modules can be implemented through software or hardware.
  • this can be achieved in the following ways, but is not limited to them: the above modules are all located in the same processor; or the above modules are located in different processors in any combination.
  • Embodiments of the present invention also provide a computer-readable storage medium that stores a computer program, wherein the computer program is configured to execute the steps in any of the above method embodiments when running.
  • the computer-readable storage medium may include, but is not limited to: a USB flash drive, a read-only memory (Read-Only Memory, referred to as ROM), a random access memory (Random Access Memory, referred to as RAM), a removable hard disk, a magnetic disk, an optical disk, and other media that can store computer programs.
  • An embodiment of the present invention also provides an electronic device, including a memory and a processor.
  • a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
  • the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
  • FIG 10 is a flow chart of streaming media data processing according to a scenario embodiment of the present invention.
  • the streaming media data transmission model is taken as an example. First, a thread management mechanism is established across the entire link, and channel convergence technology is proposed to reasonably manage and optimize threads; next, the streaming media packet receiving thread fetches data packets from the operating system buffer area, and vertical-and-horizontal packet fetching technology is proposed to improve performance in the packet receiving stage; then, the packet receiving thread stores the fetched data packets into the streaming media ring buffer area; after that, the streaming media business threads copy data packets from the ring buffer area to the relevant services for data processing, and integrated producer-consumer technology is proposed to improve data forwarding performance; finally, the data is stored persistently.
  • the following is a detailed description based on the specific steps shown in Figure 10:
  • Step S1002 establish a thread management mechanism
  • FIG 11 is a schematic diagram of the thread management mechanism according to the scenario embodiment of the present invention.
  • the data packet code stream and message signaling are integrated into the same receiving thread for processing and unified management, but they are differentiated when entering the channel.
  • the signaling goes through the named pipe file fd, and the code stream goes through the socket fd. This avoids using locks to prevent multi-threaded access, reduces the overhead of information management, and reduces CPU blocking and code redundancy in the thread task channel.
  • Step S1004 the packet receiving thread uses vertical and horizontal packet fetching technology to receive code stream data
  • the packet receiving thread RECEIVE of streaming media is equivalent to the entrance of the code stream data. It needs to be responsible for getting the code stream data packets from the system buffer area, that is, receiving the data.
  • the maximum processing performance of the receiving thread is exploited vertically, and hardware and system resources are fully scheduled horizontally; combining the vertical and horizontal directions helps improve performance in the packet receiving stage.
  • Figure 12 is a schematic diagram of the principle of vertical and horizontal packet retrieval technology used by the packet collection thread according to the scenario embodiment of the present invention.
  • vertically: the data packet structure is passed by pointer to avoid large memory copies; the packet receiving thread RECEIVE adopts an asynchronous IO event triggering mechanism and adjusts the number of packets fetched and the maximum number of events according to the traffic volume, thereby improving the packet fetching capability of a single thread.
  • Step S1006 the code stream data is stored in the internal buffer area of the streaming media
  • after the packet receiving thread takes out the data, the data packets are stored in the streaming media internal ring buffer rBuffer.
  • rBuffer uses a resource pool, and recycling does not require increasing the amount of storage, thus avoiding memory fragmentation.
  • this module is responsible for temporarily caching the data packets taken out of the system buffer area by the packet receiving thread RECEIVE. This is effectively a backup of the data packets, so that different subsequent services can read the data at the same time, such as RECORD storage and PLAY forwarding, allowing storage and playback to proceed simultaneously.
  • Step S1008 the business thread uses the integrated producer-consumer technology to process code stream data
  • the business thread is logically located behind the rbuffer.
  • Various business threads can read the data in the rbuffer at the same time for their respective business processes, such as RECORD thread storage, PLAY thread forwarding, etc.
  • the producer-consumer model is used, with the packet receiving thread as the producer and the business threads as the consumers, and these working threads are unified into the same actual thread to improve the performance of the rBuffer module and the data processing performance of the business threads.
  • FIG 13 is a schematic diagram of the technical principle of business thread integration with producers and consumers according to the scenario embodiment of the present invention.
  • the rBuffer is regarded as the center point, the packet receiving thread RECEIVE is regarded as the producer, and the business threads RECORD and PLAY are regarded as consumers; data packets are passed from the producer RECEIVE thread to the consumer RECORD, PLAY, and other threads through the rBuffer. This technology decouples the different working threads by means of asynchronous calls, reducing the mutual influence between threads and the resulting time delays.
  • each abstract working thread is regarded as a work task and is unified and integrated into the same actual thread and CPU for processing.
  • tasks are assigned to the thread with the smallest load according to the number of tasks on the current threads. This allows hardware resources to be used more fully and reduces the operating system's additional overhead of creating and managing threads.
  • Step S1010 persistent storage of code stream data.
  • This link is the end of code stream data transmission; after receiving, forwarding, and other processing, the data needs to be placed in a storage device for persistence. As shown in Figure 2, after the recording thread RECORD reads a data packet from rBuffer, it first writes the data packet into the IO Buffer; when the file is closed or the buffer is full, the buffered data is flushed into local storage or object storage, thereby reducing the read/write IO pressure.
  • embodiments of the present invention propose a streaming media data processing method and system, which can theoretically process data streams at full bandwidth speed under 10 Gbit/s bandwidth.
  • the embodiments of the present invention greatly improve the receiving and forwarding performance of the streaming media system, and can greatly save resources and costs for audio and video related industries.
  • to address this, the embodiment of the present invention first establishes a thread management mechanism across the entire link, integrating the code stream and signaling into the same receiving thread for processing and unified management while distinguishing them at the entry channel; this avoids using locks to prevent multi-threaded access, which helps optimize thread management and reduce system blocking.
  • channel convergence technology is proposed to reasonably manage and optimize threads.
  • then, in the vertical direction, the packet fetching performance of a single packet receiving thread is improved; in the horizontal direction, kernel resources are used on a larger scale, which helps speed up fetching code stream data packets from the system buffer area and storing them in the streaming media internal buffer area.
  • the vertical and horizontal packet fetching technology is proposed to improve the performance of the data receiving stage.
  • finally, different working threads are decoupled by means of asynchronous calls to reduce the mutual influence and time delays between threads, and the abstract threads are merged into the same actual thread and core for processing, reducing the additional thread overhead and making full use of system resources, which helps speed up data processing in the internal cache module.
  • that is, integrated producer-consumer technology is proposed to improve performance in the data processing stage.
  • Embodiments of the present invention are suitable for audio- and video-related industries based on streaming data processing, such as video conferencing, video IoT, and video middle platforms. Specifically, they can be used in high-concurrency scenarios with a large number of access channels and heavy traffic, in high-performance data forwarding, and in services with zero-packet-loss requirements.
  • the data flow link process and analysis ideas, staged performance optimization technology, solution design, etc. of the embodiment of the present invention have obvious characteristics. Achieving similar or identical purposes to the embodiments of the present invention through means such as packet capture and business call chain tracking tools shall be within the scope of the present invention.
  • modules or steps of the present invention can be implemented using general-purpose computing devices. They can be concentrated on a single computing device, or distributed across a network composed of multiple computing devices. They may be implemented in program code executable by a computing device, such that they may be stored in a storage device for execution by the computing device, and in some cases may be executed in a sequence different from that shown herein. Or the described steps can be implemented by making them into individual integrated circuit modules respectively, or by making multiple modules or steps among them into a single integrated circuit module. As such, the invention is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present invention provide a streaming media data processing method and system. By integrating code stream data and message signaling into the same thread task channel according to different descriptors and caching them in the system buffer area, the performance of streaming media data reception is improved; by having the packet receiving thread obtain the code stream data from the system buffer area and store it in the streaming media internal buffer area, and having different types of business threads read the code stream data from the streaming media internal buffer area, the processing and forwarding performance of streaming media data is improved, achieving the effect of increasing the receiving and forwarding speed of streaming media data.

Description

Streaming media data processing method and system
Technical Field
Embodiments of the present invention relate to the field of streaming media data processing, and in particular to a streaming media data processing method and system.
Background
At present, fifth-generation mobile communication technology (5G) is gradually entering the commercial era. This has brought profound impact and change to the global audio and video industry, and has made streaming media systems serving products and services such as video conferencing, video IoT, and video middle platforms an important mechanism for information dissemination. The quantitative change in speed brought by 5G will drive qualitative changes across the entire industry and will also help accelerate the development of streaming media systems.
In general, streaming media systems in the industry take live broadcast, recording, and playback as their main functions, and their receiving and forwarding performance has become an important indicator for measuring the quality of a streaming media system. At present, the receiving and forwarding speed in the industry can reach 2 Gbps, but raising it further is very difficult. Under high-concurrency, heavy-traffic conditions, processing capacity is greatly limited: if data packets are not received fast enough, packets are lost at the system level; if data packets are not forwarded and processed fast enough, packets are lost at the application level. The receiving and forwarding performance of a streaming media system has therefore become a major bottleneck restricting its development.
Summary
Embodiments of the present invention provide a streaming media data processing method and system to at least solve the problem in the related art of packet loss on the receiving and forwarding links of streaming media data under heavy traffic.
According to one embodiment of the present invention, a streaming media data processing method is provided, including: integrating code stream data and message signaling into the same thread task channel according to different descriptors, and caching them in a system buffer area; a packet receiving thread obtaining the code stream data from the system buffer area and storing it in a streaming media internal buffer area; and different types of business threads reading the code stream data from the streaming media internal buffer area.
According to another embodiment of the present invention, a streaming media data processing system is provided, including: a data processing module, configured to integrate code stream data and message signaling into the same thread task channel according to different descriptors and cache them in a system buffer area; a data acquisition module, configured to obtain the code stream data from the system buffer area and store it in a streaming media internal buffer area; and a data reading module, configured to read the code stream data from the streaming media internal buffer area according to the different types of business threads.
According to yet another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, wherein the computer program is configured to execute the steps in any of the above method embodiments when run.
According to yet another embodiment of the present invention, an electronic device is further provided, including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any of the above method embodiments.
Brief Description of the Drawings
Figure 1 is a block diagram of the hardware structure of a mobile terminal for a streaming media data processing method according to an embodiment of the present invention;
Figure 2 is a diagram of the network architecture on which a streaming media data processing method runs according to an embodiment of the present invention;
Figure 3 is a flow chart of a streaming media data processing method according to an embodiment of the present invention;
Figure 4 is a flow chart of business threads reading code stream data according to an embodiment of the present invention;
Figure 5 is a flow chart of a streaming media data processing method according to an embodiment of the present invention;
Figure 6 is a structural block diagram of a streaming media data processing system according to an embodiment of the present invention;
Figure 7 is a structural block diagram of a data processing module according to an embodiment of the present invention;
Figure 8 is a structural block diagram of a data reading module according to an embodiment of the present invention;
Figure 9 is a structural block diagram of a streaming media data processing system according to an embodiment of the present invention;
Figure 10 is a flow chart of streaming media data processing according to a scenario embodiment of the present invention;
Figure 11 is a schematic diagram of a thread management mechanism according to a scenario embodiment of the present invention;
Figure 12 is a schematic diagram of the principle of the vertical-and-horizontal packet fetching technology used by the packet receiving thread according to a scenario embodiment of the present invention;
Figure 13 is a schematic diagram of the principle of the business-thread integrated producer-consumer technology according to a scenario embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings and in combination with the embodiments.
It should be noted that the terms "first", "second", and the like in the specification, claims, and accompanying drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence.
The method embodiments provided in the embodiments of this application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking running on a mobile terminal as an example, Figure 1 is a block diagram of the hardware structure of a mobile terminal for a streaming media data processing method according to an embodiment of the present invention. As shown in Figure 1, the mobile terminal may include one or more processors 102 (only one is shown in Figure 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and the mobile terminal may further include a transmission device 106 for communication functions and an input/output device 108. A person of ordinary skill in the art can understand that the structure shown in Figure 1 is only illustrative and does not limit the structure of the above mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in Figure 1, or have a configuration different from that shown in Figure 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the streaming media data processing method in the embodiment of the present invention. By running the computer program stored in the memory 104, the processor 102 executes various functional applications and data processing, that is, implements the above method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely relative to the processor 102, and these remote memories may be connected to the mobile terminal through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The transmission device 106 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 106 may be a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet wirelessly.
The embodiments of this application can run on the network architecture shown in Figure 2. As shown in Figure 2, the network architecture runs on a server, which may be a physical machine, a virtual machine, or a Docker containerized environment. The network architecture is divided into two parts, kernel space and user space, where kernel space refers to the environment in which the operating system and drivers run. After entering the server, code stream data first enters kernel space. User space refers to the environment in which the streaming media software runs; the streaming media software moves the code stream data from kernel space into user space. Between kernel space and user space sits the Socket Buffer system buffer area, which temporarily caches code stream data as it passes from kernel space to user space. User space contains the functional modules of the streaming media software: the packet receiving thread RECEIVE, the streaming media internal buffer rBuffer, the recording thread RECORD, the storage IO buffer IO Buffer, and the live broadcast thread PLAY.
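The Socket Buffer mentioned above corresponds to the kernel-side receive buffer of an ordinary socket. As a minimal sketch of this part of the architecture (assuming a Linux/UDP setup; the port and buffer size are illustrative values, not taken from the patent), the receive buffer can be enlarged so that bursts of code stream packets are less likely to be dropped before the RECEIVE thread drains them:

```c
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Open a datagram socket whose kernel receive buffer (the "system buffer
 * area" between kernel space and user space) is enlarged to rcvbuf_bytes. */
int open_stream_socket(unsigned short port, int rcvbuf_bytes)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    /* Ask the kernel for a larger Socket Buffer before traffic arrives. */
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf_bytes, sizeof(rcvbuf_bytes));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    return bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0 ? fd : -1;
}
```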
This embodiment provides a streaming media data processing method running on the above mobile terminal or network architecture. Figure 3 is a flow chart of the streaming media data processing method according to an embodiment of the present invention. As shown in Figure 3, the flow includes the following steps:
Step S302: integrate the code stream data and message signaling into the same thread task channel according to different descriptors, and cache them in the system buffer area;
Step S304: the packet receiving thread obtains the code stream data from the system buffer area and stores it in the streaming media internal buffer area;
Step S306: different types of business threads read the code stream data from the streaming media internal buffer area.
Through the above steps, by integrating the code stream data and message signaling into the same thread task channel according to different descriptors and caching them in the system buffer area, the performance of streaming media data reception is improved; by having the packet receiving thread obtain the code stream data from the system buffer area and store it in the streaming media internal buffer area, and having different types of business threads read the code stream data from the streaming media internal buffer area, the processing and forwarding performance of streaming media data is improved, achieving the effect of increasing the receiving and forwarding speed of streaming media data.
The execution subject of the above steps may be a base station, a terminal, or the like, but is not limited thereto.
In an exemplary embodiment, the code stream data and the message signaling are distinguished in the same thread task channel by a socket descriptor and a named pipe descriptor, respectively.
In an exemplary embodiment, the packet receiving thread obtaining the code stream data from the system buffer area includes: the data packet structure of the code stream data is passed by pointer, and the packet receiving thread uses an asynchronous IO event triggering mechanism to obtain the code stream data from the system buffer area, adjusting the number of packets fetched and the maximum number of events according to the traffic volume of the code stream data.
In an exemplary embodiment, the packet receiving thread obtaining the code stream data from the system buffer area further includes: using multiple packet receiving threads to obtain the code stream data from the system buffer area, sending it to a specified CPU among the multi-core CPUs for processing, and raising the priority of the packet receiving threads.
In an exemplary embodiment, the different types of business threads reading the code stream data from the streaming media internal buffer area includes: the packet receiving thread and the business threads, centered on the streaming media internal buffer area, deliver the data packets of the code stream data in producer/consumer mode, where the packet receiving thread is the producer and the business threads are the consumers.
In an exemplary embodiment, Figure 4 is a flow chart of business threads reading code stream data according to an embodiment of the present invention. As shown in Figure 4, the different types of business threads reading the code stream data from the streaming media internal buffer area further includes the following steps:
Step S402: merge the different types of business threads into the same packet receiving thread and CPU for processing;
Step S404: according to the number of tasks of the different types of business threads, select the packet receiving thread with the smallest load for processing.
In an exemplary embodiment, the streaming media internal buffer area is a circular resource pool.
In an exemplary embodiment, Figure 5 is a flow chart of a streaming media data processing method according to an embodiment of the present invention. As shown in Figure 5, after the different types of business threads read the code stream data from the streaming media internal buffer area, the method further includes: the different types of business threads store the code stream data into a storage device for persistent data storage; that is, the flow includes the following steps:
Step S502: integrate the code stream data and message signaling into the same thread task channel according to different descriptors, and cache them in the system buffer area;
Step S504: the packet receiving thread obtains the code stream data from the system buffer area and stores it in the streaming media internal buffer area;
Step S506: different types of business threads read the code stream data from the streaming media internal buffer area;
Step S508: the different types of business threads store the code stream data into a storage device for persistent data storage.
Through the description of the above implementations, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present invention.
This embodiment also provides a streaming media data processing system, which is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Figure 6 is a structural block diagram of a streaming media data processing system according to an embodiment of the present invention. As shown in Figure 6, the streaming media data processing system 60 includes: a data processing module 610, configured to integrate code stream data and message signaling into the same thread task channel according to different descriptors and cache them in the system buffer area; a data acquisition module 620, configured to obtain the code stream data from the system buffer area and store it in the streaming media internal buffer area; and a data reading module 630, configured to read the code stream data from the streaming media internal buffer area according to the different types of business threads.
In an exemplary embodiment, Figure 7 is a structural block diagram of a data processing module according to an embodiment of the present invention. As shown in Figure 7, the data processing module 70 divides the data processing module 610 of the streaming media data processing system 60 shown in Figure 6 into two parts, namely: a first data processing unit 710, configured to integrate the code stream data into the thread task channel according to the socket descriptor; and a second data processing unit 720, configured to integrate the message signaling into the thread task channel according to the named pipe descriptor.
A person of ordinary skill in the art should know that, depending on the data content, the data processing module may be further subdivided into multiple data processing units in actual implementation, which is not limited here.
In an exemplary embodiment, Figure 8 is a structural block diagram of a data reading module according to an embodiment of the present invention. As shown in Figure 8, the data reading module 80 divides the data reading module 630 of the streaming media data processing system 60 shown in Figure 6 into three parts, namely: a first reading unit 810, configured to, after the code stream data is stored in the streaming media internal buffer area, immediately read the code stream data from the streaming media internal buffer area according to the different types of business threads; a second reading unit 820, configured to, after the code stream data is stored in the streaming media internal buffer area, simultaneously read the code stream data from the streaming media internal buffer area according to the different types of business threads; and a third reading unit 830, configured to, after the code stream data is stored in the streaming media internal buffer area, read the code stream data from the streaming media internal buffer area at any time as required by the different types of business threads.
A person of ordinary skill in the art should know that, depending on the data content, the data reading module may be further subdivided into multiple data reading units in actual implementation, which is not limited here.
In an exemplary embodiment, Figure 9 is a structural block diagram of a streaming media data processing system according to an embodiment of the present invention. As shown in Figure 9, in addition to the modules shown in Figure 6, the streaming media data processing system 90 further includes: a data storage module 910, configured to store the code stream data read by the data reading module into a storage device.
A person of ordinary skill in the art should know that the modules and units involved in the above embodiments can be combined together, or partially combined or integrated into one or more devices or systems as needed, as long as the corresponding functions can be realized.
It should be noted that each of the above modules can be implemented through software or hardware. For the latter, this can be achieved in the following ways, but is not limited to them: the above modules are all located in the same processor; or the above modules are located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any of the above method embodiments when run.
In an exemplary embodiment, the above computer-readable storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, and other media that can store computer programs.
Embodiments of the present invention also provide an electronic device, including a memory and a processor, where a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any of the above method embodiments.
In an exemplary embodiment, the above electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the above processor, and the input/output device is connected to the above processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary implementations, which will not be repeated here.
In order to enable those skilled in the art to better understand the technical solution of the present invention, the technical solution of the present invention will be described below in combination with a specific scenario embodiment.
Scenario Embodiment 1
Figure 10 is a flow chart of streaming media data processing according to a scenario embodiment of the present invention. As shown in Figure 10, the streaming media data transmission model is taken as an example. First, a thread management mechanism is established across the entire link, and channel convergence technology is proposed to reasonably manage and optimize threads. Next, the streaming media packet receiving thread fetches data packets from the operating system buffer area, and vertical-and-horizontal packet fetching technology is proposed to improve performance in the packet receiving stage. Then, the packet receiving thread stores the fetched data packets into the streaming media ring buffer area. After that, the streaming media business threads copy data packets from the ring buffer area to the relevant services for data processing, and integrated producer-consumer technology is proposed to improve data forwarding performance. Finally, the data is stored persistently. A detailed description is given below in combination with the specific steps shown in Figure 10:
Step S1002: establish a thread management mechanism.
A thread management mechanism is established across the entire streaming media data transmission link to reasonably manage and optimize threads, reduce system blocking, and improve transmission performance.
Figure 11 is a schematic diagram of the thread management mechanism according to the scenario embodiment of the present invention. As shown in Figure 11, the data packet code stream and the message signaling are integrated into the same receiving thread for processing and unified management, but are distinguished when entering the thread task channel: signaling goes through the named pipe file fd, and the code stream goes through the socket fd. This avoids using locks to prevent multi-threaded access, reduces the overhead of information management, and reduces CPU blocking and code redundancy.
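As a minimal sketch of such a converged channel (an assumed Linux implementation based on epoll; the FIFO path and function name are illustrative, not taken from the patent), a single receive thread can wait on the signaling named pipe fd and the code stream socket fd through one interface:

```c
#include <fcntl.h>
#include <sys/epoll.h>
#include <sys/stat.h>

/* Build one "thread task channel": both descriptors are registered in a
 * single epoll instance, so one receive thread serves code stream and
 * signaling without inter-thread locks. */
int build_task_channel(int stream_sock_fd, const char *fifo_path)
{
    int ep = epoll_create1(0);
    if (ep < 0)
        return -1;

    /* Signaling enters through the named pipe descriptor. */
    mkfifo(fifo_path, 0600);
    int fifo_fd = open(fifo_path, O_RDONLY | O_NONBLOCK);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fifo_fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, fifo_fd, &ev);

    /* Code stream packets enter through the socket descriptor. */
    ev.events = EPOLLIN;
    ev.data.fd = stream_sock_fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, stream_sock_fd, &ev);

    return ep;   /* the RECEIVE thread waits on this single channel */
}
```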
Step S1004: the packet receiving thread uses vertical-and-horizontal packet fetching technology to receive code stream data.
Before entering the streaming media module, media data first arrives at the operating system buffer area. The streaming media packet receiving thread RECEIVE is effectively the entrance for the code stream data and is responsible for fetching code stream data packets from the system buffer area, that is, receiving the data.
To improve the performance of this link, the maximum processing performance of the receiving thread is exploited in the vertical direction, while hardware and system resources are fully scheduled in the horizontal direction; combining the two helps improve performance in the packet receiving stage.
Figure 12 is a schematic diagram of the principle of the vertical-and-horizontal packet fetching technology used by the packet receiving thread according to the scenario embodiment of the present invention. As shown in Figure 12, in the vertical direction: the data packet structure is passed by pointer to avoid large memory copies; the packet receiving thread RECEIVE adopts an asynchronous IO event triggering mechanism and adjusts the number of packets fetched and the maximum number of events according to the traffic volume, thereby improving the packet fetching capability of a single thread.
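A minimal sketch of this vertical side (assuming Linux epoll plus recvmmsg; the batch bounds and the doubling/halving rule are illustrative assumptions, and the caller is assumed to have pre-filled msgs with BATCH_MAX receive buffers) might look like this:

```c
#define _GNU_SOURCE
#include <sys/epoll.h>
#include <sys/socket.h>

#define BATCH_MIN 8
#define BATCH_MAX 64

/* Grow the batch when the last call filled it completely, shrink it when
 * traffic is light, mirroring "adjust the packet count by traffic volume". */
static int adapt_batch(int batch, int filled)
{
    if (filled && batch < BATCH_MAX)
        return batch * 2;
    if (!filled && batch > BATCH_MIN)
        return batch / 2;
    return batch;
}

void receive_loop(int ep, int sock_fd, struct mmsghdr *msgs)
{
    struct epoll_event events[BATCH_MAX];
    int batch = BATCH_MIN;

    for (;;) {
        /* Asynchronous IO event trigger: sleep until data is ready. */
        int n = epoll_wait(ep, events, batch, -1);
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd != sock_fd)
                continue;
            /* Pull up to 'batch' datagrams with one system call. */
            int got = recvmmsg(sock_fd, msgs, batch, 0, NULL);
            if (got > 0)
                batch = adapt_batch(batch, got == batch);
            /* Each msgs[k].msg_hdr.msg_iov already points at the payload;
             * downstream stages are handed these pointers, not copies. */
        }
    }
}
```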
In the horizontal direction: the number of receiving threads is increased, and a multi-thread plus multi-core CPU affinity strategy is adopted so that each packet receiving thread RECEIVE is dispatched to a designated CPU core for processing; the priority of the packet receiving threads is also raised. This makes greater use of system kernel resources and gives the receive path fast, prioritized processing.
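A minimal sketch of the horizontal side (assuming Linux pthreads; the CPU core index and real-time priority value are illustrative) is shown below:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin one RECEIVE thread to a chosen core and raise its scheduling priority
 * so incoming packets are drained from the system buffer area quickly. */
int pin_and_prioritize(pthread_t tid, int cpu_core, int rt_priority)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu_core, &set);
    if (pthread_setaffinity_np(tid, sizeof(set), &set) != 0)
        return -1;

    struct sched_param sp = { .sched_priority = rt_priority };
    return pthread_setschedparam(tid, SCHED_FIFO, &sp);
}
```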
Step S1006: the code stream data is stored in the streaming media internal buffer area.
After the packet receiving thread takes out the data, it stores the data packets into the streaming media internal ring buffer rBuffer. The rBuffer uses a resource pool: slots are recycled, so no additional storage needs to be allocated and memory fragmentation is avoided. This module is responsible for temporarily caching the data packets that the packet receiving thread RECEIVE has taken out of the system buffer area. This is effectively a backup of the data packets, so that different subsequent services can read the data at the same time, such as RECORD storage and PLAY forwarding, allowing storage and playback to proceed simultaneously.
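A minimal sketch of such an rBuffer-style resource pool (single producer, single consumer; slot count and packet size are illustrative values, and the real design lets RECORD and PLAY each keep their own read position and needs atomic index updates, both omitted here for brevity):

```c
#include <stddef.h>
#include <stdint.h>

#define RB_SLOTS 4096
#define PKT_MAX  1500

struct rb_slot {
    uint32_t len;
    uint8_t  data[PKT_MAX];
};

/* Fixed pool of reusable slots: recycling never allocates, so the ring
 * neither grows nor fragments memory. */
struct rbuffer {
    struct rb_slot slots[RB_SLOTS];
    size_t head;                     /* next slot the producer writes */
    size_t tail;                     /* next slot the consumer reads  */
};

/* Producer side: returns NULL (caller drops or retries) when the ring is full. */
struct rb_slot *rb_claim(struct rbuffer *rb)
{
    size_t next = (rb->head + 1) % RB_SLOTS;
    if (next == rb->tail)
        return NULL;
    struct rb_slot *s = &rb->slots[rb->head];
    rb->head = next;
    return s;
}

/* Consumer side: returns NULL when the reader has caught up with the writer. */
const struct rb_slot *rb_take(struct rbuffer *rb)
{
    if (rb->tail == rb->head)
        return NULL;
    const struct rb_slot *s = &rb->slots[rb->tail];
    rb->tail = (rb->tail + 1) % RB_SLOTS;
    return s;
}
```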
Step S1008: the business threads use the integrated producer-consumer technology to process the code stream data.
The business threads are logically located after the rBuffer. Various business threads can read the data in the rBuffer at the same time for their respective business processes, such as storage by the RECORD thread and forwarding by the PLAY thread.
To improve the performance of this link, the producer-consumer model is used: the packet receiving thread acts as the producer, the business threads act as the consumers, and these working threads are unified into the same actual thread, which improves the performance of the rBuffer module and the data processing performance of the business threads.
Figure 13 is a schematic diagram of the principle of the business-thread integrated producer-consumer technology according to the scenario embodiment of the present invention. As shown in Figure 13, the rBuffer is regarded as the center point, the packet receiving thread RECEIVE is regarded as the producer, and the business threads RECORD, PLAY, and so on are regarded as consumers; data packets are passed from the producer RECEIVE thread to the consumer RECORD, PLAY, and other threads through the rBuffer. This technology decouples the different working threads by means of asynchronous calls, reducing the mutual influence between threads and the resulting time delays.
On this basis, each abstract working thread is regarded as a work task and is uniformly merged into the same actual thread and CPU for processing; when tasks are assigned, each task is assigned to the thread with the smallest load according to the number of tasks on the current threads, so that hardware resources are used more fully and the operating system's additional overhead of creating and managing threads is reduced.
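A minimal sketch of that least-load assignment rule (the worker structure and counter are illustrative assumptions):

```c
#include <stddef.h>

struct worker {
    int task_count;            /* tasks currently queued on this actual thread */
};

/* Return the index of the worker thread that should receive the next task:
 * the one carrying the smallest current load. */
size_t pick_least_loaded(const struct worker *workers, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (workers[i].task_count < workers[best].task_count)
            best = i;
    return best;
}
```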
Step S1010: persistent storage of the code stream data.
This link is the end of code stream data transmission; after receiving, forwarding, and other processing, the data needs to be placed in a storage device for persistence. As shown in Figure 2, after the recording thread RECORD reads a data packet from the rBuffer, it first writes the data packet into the IO Buffer; when the file is closed or the buffer is full, the buffered data is flushed into local storage or object storage, thereby reducing the read/write IO pressure.
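A minimal sketch of such an IO Buffer (the buffer size is illustrative and error handling is omitted): packets are appended in memory and written out only when the buffer fills or the file is closed, reducing the number of write system calls:

```c
#include <stdio.h>
#include <string.h>

#define IO_BUF_SIZE (4 * 1024 * 1024)

struct io_buffer {
    FILE  *file;                /* destination file in local/object storage */
    size_t used;
    char   buf[IO_BUF_SIZE];
};

/* Append one packet; flush the accumulated buffer first if it would overflow. */
void io_buffer_append(struct io_buffer *b, const void *pkt, size_t len)
{
    if (b->used + len > IO_BUF_SIZE) {
        fwrite(b->buf, 1, b->used, b->file);
        b->used = 0;
    }
    memcpy(b->buf + b->used, pkt, len);
    b->used += len;
}

/* Closing the file flushes whatever is still buffered. */
void io_buffer_close(struct io_buffer *b)
{
    fwrite(b->buf, 1, b->used, b->file);
    b->used = 0;
    fclose(b->file);
}
```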
In summary, the embodiments of the present invention propose a streaming media data processing method and system that, in theory, can process data code streams at full bandwidth speed under a 10-gigabit (10 Gbps) bandwidth. The embodiments of the present invention greatly improve the receiving and forwarding performance of the streaming media system and can greatly save resources and costs for audio- and video-related industries. To address the problem of packet loss on the receiving and forwarding links of streaming media data under heavy traffic, the embodiments of the present invention first establish a thread management mechanism across the entire link, integrating the code stream and signaling into the same receiving thread for processing and unified management while distinguishing them at the entry channel; this avoids using locks to prevent multi-threaded access, which helps optimize thread management and reduce system blocking. That is, channel convergence technology is proposed to reasonably manage and optimize threads. Then, in the vertical direction, the packet fetching performance of a single packet receiving thread is improved; in the horizontal direction, kernel resources are used on a larger scale, which helps speed up fetching code stream data packets from the system buffer area and storing them in the streaming media internal buffer area. That is, vertical-and-horizontal packet fetching technology is proposed to improve performance in the data receiving stage. Finally, different working threads are decoupled by means of asynchronous calls to reduce the mutual influence and time delays between threads, and the abstract threads are merged into the same actual thread and core for processing, reducing the additional thread overhead and making full use of system resources, which helps speed up data processing in the internal buffer module. That is, integrated producer-consumer technology is proposed to improve performance in the data processing stage.
The embodiments of the present invention are applicable to audio- and video-related industries based on streaming data processing, such as video conferencing, video IoT, and video middle platforms. Specifically, they can be used in high-concurrency scenarios with a large number of access channels and heavy traffic, in high-performance data forwarding, and in services with zero-packet-loss requirements. The data flow link process and analysis approach, the staged performance optimization techniques, and the solution design of the embodiments of the present invention have distinct characteristics; achieving purposes similar or identical to those of the embodiments of the present invention by means such as packet capture or business call chain tracing tools shall fall within the protection scope of the present invention.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented with general-purpose computing devices. They can be concentrated on a single computing device or distributed over a network composed of multiple computing devices; they can be implemented with program code executable by computing devices, so that they can be stored in a storage device and executed by the computing devices; and in some cases, the steps shown or described can be executed in an order different from that here, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the principles of the present invention shall be included in the protection scope of the present invention.

Claims (14)

  1. A streaming media data processing method, comprising:
    integrating code stream data and message signaling into the same thread task channel according to different descriptors, and caching them in a system buffer area;
    a packet receiving thread obtaining the code stream data from the system buffer area and storing it in a streaming media internal buffer area;
    different types of business threads reading the code stream data from the streaming media internal buffer area.
  2. The method according to claim 1, wherein the code stream data and the message signaling are distinguished in the same thread task channel by a socket descriptor and a named pipe descriptor, respectively.
  3. The method according to claim 1, wherein the packet receiving thread obtaining the code stream data from the system buffer area comprises:
    the data packet structure of the code stream data is passed by pointer, and the packet receiving thread uses an asynchronous IO event triggering mechanism to obtain the code stream data from the system buffer area and adjusts the number of packets fetched and the maximum number of events according to the traffic volume of the code stream data.
  4. The method according to claim 1, wherein the packet receiving thread obtaining the code stream data from the system buffer area further comprises:
    using multiple packet receiving threads to obtain the code stream data from the system buffer area, sending it to a specified CPU among the multi-core CPUs for processing, and raising the priority of the packet receiving threads.
  5. The method according to claim 4, wherein the different types of business threads reading the code stream data from the streaming media internal buffer area comprises:
    the packet receiving thread and the business threads, centered on the streaming media internal buffer area, delivering the data packets of the code stream data in producer/consumer mode, wherein the packet receiving thread is the producer and the business threads are the consumers.
  6. The method according to claim 5, wherein the different types of business threads reading the code stream data from the streaming media internal buffer area further comprises:
    merging the different types of business threads into the same packet receiving thread and CPU for processing;
    selecting the packet receiving thread with the smallest load for processing according to the number of tasks of the different types of business threads.
  7. The method according to claim 6, wherein the streaming media internal buffer area is a circular resource pool.
  8. The method according to claim 1, wherein after the different types of business threads read the code stream data from the streaming media internal buffer area, the method further comprises:
    the different types of business threads storing the code stream data into a storage device for persistent data storage.
  9. A streaming media data processing system, comprising:
    a data processing module, configured to integrate code stream data and message signaling into the same thread task channel according to different descriptors and cache them in a system buffer area;
    a data acquisition module, configured to obtain the code stream data from the system buffer area and store it in a streaming media internal buffer area;
    a data reading module, configured to read the code stream data from the streaming media internal buffer area according to the different types of business threads.
  10. The system according to claim 9, wherein the data processing module comprises at least one of the following:
    a first data processing unit, configured to integrate the code stream data into the thread task channel according to a socket descriptor;
    a second data processing unit, configured to integrate the message signaling into the thread task channel according to a named pipe descriptor.
  11. The system according to claim 9, wherein the data reading module comprises at least one of the following:
    a first reading unit, configured to, after the code stream data is stored in the streaming media internal buffer area, immediately read the code stream data from the streaming media internal buffer area according to the different types of business threads;
    a second reading unit, configured to, after the code stream data is stored in the streaming media internal buffer area, simultaneously read the code stream data from the streaming media internal buffer area according to the different types of business threads;
    a third reading unit, configured to, after the code stream data is stored in the streaming media internal buffer area, read the code stream data from the streaming media internal buffer area at any time as required by the different types of business threads.
  12. The system according to claim 9, further comprising:
    a data storage module, configured to store the code stream data read by the data reading module into a storage device.
  13. A computer-readable storage medium in which a computer program is stored, wherein the method according to any one of claims 1 to 8 is implemented when the computer program is executed by a processor.
  14. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the method according to any one of claims 1 to 8 is implemented when the processor executes the computer program.
PCT/CN2023/093058 2022-06-01 2023-05-09 Streaming media data processing method and system WO2023231723A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210618490.6 2022-06-01
CN202210618490.6A CN116668415A (zh) 2022-06-01 2022-06-01 流媒体数据处理方法及系统

Publications (1)

Publication Number Publication Date
WO2023231723A1 true WO2023231723A1 (zh) 2023-12-07

Family

ID=87726613

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/093058 WO2023231723A1 (zh) 2022-06-01 2023-05-09 流媒体数据处理方法及系统

Country Status (2)

Country Link
CN (1) CN116668415A (zh)
WO (1) WO2023231723A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116962512B * 2023-09-20 2024-01-05 北京信安世纪科技股份有限公司 Message processing method, device, storage medium and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016197659A1 (zh) * 2015-06-12 2016-12-15 中兴通讯股份有限公司 Network media stream packet receiving method, apparatus and system
CN112995753A (zh) * 2019-12-16 2021-06-18 中兴通讯股份有限公司 Media stream distribution method, CDN node server, CDN system, and readable storage medium
CN113553346A (zh) * 2021-07-22 2021-10-26 中国电子科技集团公司第十五研究所 Integrated processing, forwarding and storage method and system for large-scale real-time data streams
US20220141273A1 (en) * 2020-10-29 2022-05-05 Boe Technology Group Co., Ltd. Streaming media data processing method, processing system and storage server

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016197659A1 (zh) * 2015-06-12 2016-12-15 中兴通讯股份有限公司 Network media stream packet receiving method, apparatus and system
CN106302372A (zh) * 2015-06-12 2017-01-04 中兴通讯股份有限公司 Network media stream packet receiving method, apparatus and system
CN112995753A (zh) * 2019-12-16 2021-06-18 中兴通讯股份有限公司 Media stream distribution method, CDN node server, CDN system, and readable storage medium
US20220141273A1 (en) * 2020-10-29 2022-05-05 Boe Technology Group Co., Ltd. Streaming media data processing method, processing system and storage server
CN113553346A (zh) * 2021-07-22 2021-10-26 中国电子科技集团公司第十五研究所 Integrated processing, forwarding and storage method and system for large-scale real-time data streams

Also Published As

Publication number Publication date
CN116668415A (zh) 2023-08-29

Similar Documents

Publication Publication Date Title
CN108768826B (zh) 基于MQTT和Kafka高并发场景下的消息路由方法
CN106021315B (zh) 一种应用程序的日志管理方法及系统
WO2021254330A1 (zh) 内存管理方法、系统、客户端、服务器及存储介质
WO2023231723A1 (zh) 流媒体数据处理方法及系统
US20230042747A1 (en) Message Processing Method and Device, Storage Medium, and Electronic Device
CN113179327B (zh) 基于大容量内存的高并发协议栈卸载方法、设备、介质
CN113285931B (zh) 流媒体的传输方法、流媒体服务器及流媒体系统
WO2021238259A1 (zh) 一种数据传输方法、装置、设备及计算机可读存储介质
WO2023231897A1 (zh) 数据传输方法、发送服务器、接收服务器及存储介质
CN113259408A (zh) 数据传输方法和系统
CN114095901A (zh) 通信数据处理方法及装置
CN112543374A (zh) 一种转码控制方法、装置及电子设备
CN115941907A (zh) 一种rtp数据包发送方法、系统、电子设备及存储介质
CN113992609B (zh) 一种处理多链路业务数据乱序的方法及系统
CN114422617B (zh) 一种报文处理方法、系统及计算机可读存储介质
CN114186163A (zh) 一种应用层网络数据缓存方法
CN113242446A (zh) 视频帧的缓存方法、转发方法、通信服务器及程序产品
CN112751893A (zh) 一种消息轨迹数据的处理方法、装置及电子设备
CN108200481B (zh) 一种rtp-ps流处理方法、装置、设备及存储介质
CN101635669B (zh) 一种用于数据共享系统中获取数据片段的方法
WO2024082882A1 (zh) 多媒体内容的传输方法、装置、设备及存储介质
CN110661731A (zh) 一种报文处理方法及其装置
CN115580667B (zh) 数据传输方法、装置、设备及存储介质
CN113709044B (zh) 数据转发方法、装置、电子设备和存储介质
CN112653691B (zh) 一种数据处理方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23814909

Country of ref document: EP

Kind code of ref document: A1