CN115174501A - Service system and service method for intra-network aggregation transmission - Google Patents

Service system and service method for intra-network aggregation transmission

Info

Publication number
CN115174501A
CN115174501A
Authority
CN
China
Prior art keywords
server
task
sending
receiving
aggregation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210561540.1A
Other languages
Chinese (zh)
Inventor
吴文斐 (Wenfei Wu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unnamed Smart Computing Beijing Technology Co ltd
Original Assignee
Unnamed Smart Computing Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unnamed Smart Computing Beijing Technology Co ltd filed Critical Unnamed Smart Computing Beijing Technology Co ltd
Priority to CN202210561540.1A
Publication of CN115174501A
Legal status: Pending

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 — Packet switching elements
    • H04L 49/90 — Buffering arrangements

Abstract

A service system for intra-network aggregation transmission comprises a receiving server and a sending server. The receiving server generates a plurality of aggregation tasks, each of which contains: a globally unique ID number, the number of sending servers, and, for each sending server, its ID number, its address, and information for acquiring key-value pair data. The receiving server prepares a receiving space and notifies the sending servers of it; the receiving server then creates sending tasks and notifies each sending server of its sending task. Based on the sending task and the storage-space information, a sending server creates packets containing the key-value pair data to be aggregated. After receiving a packet from a sending server, the receiving server parses it and places the data into the storage space. The invention correspondingly provides a service method. Both are suited to multi-task scenarios with multiple senders and receivers per aggregation task, which greatly broadens their applicability.

Description

Service system and service method for intra-network aggregation transmission
Technical Field
The present invention relates to the field of computer network transmission and the field of distributed systems, and more particularly, to a service system and a service method for intra-network aggregate transmission.
Background
Different aggregation transport protocols differ in the types of service they can provide. For distributed machine-learning training, academia has proposed two solutions based on intra-network aggregation, SwitchML and ATP, each of which designs a dedicated aggregation transport protocol for distributed training. SwitchML determines the aggregator corresponding to each group of gradients by static allocation and offers upper-layer users a single-tenant, single-job aggregation service; extending it to multiple tenants and multiple jobs requires partitioning the aggregator resources so that each job of each tenant gets its own aggregator resource pool. ATP offers a multi-tenant, multi-job aggregation service to upper-layer users in a discretized, dynamic, best-effort manner, so that the available aggregator resources can be shared efficiently and fairly when several distributed training jobs run concurrently. The general intra-network aggregation transport protocol proposed here can likewise provide a multi-tenant, multi-job aggregation service to upper-layer users, but it targets the broader class of distributed aggregation jobs and is therefore more widely applicable.
SwitchML and ATP each target only one upper-layer application, distributed machine-learning training, and therefore make no additional design effort on the service framework. ATP merely uses multiple threads to accelerate packet processing: a centralized scheduler on each ATP worker node receives gradient tensors from the application layer, dispatches each tensor to one thread for sending, and tracks the total load on each thread to keep the threads load-balanced.
Because existing intra-network aggregation transport protocols are designed to be coupled to specific applications, the present invention designs a general intra-network aggregation transport protocol together with a matching application service framework. The framework places a persistent agent on each end host; the agent communicates with various distributed aggregation applications, specifically by interacting with per-application plug-ins, so that applications can submit multiple aggregation tasks concurrently and have them aggregated. The framework thereby provides multi-tenant, multi-job aggregation service to upper-layer users.
Disclosure of Invention
To address the problems in the background art, a service system for intra-network aggregation transmission comprises a receiving server and a sending server. The receiving server generates a plurality of aggregation tasks, each of which contains: a globally unique ID number, the number of sending servers, and, for each sending server, its ID number, its address, and information for acquiring key-value pair data. The receiving server prepares a receiving space and notifies the sending servers of it. The receiving server creates sending tasks and notifies each sending server of its sending task. Based on the sending task and the storage-space information, the sending server creates packets containing the key-value pair data to be aggregated. After receiving a packet from a sending server, the receiving server parses it and places the data into the storage space.
The invention also provides a service method for intra-network aggregation transmission, which comprises the following steps:
the receiving server has a main scheduling thread and a plurality of working threads; the main scheduling thread generates a plurality of aggregation tasks, each of which contains: a globally unique ID number, the number of sending servers, and, for each sending server, its ID number, its address, and information for acquiring key-value pair data;
the receiving server allocates a receiving cache for the aggregation task and notifies each sending server of the receiving cache's information;
the sending server creates a sending task based on the aggregation task and the receiving cache, the sending task containing: the ID number of the aggregation task, information about the sending server, and information about the receiving cache;
the receiving server has a main scheduling thread and a plurality of working threads, and so does the sending server; the main scheduling threads of the receiving and sending servers share a control channel, while the main scheduling thread of the receiving server shares an exclusive data channel with each of its working threads.
The invention provides a general implementation approach for distributed computing; its interaction flow is itself generic and can be applied to other multi-task scenarios with multiple senders and receivers.
Drawings
In order that the invention may be more readily understood, it will be described in more detail with reference to specific embodiments thereof that are illustrated in the accompanying drawings. These drawings depict only typical embodiments of the invention and are not therefore to be considered to limit the scope of the invention.
FIG. 1 is a flow chart of one embodiment of a service system of the present invention.
FIG. 2 is a schematic block diagram of an embodiment of a method for carrying out the present invention.
Fig. 3 is a schematic structural diagram of another embodiment of a service system embodying the present invention.
Fig. 4 is a diagram of one embodiment of a general packet data format used by the present invention.
Detailed Description
Embodiments of the present invention are described below with reference to the accompanying drawings so that those skilled in the art can better understand and practice the invention. The illustrated embodiments do not limit the invention, and technical features of the embodiments below may be combined with one another provided they do not conflict; like parts are denoted by like reference numerals.
First embodiment
As shown in fig. 1, the service system of the present invention includes a plurality of servers, each of which comprises an application interface unit, a main scheduling unit, and work units. The server of the present invention may be agent software implemented as a computer program for serving upper-layer applications. Application software communicates with, or is integrated into, the server through a plug-in. Multiple servers may form a server cluster that executes aggregation tasks.
The upper-layer application generates key-value pair data to be aggregated and passes it through the application interface unit to the main scheduling unit of the receiving server. Each aggregation task has a globally unique ID number task_id. The main scheduling unit creates a plurality of aggregation tasks, each of the form <task_id, num_snd_task, List<dstIP, snd_id, app_data>>, where num_snd_task is the number of sending servers in the aggregation task and the List records, for each sending server, its IP address dstIP, its global ID snd_id, and the key-value pair data app_data (which may instead be specific information for obtaining the key-value pair data from the upper-layer application). Each aggregation task is assigned to one of N work units, which processes it.
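The aggregation-task record described above can be sketched as a small data structure. This is an illustrative Python sketch, not the patent's implementation; all class and field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SenderEntry:
    dst_ip: str      # IP address dstIP of the sending server
    snd_id: int      # globally unique sender ID snd_id
    app_data: bytes  # key-value pair data, or info for fetching it

@dataclass
class AggregationTask:
    task_id: int                                      # globally unique task ID
    senders: List[SenderEntry] = field(default_factory=list)

    @property
    def num_snd_task(self) -> int:
        # number of sending servers participating in this task
        return len(self.senders)
```

A task for two senders would be built as `AggregationTask(1, [SenderEntry("10.0.0.2", 0, b""), SenderEntry("10.0.0.3", 1, b"")])`, giving `num_snd_task == 2`.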
In addition, the main scheduling unit of the receiving server prepares a receiving space <addr, size> for each aggregation task. The receiving space is assigned to one of the N work units, for example by computing the hash function hash(task_id) over the task ID. Thereafter the receiving server listens for packets carrying task_id and places them into the corresponding receiving space. Once the receiving space is prepared, the receiving server notifies each sending server: the main scheduling unit of the receiving server informs the main scheduling unit of every sending server recorded in List<dstIP, snd_id, app_data>, in the form of metadata <addr, size>, of the start address addr and size size of the storage space.
Then the main scheduling unit of the receiving server generates num_snd_task sending tasks from the aggregation task and notifies each sending server recorded in List<dstIP, snd_id, app_data> of its sending task <dstIP, snd_id, app_data>, which contains the sending server's IP address dstIP, the sending server's global ID snd_id, and the key-value pair data app_data.
On the sending side, after a sending server listed in List<dstIP, snd_id, app_data> receives its sending task <dstIP, snd_id, app_data>, its main scheduling unit creates a send task <task_id, snd_id, addr, size>, where snd_id is the sending server's global ID. The send task is assigned to one of the sending server's N work units, for example by computing hash(task_id) over the task ID; the sending and receiving servers may use the same hash function.
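The task-to-work-unit assignment by hash(task_id) might look like the following sketch. The patent does not fix the hash function; MD5 here is only an illustrative deterministic choice, and the essential property is that the sending and receiving servers apply the same function so a given task always lands on matching work-unit indices.

```python
import hashlib

N_WORKERS = 4  # illustrative number of work units per server

def assign_worker(task_id: int, n_workers: int = N_WORKERS) -> int:
    # Deterministic hash of the task ID: unlike Python's built-in hash(),
    # this gives the same result on every server and every run, so the
    # sender and receiver agree on which work unit handles the task.
    digest = hashlib.md5(str(task_id).encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_workers
```

Both sides call `assign_worker(task_id)` independently and obtain the same index, so no extra coordination message is needed for the assignment.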
Both the receiving server and the sending server comprise a main scheduling unit and a plurality of work units. The main scheduling unit has a control channel (e.g., an interface or channel) for exchanging control information with other servers or external devices (e.g., network cards). A work unit may be configured as a sender or as a receiver of an aggregation task. Between the main scheduling unit and each work unit there is a data channel (e.g., an interface or channel) for passing data; that is, the main scheduling unit may distribute aggregation tasks to the work units. In one embodiment, when the server communicates with the network card, each data channel is associated with a tx/rx transmit-receive pair on the network card for transferring data such as key-value pair data.
The sending server and the receiving server also contain network interface card (NIC) units, and each work unit actively polls the network card to exchange data with it. A work unit of the sending server takes tasks from its task queue, processes each send task in order, pushes the data of each send task into the network, and emits an end signal when a send task finishes. After receiving a packet, a work unit of the receiving server sorts it into the corresponding receiving buffer <addr, size>. The switch between the sending and receiving servers performs the aggregation operation according to the agreed transport protocol. The receiving server collects the end signal of every send task; once all send tasks have ended, it pulls the intermediate results from the switch, merges all results into memory, marks the aggregation task numbered task_id as completed, and places it into the completion queue.
Second embodiment
As shown in fig. 2, the second embodiment provides a service method.
S100: the receiving server generates an aggregation task with a globally unique ID number task_id. The aggregation task contains <task_id, num_snd_task, List<dstIP, snd_id, app_data>>, where num_snd_task is the number of sending servers and the List records, for each sending server, its IP address dstIP, its global ID snd_id, and the key-value pair data app_data (or specific information for obtaining it).
S101: the receiving server prepares a receiving storage space <addr, size>, where addr is the start address of the storage space and size is its size, and notifies the sending servers in the List in the form of metadata <addr, size>; the notification also carries the aggregation task ID number task_id.
S102: the receiving server creates num_snd_task sending tasks and notifies each sending server in the List of its sending task <dstIP, snd_id, app_data>, which contains the sending server's IP address dstIP, its global ID snd_id, and the key-value pair data app_data. The notification also carries the aggregation task ID number task_id.
S103: after receiving the sending task <dstIP, snd_id, app_data> and the metadata <addr, size>, the sending server creates a send task <task_id, snd_id, addr, size>, where snd_id is the sending server's global ID. The send task is assigned to one of the sending server's N work units, for example by computing hash(task_id); the sending and receiving servers may use the same hash function.
S104: the receiving server listens for packets carrying task_id; upon receiving data from a sending server, it parses the packet and places it into the corresponding storage space <addr, size>.
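Steps S100 to S104 can be summarized as a toy simulation of the handshake between the receiver and its senders. Everything concrete here (addresses, buffer sizes, message names) is invented for illustration and does not come from the patent.

```python
def run_handshake():
    """Simulate S100-S103: task creation, buffer advertisement,
    send-task notification, and local send-task construction."""
    log = []
    # S100: receiver creates an aggregation task (values are illustrative)
    task = {"task_id": 1, "senders": [("10.0.0.2", 0), ("10.0.0.3", 1)]}
    # S101: receiver prepares a receive buffer and advertises its metadata
    recv_buf = {"addr": 0x1000, "size": 4096}
    for ip, snd_id in task["senders"]:
        log.append(("NOTIFY_BUF", ip, task["task_id"], recv_buf["addr"]))
    # S102: receiver creates one send task per sender and notifies it
    for ip, snd_id in task["senders"]:
        log.append(("SEND_TASK", ip, task["task_id"], snd_id))
    # S103: each sender combines both notifications into its local send task
    send_tasks = [
        {"task_id": task["task_id"], "snd_id": snd_id,
         "addr": recv_buf["addr"], "size": recv_buf["size"]}
        for _, snd_id in task["senders"]
    ]
    return log, send_tasks
```

Running `run_handshake()` yields two buffer notifications and two send-task notifications, plus one local send task `<task_id, snd_id, addr, size>` per sender.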
Third embodiment
The servers in the first embodiment may form a distributed service system, as shown in fig. 3, where the service system includes a plurality of servers, and each server performs the functions described above.
A persistent agent runs on each server to serve upper-layer applications; applications integrate with the service system through plug-ins and can therefore submit several aggregation tasks simultaneously. Each agent consists of a main scheduling thread and a plurality of working threads. FIG. 3 illustrates the agent architecture, in which agent threads persistently execute multiple aggregation tasks, acting as senders or receivers within those tasks. In the implementation of the service system, one control channel is built across all scheduling threads, and N data channels are built between each scheduling thread and its N working threads, each data channel associated with a tx/rx transmit-receive pair on the network card. The control channel carries control information; the N data channels carry key-value pair data.
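The channel layout just described (one shared control channel across scheduling threads, one exclusive data channel per working thread) could be modeled in-process as below. This is a minimal sketch under assumed names; real data channels would be bound to NIC tx/rx queue pairs, which is only noted in comments here.

```python
class Agent:
    """Sketch of a per-server agent: one control channel for control
    messages, and one exclusive data channel per working thread."""

    def __init__(self, n_workers: int):
        self.control_channel = []  # shared: carries control information
        # one exclusive data channel per working thread; each would be
        # associated with a tx/rx transmit-receive pair on the NIC
        self.data_channels = [[] for _ in range(n_workers)]

    def send_control(self, msg):
        # e.g. SEND_TASK / NOTIFY_BUF notifications between schedulers
        self.control_channel.append(msg)

    def send_data(self, worker_idx: int, payload):
        # key-value pair data flows only over the worker's own channel
        self.data_channels[worker_idx].append(payload)
```

Combined with the hash-based assignment above, `send_data(assign_worker(task_id), payload)` would route a task's data to its dedicated channel.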
Initializing an aggregation task: the receiver (receiving server) side of the application initializes an aggregation task with a globally unique ID number task_id. The plug-in first submits a request <task_id, num_snd_task, List<dstIP, snd_id, app_data>> to its local scheduling thread through IPC (inter-process communication). num_snd_task is the number of senders (sending servers) in the task, and each sender is described in the List by its IP address, its sender ID, and specific information for acquiring key-value pair data from the upper-layer application.
Preparing the receiver (receiving server): the scheduling thread allocates a receiving buffer <addr, size> in memory for the task, assigns that buffer to one of the N working threads by a hash function (e.g., hash(task_id)), and then puts an aggregation task into the receive task queue. When a packet whose header carries task_id arrives, it is sorted into the corresponding receiving buffer. The receiver's scheduling thread then generates num_snd_task send tasks and notifies the scheduling thread of each sending server of its send task <task_id, snd_id, app_data> through the control channel.
Preparing a sender (sending server): the sender's scheduling thread accepts the send task and notifies the plug-in of the sender application, which listens on the sub-channel of the control channel keyed by task_id. The plug-in parses app_data, prepares the key-value pair data in memory, and notifies the sender's scheduling thread, in the form of metadata <addr, size>, that the data is ready and stored in memory starting at addr with length size.
Scheduling a sender (sending server): the sender's scheduling thread creates a send task with the message <task_id, snd_id, addr, size> and assigns it to the corresponding data channel using the same hash function hash(task_id) as the receiver. Each working thread maintains a task queue; send tasks are placed into the queue, and the working threads process them in FIFO (first-in, first-out) order.
Executing an aggregation task: each working thread actively polls the network card (the switch performs the aggregation) to exchange data with it. A sender working thread takes tasks from its task queue, processes each send task in order, pushes the data of each send task into the network, and sends a FIN signal when a send task finishes. A receiver working thread continuously receives packets and sorts them into the corresponding receiving buffers. The switch between sender and receiver performs a best-effort aggregation operation according to the general intra-network aggregation transport protocol. The receiver collects the FIN signal of every send task; once all send tasks have finished, it pulls the intermediate results from the switch, merges all results into memory, marks the aggregation task numbered task_id as completed, and places it into the completion queue.
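The FIN-collection logic described above might be tracked per aggregation task roughly as follows. This is a hedged sketch with assumed names; the interaction with the switch (pulling and merging intermediate results) is reduced to a comment.

```python
from collections import deque

class ReceiverState:
    """Tracks FIN signals for one aggregation task: once every sender
    has signalled FIN, the task is marked complete and enqueued."""

    def __init__(self, task_id: int, num_senders: int):
        self.task_id = task_id
        self.pending = set(range(num_senders))  # sender IDs still active
        self.completed = deque()                # completion queue

    def on_fin(self, snd_id: int) -> bool:
        self.pending.discard(snd_id)
        if not self.pending:
            # all send tasks finished: here the receiver would pull the
            # intermediate results from the switch and merge them into
            # memory before marking the task as completed
            self.completed.append(self.task_id)
            return True
        return False
```

With two senders, the first FIN leaves the task pending and the second one completes it and places task_id into the completion queue.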
Returning the result to the application: the plug-in of the receiver application listens on the sub-channel of the control channel keyed by task_id. Once an aggregation task completes, the thread notifies the plug-in of <addr, size> under the key task_id, and the plug-in returns the data in memory to the application.
The packet format adopted in the aggregation task is as shown in fig. 4, in which the header contains control information and the payload field contains key-value pairs to be aggregated.
Specifically, a task number field (task _ id field) stores a task number for aggregation. The transmitting node number field (snd _ id field) stores the transmitting node number that transmits the packet. The thread number field (thread _ id field) stores the specific thread number that sent the packet.
The packet type field (type field) stores the type of the packet, which includes: SYN, FIN, DATA, ACK, QUERY, RESET. SYN indicates start, FIN indicates end, DATA indicates DATA, ACK indicates acknowledgement, QUERY indicates QUERY, and RESET indicates RESET.
The packet sequence number field (sequence field) stores the sequence number of the packet sent by the thread. The payload field (payload field) contains a key-value pair, which may be multiple, to be aggregated.
The validity field (bitmap field) indicates which key-value pairs in the payload field are valid. In one embodiment, the bitmap field has the same number of bits as there are key-value pairs in the payload field, each bit indicating whether the corresponding key-value pair is valid (e.g., 1 means valid, i.e., processed). The payload field stores the data as key-value pairs; during aggregation, keys are compared, and the bitmap field of this general packet data format determines whether a value has been aggregated by the switch.
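A possible byte layout for the packet of fig. 4 is sketched below with Python's struct module. The field widths and numeric type codes are assumptions, since the patent does not fix them; the sketch only illustrates how the bitmap marks valid key-value pairs in the payload.

```python
import struct

# Assumed header layout (network byte order, widths are illustrative):
# task_id (u32), snd_id (u16), thread_id (u16), type (u8),
# sequence (u32), bitmap (u8, so at most 8 key-value pairs per packet)
HDR = struct.Struct("!IHHBIB")

# Assumed numeric codes for the six packet types named in the text
TYPE_SYN, TYPE_FIN, TYPE_DATA, TYPE_ACK, TYPE_QUERY, TYPE_RESET = range(6)

def pack_packet(task_id, snd_id, thread_id, ptype, seq, kv_pairs):
    # bit i of the bitmap set => the i-th key-value pair is valid
    bitmap = 0
    payload = b""
    for i, (key, value) in enumerate(kv_pairs):
        bitmap |= 1 << i
        payload += struct.pack("!II", key, value)  # (key, value) as u32 pairs
    return HDR.pack(task_id, snd_id, thread_id, ptype, seq, bitmap) + payload

def unpack_header(packet):
    # returns (task_id, snd_id, thread_id, type, sequence, bitmap)
    return HDR.unpack_from(packet, 0)
```

For a DATA packet with two key-value pairs, the bitmap comes out as 0b11, and the receiver can use it to decide which pairs the switch has already aggregated.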
The service system may communicate with a switch. The service system and the switch cooperatively obey a common transport protocol to execute each aggregation task. Because the service system provides a general architecture for distributed aggregation applications, many different applications can submit aggregation tasks to it simultaneously, and the service system processes them in a multiplexed fashion, improving the system's generality.
The service system performs six functions: initializing an aggregation task, preparing the receiver, preparing a sender, scheduling a sender, executing the aggregation task, and returning the result to the upper-layer application. Using this single interaction flow, it has executed aggregation for three kinds of distributed applications, namely big-data computing, distributed training, and high-performance computing, which verifies the generality of the service-system interface.
The embodiments described above are merely preferred specific embodiments of the present invention, and the present specification uses the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the present disclosure. General changes and substitutions by those skilled in the art within the technical scope of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A service system for intra-network aggregate transmissions, comprising: a receiving server and a sending server, wherein,
the receiving server generates a plurality of aggregation tasks, wherein each aggregation task comprises: a globally unique ID number, the number of sending servers, the ID number of each sending server, the address of each sending server, and information for acquiring key-value pair data;
a receiving server prepares a receiving space and notifies the receiving space to a transmitting server;
a receiving server creates a sending task and informs the sending server of the sending task;
the sending server creates a packet based on the sending task and the information of the storage space, wherein the packet contains the key-value pair data to be aggregated;
and after receiving the packet sent by the sending server, the receiving server parses the packet and places the data into the storage space.
2. The service system according to claim 1,
the sending server has a plurality of work units, each of which processes an aggregation task and has a dedicated receiving space.
3. The service system according to claim 2,
each working unit is associated with a transceiving pair with the network card for data transmission.
4. The service system according to claim 1,
the packet comprises control information and a payload; the payload contains key-value pairs converted from the original data to be transmitted; and the control information includes a validity field, which indicates whether the key-value pairs in the payload field have been aggregated.
5. The service system according to claim 4, wherein the control information further includes:
a task number field for storing a task number for aggregation;
a transmit end number field for storing a transmit end number to transmit the packet;
a thread number field for storing a thread number of a transmitting end that transmits the packet;
a packet sequence number field for storing a sequence number of a packet to be transmitted;
a type field for storing a type of the packet, including: start, end, data, acknowledge, query, and reset.
6. A service method for intra-network aggregate transmissions, comprising:
the receiving server has a main scheduling thread and a plurality of working threads; the main scheduling thread generates a plurality of aggregation tasks, each of which contains: a globally unique ID number, the number of sending servers, and, for each sending server, its ID number, its address, and information for acquiring key-value pair data;
the receiving server allocates a receiving cache for the aggregation task and notifies each sending server of the receiving cache's information;
the sending server creates a sending task based on the aggregation task and the receiving cache, the sending task containing: the ID number of the aggregation task, information about the sending server, and information about the receiving cache;
the receiving server has a main scheduling thread and a plurality of working threads, and so does the sending server; the main scheduling threads of the receiving and sending servers share a control channel, while the main scheduling thread of the receiving server shares an exclusive data channel with each of its working threads.
7. Service method according to claim 6,
each data channel is associated with a transceiving pair with the network card, and the data channels are used for transmitting key value pair data.
8. The service method according to claim 6,
the receiving server and the sending server use a hash of the ID of the aggregated task to assign a receive cache to a worker thread.
9. The service method according to claim 6,
the packet comprises control information and a payload; the payload contains key-value pairs converted from the original data to be transmitted; and the control information includes a validity field, which indicates whether the key-value pairs in the payload field have been aggregated.
10. The service method of claim 9, wherein the control information further comprises:
a task number field for storing a task number for aggregation;
a transmit end number field for storing a transmit end number to transmit the packet;
a thread number field for storing a thread number of a transmitting end that transmits the packet;
a packet sequence number field for storing a sequence number of a packet to be transmitted;
a type field for storing a type of the packet, including: start, end, data, acknowledge, query, and reset.
CN202210561540.1A 2022-05-23 2022-05-23 Service system and service method for intra-network aggregation transmission Pending CN115174501A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210561540.1A CN115174501A (en) 2022-05-23 2022-05-23 Service system and service method for intra-network aggregation transmission


Publications (1)

Publication Number Publication Date
CN115174501A 2022-10-11

Family

ID=83484079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210561540.1A Pending CN115174501A (en) 2022-05-23 2022-05-23 Service system and service method for intra-network aggregation transmission

Country Status (1)

Country Link
CN (1) CN115174501A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040233934A1 (en) * 2003-05-23 2004-11-25 Hooper Donald F. Controlling access to sections of instructions
US20060251109A1 (en) * 2005-04-05 2006-11-09 Shimon Muller Network system
CN105791460A (en) * 2016-03-03 2016-07-20 中国科学院信息工程研究所 DNS agent cache optimization method and system based on multi-dimension aggregation
US20190005155A1 (en) * 2017-06-30 2019-01-03 Kabushiki Kaisha Toshiba Visualization management device, data management device, data visualization system, visualization management method, and program product
CN109617792A (en) * 2019-01-17 2019-04-12 北京云中融信网络科技有限公司 Instant communicating system and broadcast message distribution method
CN111881165A (en) * 2020-07-15 2020-11-03 杭州安恒信息技术股份有限公司 Data aggregation method and device and computer readable storage medium
CN112860695A (en) * 2021-02-08 2021-05-28 北京百度网讯科技有限公司 Monitoring data query method, device, equipment, storage medium and program product
CN114281648A (en) * 2021-12-23 2022-04-05 北京奇艺世纪科技有限公司 Data acquisition method and device, electronic equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination