CN112087326A - Multi-instance dynamic deployment transceiving method and system

Info

Publication number: CN112087326A
Application number: CN202010857734.7A
Authority: CN (China)
Prior art keywords: ppi, message, psi, default, library
Legal status: Withdrawn
Other languages: Chinese (zh)
Inventors: 周芬林, 夏桂斌, 李涛
Current Assignee: Fiberhome Telecommunication Technologies Co Ltd
Original Assignee: Fiberhome Telecommunication Technologies Co Ltd
Application filed by: Fiberhome Telecommunication Technologies Co Ltd
Priority date / Filing date: 2020-08-24
Publication date: 2020-12-15

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08: Configuration management of networks or network elements
    • H04L 41/0803: Configuration setting
    • H04L 41/0813: Configuration setting characterised by the conditions triggering a change of settings
    • H04L 41/0816: Configuration setting characterised by the conditions triggering a change of settings, the condition being an adaptation, e.g. in response to network events
    • H04L 41/0893: Assignment of logical groups to network elements
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0805: Monitoring or testing based on specific metrics by checking availability
    • H04L 43/0817: Monitoring or testing based on specific metrics by checking availability by checking functioning
    • H04L 43/0876: Network utilisation, e.g. volume of load or congestion level
    • H04L 43/0888: Throughput

Abstract

The invention discloses a multi-instance dynamic deployment transceiving method and system in the technical field of data communication. The method avoids a single-point performance bottleneck, reduces the message delay caused by contention among multiple service instances under single-instance deployment, and improves message transceiving throughput.

Description

Multi-instance dynamic deployment transceiving method and system
Technical Field
The invention relates to the technical field of data communication, and in particular to a multi-instance dynamic deployment transceiving method and system.
Background
In the platform control plane of a router, the various protocol modules rely on a protocol message transceiving module to send and receive their messages, and the performance of this transceiving module largely determines the protocol efficiency of the whole device's control plane.
At present the transceiving module is deployed as a single instance. This creates a single-point performance bottleneck, limits throughput, cannot exploit a multi-core CPU or process messages concurrently, and suffers increased message delay from lock contention when the single instance serves multiple service instances.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a multi-instance dynamic deployment transceiving method and system that reduce the message delay caused by contention among multiple service instances under single-instance deployment and improve message transceiving throughput.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows. A multi-instance dynamic deployment transceiving method comprises the following steps:
deploying a monitoring module and a default PPI, and configuring the default PPI in its initial state to handle all types of protocol message transceiving between the PSI and the PPI libraries;
the monitoring module monitors the load state of every PPI; when the load state of the default PPI is detected to exceed a preset upper load limit, it creates a new PPI, configures the new PPI to take over transceiving of a set protocol message type from the default PPI, and notifies the PSI and the PPI libraries of the updated PPI configuration;
when the average load state of the PPIs is detected to be below a preset lower load limit, it removes a non-default PPI, configures the default PPI to handle transceiving of the set protocol message type that the removed PPI was responsible for, and notifies the PSI and the PPI libraries of the updated PPI configuration;
the PSI and each PPI library establish or delete the communication channels to the corresponding PPIs according to the updated configuration.
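As an illustration only, and not as part of the disclosed method, the following C sketch shows one way the monitoring decision described in the steps above could be implemented. The structure and function names (struct ppi, create_ppi, remove_ppi, notify_psi_and_libs) and the threshold values are assumptions of the sketch, not terms used by the patent.

#include <stddef.h>
#include <stdio.h>

#define MAX_PPI          8
#define LOAD_UPPER_LIMIT 80.0   /* preset upper load limit (illustrative) */
#define LOAD_LOWER_LIMIT 20.0   /* preset lower load limit (illustrative) */

struct ppi {
    int    id;        /* 0 denotes the default PPI                     */
    int    active;    /* 1 if this PPI instance is currently deployed  */
    double load;      /* sampled load state (CPU load / throughput)    */
    int    msg_type;  /* protocol message type handled; -1 = all types */
};

/* Hypothetical hooks: a real system would start or stop a PPI process
 * or thread here and push the updated configuration to the PSI and to
 * every PPI library. */
static void create_ppi(struct ppi *p, int id, int msg_type) {
    p->id = id; p->active = 1; p->load = 0.0; p->msg_type = msg_type;
    printf("create PPI%d for message type %d\n", id, msg_type);
}
static void remove_ppi(struct ppi *p) {
    printf("remove PPI%d; type %d falls back to the default PPI\n",
           p->id, p->msg_type);
    p->active = 0;
}
static void notify_psi_and_libs(const struct ppi *tbl, size_t n) {
    (void)tbl; (void)n;  /* would broadcast the updated PPI configuration */
}

/* One monitoring pass: create a PPI when the default PPI exceeds the
 * upper limit, remove the lowest-loaded non-default PPI when the
 * average load of all PPIs drops below the lower limit. */
static void monitor_pass(struct ppi *tbl, size_t n, int offload_type) {
    double sum = 0.0, lowest_load = 1e9;
    size_t active = 0, lowest = 0;        /* lowest-loaded non-default PPI */

    for (size_t i = 0; i < n; i++) {
        if (!tbl[i].active) continue;
        sum += tbl[i].load; active++;
        if (i > 0 && tbl[i].load < lowest_load) {
            lowest_load = tbl[i].load; lowest = i;
        }
    }

    if (tbl[0].load > LOAD_UPPER_LIMIT) {          /* default PPI overloaded */
        for (size_t i = 1; i < n; i++) {
            if (!tbl[i].active) {
                create_ppi(&tbl[i], (int)i, offload_type);
                notify_psi_and_libs(tbl, n);
                return;
            }
        }
    } else if (active > 1 && sum / active < LOAD_LOWER_LIMIT && lowest > 0) {
        remove_ppi(&tbl[lowest]);                  /* never removes the default */
        notify_psi_and_libs(tbl, n);
    }
}

int main(void) {
    struct ppi tbl[MAX_PPI] = {
        { .id = 0, .active = 1, .load = 90.0, .msg_type = -1 }
    };
    monitor_pass(tbl, MAX_PPI, 2);   /* default PPI overloaded: creates PPI1 */
    return 0;
}

In this sketch a single snapshot of the load states is evaluated; an actual monitoring module would sample CPU load and throughput periodically and would also perform the socket-information synchronization described later in the embodiment.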
On the basis of the above technical scheme, the method further comprises the following steps:
when the monitoring module detects that the load state of a non-default PPI exceeds the preset upper load limit, it creates a new PPI, configures the new PPI to process the set protocol message type of that non-default PPI in parallel with it, and notifies the PSI and the PPI libraries of the updated PPI configuration.
On the basis of the above technical scheme, when the average load state of the PPIs is detected to be below the preset lower load limit and a PPI is to be removed, the PPI with the lowest load state is removed.
On the basis of the above technical scheme, the method further comprises the following steps:
when the PSI receives an uplink message from the bottom-layer driver module, it identifies the message and distributes messages of different types to the corresponding PPIs;
each PPI receives the uplink messages sent by the PSI, identifies the socket type of each message, and forwards messages of different types to the corresponding PPI libraries according to the socket type.
On the basis of the above technical scheme, the method further comprises the following steps:
when a PPI library receives a downlink message from a protocol module, it identifies the socket type of the message and distributes messages of different types to the corresponding PPIs according to the socket type;
each PPI then sends the message to the PSI.
The invention also provides a multi-instance dynamic deployment transceiving system, which comprises a default PPI, PPI libraries in one-to-one correspondence with the protocol modules, a PSI and a monitoring module:
the default PPI is used to process, in the initial state, all types of protocol message transceiving between the PSI and the PPI libraries;
the PPI libraries are used to carry message transceiving between the multiple PPIs and the protocol modules;
the PSI is used to carry message transceiving between the multiple PPIs and the bottom-layer driver module;
the monitoring module is used to monitor the load state of every PPI; when the load state of the default PPI is detected to exceed the preset upper load limit, it creates a new PPI, configures the new PPI to take over transceiving of a set protocol message type from the default PPI, and notifies the PSI and the PPI libraries of the updated PPI configuration; when the average load state of the PPIs is detected to be below the preset lower load limit, it removes a non-default PPI, configures the default PPI to handle transceiving of the set protocol message type that the removed PPI was responsible for, and notifies the PSI and the PPI libraries of the updated PPI configuration;
the PSI and each PPI library are further configured to establish or delete the communication channels to the corresponding PPIs according to the updated configuration.
On the basis of the above technical scheme, the monitoring module is specifically configured to:
when it detects that the load state of a non-default PPI exceeds the preset upper load limit, create a new PPI, configure the new PPI to process the set protocol message type of that non-default PPI in parallel with it, and notify the PSI and the PPI libraries of the updated PPI configuration.
On the basis of the above technical scheme, when the monitoring module detects that the average load state of the PPIs is below the preset lower load limit and a PPI is to be removed, the PPI with the lowest load state is removed.
On the basis of the above technical solution, the PSI is specifically configured to:
when receiving an uplink message from the bottom-layer driver module, identify the message and distribute messages of different types to the corresponding PPIs;
when receiving a downlink message from a PPI, send the message to the bottom-layer driver module.
On the basis of the above technical solution, the PPI library is specifically configured to:
when receiving a downlink message from a protocol module, identify the socket type of the message and distribute messages of different types to the corresponding PPIs according to the socket type;
when receiving an uplink message from a PPI, send the message to the protocol module.
Compared with the prior art, the invention has the following advantages:
the invention adds a monitoring module that monitors the throughput and CPU load rate of the transceiving instances and dynamically deploys multiple PPIs according to that load. By dynamically adjusting the number of deployed PPIs and switching message transceiving between them, it achieves an optimal balance between packet transceiving performance and resource occupancy. A single-point performance bottleneck is avoided, the message delay caused by contention among multiple service instances under single-instance deployment is reduced, and message transceiving throughput is improved.
Drawings
Fig. 1 is a schematic diagram of dynamic deployment of a multi-instance dynamic deployment transceiving method in an embodiment of the present invention;
fig. 2 is a schematic diagram of a downlink message flow of a multi-instance dynamic deployment transceiving method in an embodiment of the present invention;
fig. 3 is a schematic diagram of an uplink message flow of a multi-instance dynamic deployment transceiving method in an embodiment of the present invention;
fig. 4 is a schematic static deployment diagram of a multi-instance dynamic deployment transceiving method in an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides a multi-instance dynamic deployment transceiving method, comprising the following steps:
deploying a monitoring module and a default PPI (Packet Processor Instance), and configuring the default PPI in its initial state to handle all types of protocol message transceiving between the PSI (Packet Splitter Instance) and the PPI libraries (PPI lib);
the monitoring module monitors the load state of every PPI; when the load state of the default PPI is detected to exceed a preset upper load limit, it creates a new PPI, configures the new PPI to take over transceiving of a set protocol message type so that the default PPI no longer processes that type, and notifies the PSI and the PPI libraries of the updated PPI configuration;
when the average load state of the PPIs is detected to be below a preset lower load limit, it removes a non-default PPI, configures the default PPI to handle transceiving of the set protocol message type that the removed PPI was responsible for, and notifies the PSI and the PPI libraries of the updated PPI configuration;
the PSI and each PPI library establish or delete the communication channels to the corresponding PPIs according to the updated configuration.
Referring to fig. 1, the labels 1 and 2 in the figure denote protocol types. The dotted lines show how messages are distributed in the default state: the default PPI (PPI1) handles all types of protocol message transceiving, so type 1 and type 2 protocol messages sent down by each PPI library all reach the PSI through PPI1, and type 1 and type 2 protocol messages reported up by the PSI are all sent to PPI1, which identifies each message and delivers it to the corresponding PPI library.
The solid lines in fig. 1 show the message distribution after a PPI is added. The newly added PPI2 is configured to process transceiving of type 2 protocol messages, so type 2 messages issued by the PPI libraries now reach the PSI through PPI2, while messages of the other type (type 1) are still issued to the PSI through PPI1. In the uplink direction the PSI identifies each packet and steers the different types to the corresponding PPIs: type 2 goes to PPI2, and the other type (type 1) still goes to PPI1.
As one implementation of the embodiment of the present invention, when the monitoring module detects that the load state of a non-default PPI exceeds the preset upper load limit, it creates a new PPI, configures the new PPI to process the set protocol message type of that non-default PPI in parallel with it, and notifies the PSI and the PPI libraries of the updated PPI configuration. When the new PPI and the non-default PPI process that message type in parallel, a flow-threshold method can be used: a flow threshold is set for the non-default PPI, traffic is steered to the non-default PPI until the threshold is reached, and to the new PPI once the threshold is exceeded. The specific flow threshold is configured according to actual requirements; a sketch of this selection logic is given below.
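The flow-threshold method mentioned above can be pictured with the following minimal C sketch; the identifiers and the threshold value are illustrative assumptions, not values taken from the patent.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical identifiers for two PPIs that handle the same set
 * message type in parallel after a scale-out event. */
enum { PPI_EXISTING = 1, PPI_NEW = 2 };

struct type_route {
    uint64_t flow_threshold;   /* packets/s at which traffic switches over */
    uint64_t current_rate;     /* measured rate on the existing PPI        */
};

/* Pick the PPI for one packet of the set type: below the threshold the
 * existing non-default PPI keeps the traffic, at or above it the new
 * PPI takes over, so both instances process the type in parallel. */
static int select_ppi(const struct type_route *r) {
    return (r->current_rate < r->flow_threshold) ? PPI_EXISTING : PPI_NEW;
}

int main(void) {
    struct type_route r = { .flow_threshold = 50000, .current_rate = 12000 };
    printf("light load -> PPI%d\n", select_ppi(&r));   /* existing PPI */
    r.current_rate = 80000;
    printf("heavy load -> PPI%d\n", select_ppi(&r));   /* new PPI      */
    return 0;
}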
In a preferred embodiment, when the average load state of the PPIs is detected to be below the preset lower load limit and a PPI is to be removed, the PPI with the lowest load state is removed.
Specifically, the PPIs provide message transceiving services to the protocol modules in the form of libraries (libs). A protocol module first creates a socket and then sends and receives packets on that socket. Sockets are distinguished by type, including TCP (Transmission Control Protocol), UDP (User Datagram Protocol), RAWIP (raw IP), MPLS (Multi-Protocol Label Switching) and LINK (link layer), and each socket handles only one type of packet; for example, an MPLS socket only sends and receives packets carrying an MPLS label. The packet type can therefore be determined from the socket type, and the packet can be handed to the PPI that processes that type.
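The socket-type-to-PPI mapping implied by this paragraph could be kept in a small table, as in the following sketch; the enumeration values and the helper ppi_for_socket_type are hypothetical names introduced here for illustration.

#include <stdio.h>

/* Socket types distinguished when a protocol module creates a socket;
 * each socket carries exactly one type of packet. */
enum sock_type { SOCK_TCP, SOCK_UDP, SOCK_RAWIP, SOCK_MPLS, SOCK_LINK,
                 SOCK_TYPE_MAX };

/* Per-type owner PPI, updated whenever the monitoring module deploys
 * or removes a PPI; 0 means the default PPI. */
static int ppi_of_type[SOCK_TYPE_MAX] = { 0, 0, 0, 0, 0 };

static int ppi_for_socket_type(enum sock_type t) {
    return (t < SOCK_TYPE_MAX) ? ppi_of_type[t] : 0;
}

int main(void) {
    ppi_of_type[SOCK_TCP] = 1;   /* e.g. a new PPI1 now owns TCP sockets */
    printf("TCP socket  -> PPI%d\n", ppi_for_socket_type(SOCK_TCP));
    printf("MPLS socket -> PPI%d\n", ppi_for_socket_type(SOCK_MPLS));
    return 0;
}

The table would be rewritten each time the monitoring module notifies the PPI libraries of an updated PPI configuration.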
A PPI receives protocol messages from the PPI libs and, after route lookup, outgoing-interface lookup and message encapsulation, sends them to the PSI, which forwards them to the bottom-layer driver in a unified manner. A PPI also receives uplink messages from the PSI, identifies them and hands them to the PPI lib, which delivers them to the protocol module. Parallel processing is achieved by binding the PPIs to different CPU cores, which improves message throughput.
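On Linux, one common way to bind each PPI to its own CPU core is pthread_setaffinity_np, as in the sketch below (compile with -pthread). The patent does not specify the binding mechanism, so the worker function and the core assignment here are purely illustrative.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Hypothetical PPI worker: in a real system this would run the
 * packet receive/transmit loop of one PPI instance. */
static void *ppi_worker(void *arg) {
    int core = *(int *)arg;
    printf("PPI worker running, pinned to CPU core %d\n", core);
    return NULL;
}

/* Start one PPI thread and pin it to a dedicated CPU core so several
 * PPIs can process packets concurrently on a multi-core CPU. */
static int start_ppi_on_core(pthread_t *tid, int *core) {
    cpu_set_t set;
    if (pthread_create(tid, NULL, ppi_worker, core) != 0)
        return -1;
    CPU_ZERO(&set);
    CPU_SET(*core, &set);
    return pthread_setaffinity_np(*tid, sizeof(set), &set);
}

int main(void) {
    pthread_t t1, t2;
    int core1 = 1, core2 = 2;
    start_ppi_on_core(&t1, &core1);   /* e.g. PPI1 on core 1 */
    start_ppi_on_core(&t2, &core2);   /* e.g. PPI2 on core 2 */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}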
The PSI identifies each received message, determining whether it is an MPLS message, a Layer 2 link message or an IP protocol message, and distributes the different types to the corresponding PPIs. It also receives messages from the PPIs and sends the protocol messages onto the link through the bottom-layer driver interface.
The MI (Monitor Instance) monitors the CPU load and message throughput of every PPI in real time; when it detects that the current load of the PPIs exceeds the limit, it brings up a new PPI, completes synchronization of the PPI socket information, and switches part of the traffic to the new PPI.
The embodiment of the invention also provides a multi-instance dynamic deployment transceiving system, which comprises a default PPI, PPI libraries in one-to-one correspondence with the protocol modules, a PSI and a monitoring module:
the default PPI is used to process, in the initial state, all types of protocol message transceiving between the PSI and the PPI libraries;
the PPI libraries are used to carry message transceiving between the multiple PPIs and the protocol modules;
the PSI is used to carry message transceiving between the multiple PPIs and the bottom-layer driver module;
the monitoring module is used to monitor the load state of every PPI; when the load state of the default PPI is detected to exceed the preset upper load limit, it creates a new PPI, configures the new PPI to take over transceiving of a set protocol message type from the default PPI, and notifies the PSI and the PPI libraries of the updated PPI configuration; when the average load state of the PPIs is detected to be below the preset lower load limit, it removes a non-default PPI, configures the default PPI to handle transceiving of the set protocol message type that the removed PPI was responsible for, and notifies the PSI and the PPI libraries of the updated PPI configuration;
the PSI and each PPI library are further configured to establish or delete the communication channels to the corresponding PPIs according to the updated configuration.
As shown in fig. 1, a DMPP (Dynamic Multi-instance Packet Processor) system is composed of the MI, the PPI libs, the PPIs and the PSI. The details are as follows.
Dynamic deployment suits scenarios where protocol traffic varies greatly. For example, BGP creates 10 neighbors and each neighbor creates a socket; when the neighbors first come up, each channel carries only part of the routes and there are few protocol messages. As the scenario later changes, each neighbor advertises 1,000,000 routes, so the number of protocol messages grows sharply while the number of neighbors stays the same, and the PPI becomes a processing bottleneck.
In dynamic deployment mode, one MI and one default PPI are deployed. The MI periodically monitors the CPU load rate and protocol message throughput of all PPIs, and the default PPI processes all types of protocol messages. When the MI detects that the average CPU load rate and protocol message throughput of the PPIs exceed a certain threshold, it starts a dynamic PPI deployment procedure: it brings up a new PPI and synchronizes the socket information tables of the other PPIs to it.
The new PPI is deployed to process a specific message type (for example, TCP-based messages). Once all PPI libs and the PSI sense the deployment-completion event, they automatically switch the corresponding TCP traffic from the default PPI to the new PPI, achieving load balancing; other messages are still transceived through the default PPI.
When the MI detects that the average CPU load and protocol throughput of the PPIs fall below a certain threshold, it dynamically removes a PPI; after sensing the removal-completion event, the PPI libs and the PSI automatically switch the corresponding traffic back to the default PPI, reducing resource consumption.
Referring to fig. 2, the downlink message flow of the multi-instance dynamic deployment transceiving method is as follows:
the protocol module issues a data message;
the PPI lib receives the protocol module's packet;
the PPI lib obtains the socket type of the packet and looks up the PPI corresponding to that socket type;
if a PPI corresponding to the socket type exists, the packet is sent to that PPI; if not, it is sent to the default PPI;
the PPI processes the message and sends it to the bottom-layer driver through the PSI.
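The downlink dispatch of fig. 2 can be summarized by a small helper like the one below; the channel table, the send_to_ppi hook and the PPI numbering are assumptions made for this sketch, with the fallback-to-default rule following the steps above.

#include <stdio.h>

enum sock_type { SOCK_TCP, SOCK_UDP, SOCK_RAWIP, SOCK_MPLS, SOCK_LINK,
                 SOCK_TYPE_MAX };

#define PPI_DEFAULT 0
#define PPI_NONE   (-1)

/* Channel table kept inside each PPI library: entry i is the PPI that
 * currently handles socket type i, or PPI_NONE if no dedicated PPI
 * has been deployed for that type. */
static int channel_of_type[SOCK_TYPE_MAX] =
    { PPI_NONE, PPI_NONE, PPI_NONE, PPI_NONE, PPI_NONE };

/* Hypothetical transmit hook towards a PPI instance. */
static void send_to_ppi(int ppi, const void *msg, int len) {
    (void)msg;
    printf("downlink message (%d bytes) sent to PPI%d\n", len, ppi);
}

/* Downlink path: the PPI lib looks at the socket type of the message;
 * if a dedicated PPI exists for that type it is used, otherwise the
 * message goes to the default PPI. */
static void ppi_lib_send(enum sock_type t, const void *msg, int len) {
    int ppi = (t < SOCK_TYPE_MAX) ? channel_of_type[t] : PPI_NONE;
    send_to_ppi(ppi == PPI_NONE ? PPI_DEFAULT : ppi, msg, len);
}

int main(void) {
    char bgp_update[64] = { 0 };
    ppi_lib_send(SOCK_TCP, bgp_update, sizeof(bgp_update));  /* default PPI */
    channel_of_type[SOCK_TCP] = 2;          /* PPI2 later takes over TCP    */
    ppi_lib_send(SOCK_TCP, bgp_update, sizeof(bgp_update));  /* PPI2        */
    return 0;
}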
Referring to fig. 3, the uplink message flow of the multi-instance dynamic deployment transceiving method is as follows:
the PSI receives a packet from the bottom-layer driver;
the PSI parses the message to obtain its type, i.e. identifies whether it is an MPLS message, a Layer 2 link message, an IP protocol message, etc.;
if a PPI corresponding to the message type exists, the message is sent to that PPI; if not, it is sent to the default PPI;
the PPI processes the message, delivers it to the corresponding PPI lib, and the PPI lib sends it to the protocol module.
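For the classification step of the uplink flow in fig. 3, a sketch of the parser might look as follows. The EtherType values (0x0800 for IPv4, 0x8847 for MPLS unicast) and the IP protocol number 6 for TCP are standard; the header offsets assume an untagged Ethernet frame carrying IPv4, and everything else is illustrative.

#include <stdint.h>
#include <stdio.h>

/* Message classes recognised by the PSI on the uplink path. */
enum msg_class { MSG_MPLS, MSG_L2_LINK, MSG_IP_TCP, MSG_IP_OTHER };

#define ETHERTYPE_IP    0x0800
#define ETHERTYPE_MPLS  0x8847
#define IPPROTO_TCP_NUM 6

/* Classify a raw Ethernet frame: MPLS label, plain Layer 2 link
 * message, or IP (further split into TCP and other IP protocols). */
static enum msg_class classify(const uint8_t *frame, int len) {
    if (len < 14) return MSG_L2_LINK;
    uint16_t ethertype = (uint16_t)(frame[12] << 8 | frame[13]);

    if (ethertype == ETHERTYPE_MPLS) return MSG_MPLS;
    if (ethertype != ETHERTYPE_IP)   return MSG_L2_LINK;
    if (len < 14 + 20)               return MSG_IP_OTHER;
    return (frame[14 + 9] == IPPROTO_TCP_NUM) ? MSG_IP_TCP : MSG_IP_OTHER;
}

int main(void) {
    uint8_t frame[64] = { 0 };
    frame[12] = 0x08; frame[13] = 0x00;   /* EtherType = IPv4  */
    frame[14 + 9] = IPPROTO_TCP_NUM;      /* IP protocol = TCP */
    printf("class = %d (2 means IP/TCP)\n", classify(frame, sizeof(frame)));
    return 0;
}

After classification, the PSI would send the frame to the PPI registered for that class, or to the default PPI when no dedicated PPI exists.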
Static deployment can be used in scenarios where the protocol traffic does not vary much and the message traffic under each socket is roughly balanced. In a static deployment scenario, the MI need not be deployed, and the number and type of PPI instances are fixed before startup.
As shown in fig. 4, two PPIs are statically deployed: PPI1 processes TCP messages and PPI2 processes other IP messages.
The BGP protocol module only needs the TCP service, so it creates only a TCP-type socket, which sends and receives TCP messages;
the PPI lib corresponding to the BGP protocol module therefore establishes a message channel only with PPI1. The OSPF protocol module only needs raw IP messages, so it creates only a raw-IP-type socket, which sends and receives raw IP messages, and the PPI lib corresponding to the OSPF protocol module establishes a message channel only with PPI2.
The PSI parses each received uplink message down to the Layer 4 header to determine whether it is TCP; if it is TCP, it is sent to PPI1, and if it is another IP message, it is sent to PPI2 for processing.
In a multi-core CPU scenario, PPI1 and PPI2 are bound to different CPU cores, and concurrency improves message transceiving efficiency.
Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements all or part of method steps of a multi-instance dynamic deployment transceiving method.
All or part of the flow of the multi-instance dynamic deployment transceiving method of the present invention may also be completed by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor it implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions the computer-readable medium does not include electrical carrier signals and telecommunications signals.
Based on the same inventive concept, an embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program running on the processor, and the processor implements all or part of the method steps in the multi-instance dynamic deployment transceiving method when executing the computer program.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. The general-purpose processor may be a microprocessor or any conventional processor; the processor is the control center of the computer device, connecting the various parts of the whole computer device through various interfaces and lines.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and by invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the device (such as audio data, video data, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, server, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), servers and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A multi-instance dynamic deployment transceiving method, characterized by comprising the following steps:
deploying a monitoring module and a default PPI, and configuring the default PPI in its initial state to handle all types of protocol message transceiving between the PSI and the PPI libraries;
the monitoring module monitors the load state of every PPI; when the load state of the default PPI is detected to exceed a preset upper load limit, it creates a new PPI, configures the new PPI to take over transceiving of a set protocol message type from the default PPI, and notifies the PSI and the PPI libraries of the updated PPI configuration;
when the average load state of the PPIs is detected to be below a preset lower load limit, it removes a non-default PPI, configures the default PPI to handle transceiving of the set protocol message type that the removed PPI was responsible for, and notifies the PSI and the PPI libraries of the updated PPI configuration;
the PSI and each PPI library establish or delete the communication channels to the corresponding PPIs according to the updated configuration.
2. The method of claim 1, further comprising the steps of:
when the monitoring module detects that the load state of a non-default PPI exceeds the preset upper load limit, it creates a new PPI, configures the new PPI to process the set protocol message type of that non-default PPI in parallel with it, and notifies the PSI and the PPI libraries of the updated PPI configuration.
3. The method of claim 1, wherein, when the average load state of the PPIs is detected to be below the preset lower load limit and a PPI is to be removed, the PPI with the lowest load state is removed.
4. The method of claim 1, further comprising the steps of:
when the PSI receives an uplink message from the bottom-layer driver module, it identifies the message and distributes messages of different types to the corresponding PPIs;
each PPI receives the uplink messages sent by the PSI, identifies the socket type of each message, and forwards messages of different types to the corresponding PPI libraries according to the socket type.
5. The method of claim 1, further comprising the steps of:
when a PPI library receives a downlink message from a protocol module, it identifies the socket type of the message and distributes messages of different types to the corresponding PPIs according to the socket type;
each PPI then sends the message to the PSI.
6. A multi-instance dynamic deployment transceiving system, characterized by comprising a default PPI, PPI libraries in one-to-one correspondence with the protocol modules, a PSI and a monitoring module:
the default PPI is used to process, in the initial state, all types of protocol message transceiving between the PSI and the PPI libraries;
the PPI libraries are used to carry message transceiving between the multiple PPIs and the protocol modules;
the PSI is used to carry message transceiving between the multiple PPIs and the bottom-layer driver module;
the monitoring module is used to monitor the load state of every PPI; when the load state of the default PPI is detected to exceed the preset upper load limit, it creates a new PPI, configures the new PPI to take over transceiving of a set protocol message type from the default PPI, and notifies the PSI and the PPI libraries of the updated PPI configuration; when the average load state of the PPIs is detected to be below the preset lower load limit, it removes a non-default PPI, configures the default PPI to handle transceiving of the set protocol message type that the removed PPI was responsible for, and notifies the PSI and the PPI libraries of the updated PPI configuration;
the PSI and each PPI library are further configured to establish or delete the communication channels to the corresponding PPIs according to the updated configuration.
7. The system of claim 6, wherein the monitoring module is specifically configured to:
when it detects that the load state of a non-default PPI exceeds the preset upper load limit, create a new PPI, configure the new PPI to process the set protocol message type of that non-default PPI in parallel with it, and notify the PSI and the PPI libraries of the updated PPI configuration.
8. The system of claim 6, wherein, when the monitoring module detects that the average load state of the PPIs is below the preset lower load limit and a PPI is to be removed, the PPI with the lowest load state is removed.
9. The system of claim 6, wherein the PSI is specifically configured to:
when receiving an uplink message from the bottom-layer driver module, identify the message and distribute messages of different types to the corresponding PPIs;
when receiving a downlink message from a PPI, send the message to the bottom-layer driver module.
10. The system of claim 6, wherein the PPI library is specifically configured to:
when receiving a downlink message from a protocol module, identify the socket type of the message and distribute messages of different types to the corresponding PPIs according to the socket type;
when receiving an uplink message from a PPI, send the message to the protocol module.
CN202010857734.7A (priority date 2020-08-24, filing date 2020-08-24): Multi-instance dynamic deployment transceiving method and system. Withdrawn; published as CN112087326A.

Priority Applications (1)

Application Number: CN202010857734.7A
Priority Date / Filing Date: 2020-08-24
Title: Multi-instance dynamic deployment transceiving method and system (published as CN112087326A)

Applications Claiming Priority (1)

Application Number: CN202010857734.7A
Priority Date / Filing Date: 2020-08-24
Title: Multi-instance dynamic deployment transceiving method and system (published as CN112087326A)

Publications (1)

Publication Number: CN112087326A
Publication Date: 2020-12-15

Family

ID=73727998

Family Applications (1)

Application Number: CN202010857734.7A (Withdrawn; published as CN112087326A)
Title: Multi-instance dynamic deployment transceiving method and system

Country Status (1)

Country: CN
Publication: CN112087326A

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102043625A (en) * 2010-12-22 2011-05-04 中国农业银行股份有限公司 Workflow operation method and system
CN102185768A (en) * 2011-04-29 2011-09-14 华为数字技术有限公司 Configuration business deploying method and device
CN102148713A (en) * 2011-05-11 2011-08-10 烽火通信科技股份有限公司 Linear protection switching method applicable to large-capacity PTN (packet transport network) device
CN103782270A (en) * 2013-10-28 2014-05-07 华为技术有限公司 Method for managing stream processing system, and related apparatus and system
CN103634394A (en) * 2013-11-28 2014-03-12 中国科学院信息工程研究所 Data flow processing-oriented elastic expandable resource managing method and system
CN104270418A (en) * 2014-09-15 2015-01-07 中国人民解放军理工大学 Cloud agent appointment allocation method orientated to user demanded Deadlines
CN106445512A (en) * 2016-09-12 2017-02-22 浪潮软件股份有限公司 Method for realizing dynamic retractility of operating environment
CN107094119A (en) * 2017-07-07 2017-08-25 广州市品高软件股份有限公司 A kind of control method for equalizing load and system based on cloud computing and SDN
CN107704578A (en) * 2017-09-30 2018-02-16 桂林电子科技大学 A kind of figure matching constraint compared towards PPI networks solves notation method

Similar Documents

Publication Number and Title
CN103532867A (en) Acceleration transmission method and system for network data
US10129722B2 (en) Service processing method and network device
CN108390954B (en) Message transmission method and device
CN106878072B (en) Message transmission method and device
WO2013108676A1 (en) Multiple gateway device, multiple line communication system, multiple line communication method and program
US9661550B2 (en) Communication apparatus, communication method, and communication system
US10178017B2 (en) Method and control node for handling data packets
CN112491944A (en) Edge application discovery method and device, and edge application service support method and device
CN101222347A (en) Method and equipment for user acquiring network data
CN102123105A (en) Method and equipment for switching between standard VRRP (Virtual Router Redundancy Protocol) and load balancing VRRP
CN112751897A (en) Load balancing method, device, medium and equipment
US20230283492A1 (en) Traffic charging method, network device and storage medium
CN106059934B (en) Routing information processing method and device
CN113347158A (en) Streaming media data receiving and transmitting method and device and electronic equipment
CN107148035B (en) Frequency band selection method and device and wireless equipment
CN113132227A (en) Method, device, computer equipment and storage medium for updating routing information
CN104780165A (en) Security verification method and equipment for incoming label of message
CN112087326A (en) Multi-instance dynamic deployment transceiving method and system
CN109450692B (en) Network networking method, system and terminal equipment
CN106202084A (en) Date storage method and data storage device
CN109963312A (en) A kind of method for switching network, system, link switch equipment and storage medium
CN101820391A (en) Route forwarding method used for IP network and network equipment
CN105227924B (en) A kind of rete mirabile dispatching method of video monitoring platform Media Stream
CN114430390B (en) Method and device for acquiring cross-domain link
CN111327524A (en) Flow forwarding method and system, SDN controller and computer readable storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 2020-12-15)