CN112769639B - Method and device for issuing configuration information in parallel

Method and device for issuing configuration information in parallel

Info

Publication number
CN112769639B
CN112769639B (application CN202011529831.XA)
Authority
CN
China
Prior art keywords
configuration information
resource sub-queue
resource
data connection
network equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011529831.XA
Other languages
Chinese (zh)
Other versions
CN112769639A (en)
Inventor
胡有福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou DPTech Technologies Co Ltd
Original Assignee
Hangzhou DPTech Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou DPTech Technologies Co Ltd
Priority to CN202011529831.XA
Publication of CN112769639A
Application granted
Publication of CN112769639B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/12: Network monitoring probes
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/02: Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L 63/0227: Filtering policies
    • H04L 63/0236: Filtering by address, protocol, port number or service, e.g. IP-address or URL
    • H04L 63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1408: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a method and a device for issuing configuration information in parallel. The method includes: obtaining configuration information cached in a resource queue, classifying it, and caching each class of configuration information obtained by the classification into a different resource sub-queue; establishing a data connection between each resource sub-queue and the network device, so that each class of configuration information corresponds to one data connection; and issuing the configuration information to the network device in parallel through the corresponding data connections, disconnecting each data connection once its class has been issued. With this technical solution, configuration information can be issued in parallel while the number of data connections that have to be established is reduced, which improves configuration efficiency.

Description

Method and device for issuing configuration information in parallel
Technical Field
The present application relates to the field of communications, and in particular to a method and an apparatus for issuing configuration information in parallel.
Background
In the related art, configurations are generally issued to a network device serially, one piece of configuration information at a time: a data connection must be established before each configuration is issued, the connection is closed after that configuration has been issued, and a new connection is established for the next configuration. Issuing configurations serially is inefficient, and frequently creating and closing data connections consumes device resources.
Disclosure of Invention
In view of this, the present application provides a method and an apparatus for issuing configuration information in parallel.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of the present application, a method for issuing configuration information in parallel is provided, where the method includes:
obtaining the configuration information cached in the resource queue, classifying the configuration information, and caching each type of configuration information obtained by classification into different resource sub-queues respectively;
establishing data connection between each resource sub-queue and network equipment, so that each type of configuration information corresponds to one data connection;
and parallelly issuing the configuration information to network equipment through corresponding data connection, and disconnecting the corresponding data connection after the issuing is completed.
According to a second aspect of the present application, an apparatus for issuing configuration information in parallel is provided, including:
a classification unit, configured to obtain configuration information cached in a resource queue, classify the configuration information, and cache each class of configuration information obtained by the classification into a different resource sub-queue;
a connection unit, configured to establish a data connection between each resource sub-queue and a network device, so that each class of configuration information corresponds to one data connection;
and an issuing unit, configured to issue the configuration information to the network device in parallel through the corresponding data connections and to disconnect each data connection after its issuing is completed.
According to a third aspect of the present application, there is provided an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method as described in the embodiments of the first aspect above by executing the executable instructions.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium having computer instructions stored thereon which, when executed by a processor, implement the steps of the method described in the embodiments of the first aspect above.
According to the above technical solutions, the configuration information cached in the resource queue is classified and each class is cached in a different resource sub-queue, so that each class can be issued to the network device over a single data connection, which is closed once that class has been issued. A data connection therefore does not have to be established for every individual configuration, avoiding the device-resource consumption caused by frequently establishing and closing connections. At the same time, the configuration information is issued to the network device in parallel over the corresponding connections, so that multiple configurations can be delivered simultaneously, the device's processing capacity is fully utilized, and configuration issuing efficiency is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flowchart of a method for issuing configuration information in parallel according to an exemplary embodiment of the present application;
Fig. 2 is a schematic diagram of a network architecture corresponding to a method for issuing configuration information in parallel according to an embodiment of the present application;
Fig. 3 is a flowchart of a specific implementation of a method for issuing configuration information in parallel according to an exemplary embodiment of the present application;
Fig. 4 is a schematic flowchart of a method for issuing configuration information in parallel according to an exemplary embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device to which a method for issuing configuration information in parallel is applied, according to an exemplary embodiment of the present application;
Fig. 6 is a block diagram of an apparatus for issuing configuration information in parallel according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings, in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the application, as recited in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Next, the embodiments of the present application will be described in detail.
Fig. 1 is a flowchart of a method for issuing configuration information in parallel according to an exemplary embodiment of the present application. As shown in Fig. 1, the method may include the following steps:
step 102: and obtaining the configuration information cached in the resource queue, classifying the configuration information, and caching each type of configuration information obtained by classification into different resource sub-queues respectively.
In an embodiment, when multiple pieces of configuration information are received and need to be issued to the same network device, the configuration information may be cached in a resource queue. The network device may be any device that can be connected to a network, such as a server, a gateway, a router, or a switch, which is not limited in this application. The resource queue is a message queue with a first-in-first-out data structure, so the cached configuration information is ordered by the time at which it was received. The configuration information is classified according to the router-ID carried in the configuration information, or according to another ID field derived from the router-ID, which is not limited in this application. When classification is by router-ID, all configuration information within a class shares the same router-ID, and each class is cached in its own resource sub-queue. The resource sub-queues are also first-in-first-out message queues, and since the configuration information in the resource queue is already ordered by receiving time, the configuration information entering each sub-queue both shares a router-ID and remains ordered by receiving time. By adopting such a data structure, e.g. a message queue, each piece of configuration information keeps its temporal order, so the configuration is issued without disorder. For example, if the first piece of configuration information moves the network device from state 1 to state 2 and the second piece moves it from state 2 to state 3, a first-in-first-out structure keeps the first piece ahead of the second; the second piece is therefore never issued first, which would otherwise leave the device unable to enter state 2 and consequently unable to be configured into state 3. This guarantees stable issuing of multiple configurations.
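As a rough sketch of this classification step, the following Python fragment drains a FIFO resource queue into per-router-ID resource sub-queues; the field name router_id, the dictionary layout, and the use of Python's queue.Queue are illustrative assumptions rather than the patent's implementation:

```python
import queue
from collections import OrderedDict

def classify_by_router_id(resource_queue):
    """Drain the resource queue and split its configuration items into
    per-router-ID resource sub-queues, preserving arrival order within
    each class (both queue types are FIFO)."""
    sub_queues = OrderedDict()                 # class key -> resource sub-queue
    while not resource_queue.empty():
        config = resource_queue.get()          # items come out in receiving order
        key = config["router_id"]              # classification key; any ID derived from it works too
        sub_queues.setdefault(key, queue.Queue()).put(config)
    return sub_queues

if __name__ == "__main__":
    rq = queue.Queue()
    for item in ({"router_id": "R1", "cmd": "state1->state2"},
                 {"router_id": "R2", "cmd": "enable-probe"},
                 {"router_id": "R1", "cmd": "state2->state3"}):
        rq.put(item)
    for rid, sq in classify_by_router_id(rq).items():
        print(rid, [sq.get()["cmd"] for _ in range(sq.qsize())])
```

Running this prints the two R1 commands in their original order, illustrating how the FIFO sub-queue preserves the state 1 to state 2 to state 3 sequence.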
Step 104: create a data connection between each resource sub-queue and the network device, so that each class of configuration information corresponds to one data connection.
In an embodiment, a data connection is created between each resource sub-queue and the network device. The data connection may be a TCP (Transmission Control Protocol) connection or a UDP (User Datagram Protocol) connection, which is not limited in this application. Because the configuration information was classified in step 102, when classification is by router-ID the configuration information in each resource sub-queue corresponds to the same router-ID and can therefore be carried over the same data connection. Only one data connection needs to be established for issuing a whole class of configuration information; it is no longer necessary to establish a connection for each piece of configuration information and re-establish a new one for the next piece, which avoids the consumption of device resources caused by frequently establishing and closing data connections.
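A minimal sketch of the "one data connection per class" idea, assuming TCP as the transport; the device address, port, and timeout below are placeholders:

```python
import socket

def open_class_connection(device_host: str, device_port: int) -> socket.socket:
    """Open the single TCP connection that will carry every configuration
    item of one class (for example, one router-ID)."""
    return socket.create_connection((device_host, device_port), timeout=5)

# One connection per sub-queue (per class), not one per configuration item.
# The host and port values here are placeholders:
# connections = {rid: open_class_connection("192.0.2.1", 9000) for rid in sub_queues}
```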
Step 106: issue the configuration information to the network device in parallel through the corresponding data connections, and disconnect each data connection after the issuing is completed.
In one embodiment, the configuration information is issued to the network device in parallel through the corresponding data connections. In the preceding steps the configuration information was classified and cached in different resource sub-queues, one class per sub-queue, and each class corresponds to one data connection. Each resource sub-queue can therefore issue its configuration information to the network device in parallel over its own data connection, and the network device performs configuration using the information it receives. Once any class of configuration information has been fully issued, no other class can use that data connection for transmission, so there is no need to keep it open: the connection is disconnected and the data resources it occupied are released, avoiding the waste of memory resources caused by keeping too many useless data connections. In this embodiment, issuing the configuration information in parallel over the corresponding data connections makes full use of the concurrent processing capability of the network device and improves the issuing efficiency of the configuration information.
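The sketch below shows one way to issue each class over its own connection in parallel and disconnect when the class is done, assuming TCP transport and a newline-delimited JSON wire format (both assumptions, not the patent's protocol):

```python
import json
import socket
import threading

def issue_sub_queue(sub_queue, device_host, device_port):
    """Send every configuration item of one class over a single TCP
    connection, then disconnect once the whole class has been issued."""
    conn = socket.create_connection((device_host, device_port), timeout=5)
    try:
        while not sub_queue.empty():
            config = sub_queue.get()
            conn.sendall(json.dumps(config).encode() + b"\n")  # wire format is illustrative
    finally:
        conn.close()  # release the connection as soon as this class is done

def issue_all(sub_queues, device_host, device_port):
    """One thread per sub-queue, so the classes are issued in parallel."""
    threads = [threading.Thread(target=issue_sub_queue, args=(sq, device_host, device_port))
               for sq in sub_queues.values()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```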
In an embodiment, the configuration information is issued to the network device in parallel through the corresponding data connections, so that the network device processes the configuration information with a processing function and then performs configuration according to the processed result. The processing function is either a function carried by each piece of configuration information or a function cached on the network device. When the network device is configured according to configuration information, the data contained in that information has to be processed first. Each piece of configuration information may carry the processing function corresponding to its data; when the network device receives configuration information carrying a processing function, it uses that function to process the data and then configures itself with the processed result. Alternatively, the network device can maintain different processing functions itself: on receiving configuration information over the corresponding data connection, it selects the maintained processing function that matches the data contained in the configuration information, processes the data, and then performs the configuration with the processed result.
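A device-side dispatch sketch under the assumption that carried processing functions arrive as named callables and cached ones live in a local registry; all handler names and data shapes here are invented for illustration:

```python
# Device-side sketch with made-up handler names: prefer the processing routine
# named by the configuration item itself, otherwise fall back to a routine
# cached on the network device for that kind of configuration.
CACHED_HANDLERS = {
    "acl":  lambda data: {"kind": "acl",  "rules": list(data)},
    "vlan": lambda data: {"kind": "vlan", "ids": sorted(data)},
}

def process_config(config: dict, carried_handlers: dict) -> dict:
    """carried_handlers maps handler names shipped with the configuration
    to callables; CACHED_HANDLERS stands in for functions kept on the device."""
    handler = carried_handlers.get(config.get("handler"))
    if handler is None:
        handler = CACHED_HANDLERS[config["kind"]]
    return handler(config["data"])

# Example: a config that names no handler falls back to the cached one.
print(process_config({"kind": "vlan", "data": [30, 10, 20]}, carried_handlers={}))
```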
In an embodiment, the method further includes: pre-creating a first preset number of resource sub-queues. The preset number can be set according to the actual situation, which is not limited in this application. When the number of classes of configuration information is larger than the number of pre-created resource sub-queues, the difference between the two is determined, the same number of temporary resource sub-queues as the difference are created, and the configuration information of each remaining class is cached in one temporary resource sub-queue; a temporary resource sub-queue is released once the configuration information cached in it has been issued. For example, if 50 resource sub-queues are created in advance and there are 55 classes of configuration information, the difference between 55 and 50 is 5, so 5 temporary resource sub-queues are created, one class of configuration information is cached in each of them, a data connection is created for each class, and each temporary sub-queue is released after its configuration information has been issued over the corresponding connection. In this implementation, the resource sub-queues are created in advance according to a preset number that can be determined from engineering experience; in most cases the preset number of sub-queues meets the actual demand, and in special cases the remaining classes of configuration information are handled by temporary sub-queues, so configuration issuing never fails because the number of resource sub-queues is insufficient. Meanwhile, pre-creating the resource sub-queues saves the time of creating them in real time according to the classification result, which improves configuration issuing efficiency.
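A sketch of the pre-created/temporary sub-queue arithmetic, with the first preset number fixed at 50 as in the example above (the function name and return shape are illustrative):

```python
import queue

PRE_CREATED = 50  # the "first preset number", an engineering-experience value

def allocate_sub_queues(num_classes: int):
    """Return (pre-created sub-queues, temporary sub-queues). Temporary queues
    exist only when there are more classes than pre-created queues and are
    meant to be released once their configuration has been issued."""
    permanent = [queue.Queue() for _ in range(PRE_CREATED)]
    extra = max(0, num_classes - PRE_CREATED)   # e.g. 55 classes -> 5 temporary queues
    temporary = [queue.Queue() for _ in range(extra)]
    return permanent, temporary

perm, temp = allocate_sub_queues(55)
print(len(perm), len(temp))   # 50 5
```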
In an embodiment, the resource sub-queues may instead be created in real time, with each class of configuration information obtained by the classification cached in its own sub-queue created on demand, so that the number of resource sub-queues equals the number of classes. For example, when classifying by router-ID: the first piece of configuration information is read together with its router-ID, and a resource sub-queue corresponding to that first router-ID is created; the next piece of configuration information is then read, and if its router-ID differs from that of the previous piece, a second resource sub-queue corresponding to the new router-ID is created in real time, whereas if the router-ID is the same, the configuration information is cached into the sub-queue corresponding to the first router-ID; the next piece of configuration information is then read, and the process repeats until all configuration information has been read. This embodiment guarantees that the number of resource sub-queues equals the number of classes of configuration information and that no empty sub-queues are produced, avoiding the waste of storage resources caused by pre-creating too many resource sub-queues.
In an embodiment, the method further includes: pre-creating a resource processing thread pool that contains a second preset number of resource processing threads, each of which processes the configuration information in one resource sub-queue. The number of resource processing threads can be determined from the parallel processing capability of the network device; for example, if the network device can process data from at most 50 threads at the same time, no more than 50 resource processing threads are created. Each resource processing thread processes the configuration information in one resource sub-queue; when the number of sub-queues holding configuration information exceeds the number of threads, a thread that finishes its sub-queue first continues with the configuration information in one of the remaining sub-queues. This embodiment makes full use of the parallel processing capability of the network device, allowing the data in different resource sub-queues to be issued to the network device concurrently and improving the issuing efficiency of the configuration information.
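A sketch of the resource processing thread pool using Python's ThreadPoolExecutor, reusing the issue_sub_queue routine sketched under step 106; the worker bound of 50 mirrors the example and is an assumption about the device's capacity:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_WORKERS = 50  # the "second preset number", bounded by the device's parallel capacity

def issue_with_pool(sub_queues, device_host, device_port, issue_sub_queue):
    """Each worker drains one sub-queue; if there are more sub-queues than
    workers, a worker that finishes first simply picks up a pending one."""
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        futures = [pool.submit(issue_sub_queue, sq, device_host, device_port)
                   for sq in sub_queues.values()]
        for f in futures:
            f.result()  # re-raise any delivery error in the caller
```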
In an embodiment, the method further includes: when issuing of configuration information fails, re-issuing the configuration information to the network device through the data connection according to a preset period. For example, with a preset period of 2 seconds, configuration information whose delivery failed is issued to the network device again over the data connection after 2 seconds.
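A sketch of periodic re-issuing on failure; the 2-second period follows the example above, while the bounded attempt count and the OSError failure signal are added assumptions:

```python
import time

RETRY_PERIOD = 2  # seconds; the preset period used in the example above

def issue_with_retry(send_once, config, max_attempts=5):
    """Re-issue a configuration item over the same data connection after a
    fixed delay whenever delivery fails. The bounded attempt count is an
    added assumption; the text only specifies the retry period."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send_once(config)
        except OSError:
            if attempt == max_attempts:
                raise
            time.sleep(RETRY_PERIOD)
```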
In an embodiment, the method further includes: when issuing any configuration information in a resource sub-queue to the network device fails, detecting the network state; if the network is in a fault state, saving the configuration information in all resource sub-queues and re-issuing it to the network device once the network is restored. That is, when any configuration information in any resource sub-queue fails to be issued, the network state is detected; if the network is faulty, the configuration information in all the resource sub-queues is saved so that data is not damaged or lost, the network state is then checked at a preset interval, and when the network returns to normal the configuration information is issued to the network device again.
In an embodiment, the method further includes: when issuing any configuration information in a resource sub-queue to the network device fails, detecting whether the configuration information associated with that configuration information exists on the network device; if it does not, issuing the associated configuration information first and then re-issuing the failed configuration information to the network device. When a network device is configured, some configurations have a dependency relationship: for example, configuration information A has to be issued before configuration information B, that is, configuration information B can only be issued successfully once configuration information A exists on the network device. Configuration information A and configuration information B are then called associated configuration information.
It should be noted that, when issuing any configuration information in a resource sub-queue to the network device fails, this application does not limit the order in which the network state is detected and in which it is checked whether the network device already holds the configuration information associated with the failed piece. The network state may be detected first and, if the network is normal yet the issuing still failed, the presence of the associated configuration information may then be checked; the associated configuration information may be checked first and the network state afterwards; or both may be checked simultaneously. When configuration issuing fails, a log can also be generated to record the issuing result, which makes it convenient for operation and maintenance personnel to check.
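A sketch tying the failure-handling checks together; network_up, device_has, save_all_sub_queues, and reissue are placeholder callables standing in for whatever the deployment provides, and the depends_on field is an invented way to mark associated configuration:

```python
import logging

logger = logging.getLogger("config-issuing")

def handle_issue_failure(config, network_up, device_has, save_all_sub_queues, reissue):
    """Failure handling sketch; the two checks may run in either order.
    All callables passed in are placeholders for deployment-specific logic."""
    if not network_up():
        save_all_sub_queues()  # keep every sub-queue's configuration until the network recovers
        logger.warning("network fault while issuing %s; state saved for re-delivery", config)
        return "retry_when_network_recovers"
    prerequisite = config.get("depends_on")
    if prerequisite and not device_has(prerequisite):
        reissue(prerequisite)  # issue the associated configuration first
        reissue(config)
        logger.info("issued missing associated configuration %s before %s", prerequisite, config)
        return "reissued_with_prerequisite"
    logger.error("issuing %s failed for another reason; recorded in the log", config)
    return "error_logged"
```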
According to the technical solution provided by the present application, the configuration information cached in the resource queue is classified and each class is cached in a different resource sub-queue. Each class of configuration information can then be issued to the network device over a single data connection, which is closed once that class has been issued, so a connection does not have to be established for every individual configuration and the waste of data transmission resources caused by frequently establishing and closing connections is avoided. Issuing the configuration information to the network device in parallel over the corresponding connections allows multiple configurations to be delivered at the same time, makes full use of the device's processing capacity, and improves configuration issuing efficiency. When issuing fails, the cause is determined in time by detecting the network state or checking whether the associated configuration information exists on the network device, the corresponding handling is performed and the configuration is re-issued, which improves the success rate of configuration issuing. A log file recording the issuing results is also generated, so that operation and maintenance personnel can learn about the issuing situation in time and maintain the system in a targeted manner.
Fig. 2 is a schematic diagram of the network architecture of a method for issuing configuration information in parallel according to an embodiment of the present application. As shown in Fig. 2, the system for issuing configuration information in parallel may include a configuration issuing server 21 and a network device 22. The issuing server 21 receives the configuration information, creates data connections with the network device 22 through a proxy service running on it, and issues the configuration information to the network device 22 in parallel over those connections. The network device 22 may configure a corresponding function according to the received configuration information; the function may be a security protection or security policy function such as FWaaS (Firewall as a Service), VTP (VLAN Trunking Protocol), or STP (Spanning Tree Protocol), which is not limited in this application. The process is described in detail below with reference to Figs. 3 and 4. Fig. 3 is a flowchart of a specific implementation of a method for issuing configuration information in parallel according to an exemplary embodiment of the present application, and Fig. 4 is a schematic flowchart of a method for issuing configuration information in parallel according to an exemplary embodiment of the present application. As shown in Fig. 3, issuing the configuration information in parallel includes the following steps:
step 302, initializing; the agent service in the issuing server 21 creates a resource queue, a first preset number of resource sub-queues and a resource processing thread pool in advance, wherein the resource queue is used for caching all configuration information received by the issuing server in sequence; the resource sub-queues with the first preset number are used for caching the classified configuration information, the first preset number is determined according to engineering experience values, and in the present example, the first preset number is assumed to be 50; the resource processing thread pool includes a second preset number of resource processing threads, each resource processing thread is configured to process configuration information in one resource sub-queue, the second preset number is determined according to the parallel capability of the network device 22, for example, the maximum parallel capability of the network device 22 is that data in 50 threads are processed at the same time, and the second preset number is not greater than 50.
Step 304, as shown in Fig. 4, obtain the configuration information cached in the resource queue; the issuing server 21 receives the configuration information destined for the network device 22 one piece at a time and caches it, in order, into the resource queue, so the configuration information in the resource queue is arranged by the time at which it was received by the issuing server 21.
Step 306, as shown in Fig. 4, classify the configuration information; the proxy service in the issuing server 21 classifies the configuration information, for example according to router-ID, so that the configuration information within each class uses the same router-ID.
Step 308, cache each class of configuration information obtained by the classification into a different resource sub-queue, in order; the proxy service in the issuing server 21 reads the configuration information in the resource queue sequentially. When the first piece is read, its router-ID is read as well and the piece is cached into a resource sub-queue. The router-ID of the next piece is then read: if it differs from the previous piece's router-ID, the piece is cached into another resource sub-queue; if it is the same, the piece is cached into the sub-queue holding the first piece. The next piece is then read, and the process repeats until all the configuration information has been read. This method guarantees that the configuration information within each resource sub-queue stays in order, so subsequent issuing does not fail because of timing errors. The proxy service in the issuing server 21 then calls the threads in the thread pool so that each resource processing thread processes the configuration information in one resource sub-queue. As shown in Fig. 4, if there are exactly 50 classes of configuration information, i.e. one class is cached in each of the 50 resource sub-queues, the 50 resource processing threads correspond one-to-one to the 50 sub-queues and each thread processes the configuration information in one sub-queue. If there are only 40 classes, i.e. only 40 of the 50 sub-queues hold configuration information, only 40 resource processing threads need to be called. If there are more than 50 classes, for example 55, the difference between the number of classes and the number of sub-queues is determined (55 minus 50, i.e. 5), 5 temporary resource sub-queues are created, one class of configuration information is cached in each of them, the 50 resource processing threads are called to process the configuration information in the 50 sub-queues, and whichever thread finishes a class first goes on to process the remaining 5 classes, until the configuration information in all sub-queues has been processed.
Step 310, create a data connection between each resource sub-queue and the network device, so that each class of configuration information corresponds to one data connection; each resource processing thread creates its own data connection. For example, when 50 resource processing threads correspond one-to-one to 50 resource sub-queues, the 50 threads can create 50 TCP connections between the issuing server 21 and the network device 22; each TCP connection corresponds to one class of configuration information, and each class is issued to the network device 22 through its corresponding TCP connection.
Step 312, issue the configuration information to the network device in parallel through the corresponding data connections, and disconnect each data connection after the issuing is completed; after any class of configuration information has been fully issued, the TCP connection between the issuing server 21 and the network device 22 that corresponds to that class is disconnected to release the data resources it occupies.
Step 314, judge whether the configuration information in the resource sub-queues has been successfully issued to the network device. If so, proceed to step 314a: the issuing of the configuration information is complete. If any configuration information in a resource sub-queue failed to be issued, proceed to step 314b and judge whether the network state is normal. If the network is faulty, proceed to steps 314b1 and 314b2: save the configuration information in all resource sub-queues so that it is not cleared while the network remains faulty for a long time; a time period can be set, for example 2 seconds, so that the network state is checked every 2 seconds, and when the network is detected to have recovered, the configuration information is re-issued to the network device 22 and a log file is generated to record the issuing result, which makes it convenient for operation and maintenance personnel to check.
If the network is in a normal state, proceed to step 316b and judge whether the network device 22 holds the configuration information associated with the failed configuration information. Specifically, when the condition for configuration information B to be issued successfully is that configuration information A already exists on the network device 22, A is called the configuration associated with B. If configuration information A does not exist on the network device 22, proceed to step 318b: issue configuration information A to the network device 22, re-issue configuration information B to the network device 22, and generate a log file recording the issuing result for operation and maintenance personnel to check. If the network device 22 does hold configuration information A, the failure was caused neither by a network fault nor by the missing associated configuration information A; the flow can then proceed to step 318a, report an error, and directly generate a log file recording the issuing result.
It should be noted that the above is only an exemplary embodiment. In practice, the network-state check of step 314b may be performed before the associated-configuration check of step 316b, step 316b may be performed before step 314b, or the two checks may be performed simultaneously.
In the above embodiment, the proxy service in the issuing server 21 pre-creates the resource queue, the resource sub-queues, and the resource processing thread pool during initialization, classifies the configuration information cached in the resource queue, and caches each class obtained by the classification into a different resource sub-queue. Each class of configuration information can be issued to the network device 22 over a single data connection, which is closed once that class has been issued, so a connection does not have to be established for every individual configuration and the waste of data transmission resources caused by frequently establishing and closing connections is avoided. The multiple resource processing threads issue the configuration information to the network device 22 in parallel over the corresponding connections, making full use of the parallel processing capability of the network device 22 and improving configuration issuing efficiency. When issuing fails, the cause is determined in time by detecting the network state and checking whether the associated configuration information exists, the corresponding handling is performed, and the configuration is re-issued, which improves the success rate of configuration issuing. A log file recording the issuing results is also generated, so that operation and maintenance personnel can learn about the issuing situation in time and maintain the system in a targeted manner.
Corresponding to the method embodiments, the present specification also provides an embodiment of an apparatus.
Fig. 5 is a schematic structural diagram of an electronic device to which the above method for issuing configuration information in parallel is applied, according to an exemplary embodiment of the present application. Referring to Fig. 5, at the hardware level the electronic device includes a processor 502, an internal bus 504, a network interface 506, a memory 508, and a non-volatile memory 510, and may of course also include hardware required by other services. The processor 502 reads the corresponding computer program from the non-volatile memory 510 into the memory 508 and runs it, forming at the logical level an apparatus for issuing configuration information in parallel. Besides such a software implementation, the present application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or a logic device.
Fig. 6 is a block diagram of an apparatus for issuing configuration information in parallel according to an exemplary embodiment of the present application. Referring to Fig. 6, the apparatus includes a classification unit 602, a connection unit 604, and an issuing unit 606, where:
the classification unit 602 is configured to obtain the configuration information cached in a resource queue, classify the configuration information, and cache each class of configuration information obtained by the classification into a different resource sub-queue;
the connection unit 604 is configured to create a data connection between each resource sub-queue and a network device, so that each class of configuration information corresponds to one data connection;
the issuing unit 606 is configured to issue the configuration information to the network device in parallel through the corresponding data connections and to disconnect each data connection after its issuing is completed.
Optionally, the issuing unit 606 is further configured to issue the configuration information to the network device in parallel through the corresponding data connections, so that the network device processes the configuration information using a processing function and then performs configuration according to the processed configuration information, where the processing function is either the processing function carried by each piece of configuration information or a processing function cached on the network device.
Optionally, the apparatus further includes: a first creating unit 608, configured to pre-create a first preset number of resource sub-queues; when the number of classes of configuration information is larger than the number of pre-created resource sub-queues, to determine the difference between the number of classes and the number of pre-created sub-queues, create the same number of temporary resource sub-queues as the difference, and cache the configuration information of each remaining class in a temporary resource sub-queue; and to release each temporary resource sub-queue after the configuration information cached in it has been issued.
Optionally, the classification unit 602 is further configured to cache each class of configuration information obtained by the classification into a different resource sub-queue created in real time, where the number of resource sub-queues equals the number of classes of configuration information.
Optionally, the apparatus further includes: a second creating unit 610, configured to pre-create a resource processing thread pool containing a second preset number of resource processing threads, each of which processes the configuration information in one resource sub-queue.
Optionally, the apparatus further includes: a re-issuing unit 612, configured to, when issuing of configuration information fails, re-issue the configuration information to the network device through the data connection according to a preset period.
Optionally, the apparatus further includes: a detecting unit 614, configured to, when issuing any configuration information in a resource sub-queue to the network device fails, detect the network state and/or whether the configuration information associated with that configuration information exists on the network device;
if the network is in a fault state, to save the configuration information in all resource sub-queues and re-issue it to the network device once the network is restored;
and if the network device does not hold the configuration information associated with that configuration information, to issue the associated configuration information to the network device and then re-issue the failed configuration information to the network device.
Details of how the functions and roles of each unit in the above apparatus are implemented can be found in the implementation of the corresponding steps of the above method, and are not repeated here.
For the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the relevant parts of the description of the method embodiments. The apparatus embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the present application, which a person of ordinary skill in the art can understand and implement without inventive effort.
In an exemplary embodiment, there is also provided a non-transitory computer readable storage medium comprising instructions, such as a memory comprising instructions, executable by a processor of the apparatus for parallel issuing configuration information to implement the method as in any one of the above embodiments.
The non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc., which is not limited in this application.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A method for issuing configuration information in parallel, characterized in that the method comprises the following steps:
obtaining the configuration information cached in a resource queue, classifying the configuration information, and caching each class of configuration information obtained by the classification into a different resource sub-queue in order of receiving time;
establishing a data connection between each resource sub-queue and a network device, so that each class of configuration information corresponds to one data connection;
and issuing the configuration information to the network device in parallel through the corresponding data connections, and disconnecting each data connection after the issuing is completed.
2. The method of claim 1, wherein issuing the configuration information to the network device in parallel through the corresponding data connections comprises:
issuing the configuration information to the network device in parallel through the corresponding data connections, so that the network device processes the configuration information using a processing function and then performs configuration according to the processed configuration information, wherein the processing function comprises the processing function carried by each piece of configuration information or a processing function cached on the network device.
3. The method of claim 1, further comprising:
pre-creating a first preset number of resource sub-queues;
when the number of classes of configuration information is larger than the number of pre-created resource sub-queues, determining the difference between the number of classes of configuration information and the number of pre-created resource sub-queues, creating the same number of temporary resource sub-queues as the difference, and caching the configuration information of each corresponding class in a temporary resource sub-queue;
and releasing each temporary resource sub-queue after the configuration information cached in it has been issued.
4. The method of claim 1, wherein caching each class of configuration information obtained by the classification into a different resource sub-queue comprises:
caching each class of configuration information obtained by the classification into a different resource sub-queue created in real time, wherein the number of resource sub-queues is the same as the number of classes of configuration information.
5. The method of claim 1, further comprising:
and pre-establishing a resource processing thread pool, wherein the resource processing thread pool comprises a second preset number of resource processing threads, and each resource processing thread is used for processing the configuration information in one resource sub-queue.
6. The method of claim 1, further comprising:
and when issuing of the configuration information fails, re-issuing the configuration information to the network device through the data connection according to a preset period.
7. The method of claim 1, further comprising:
when issuing any configuration information in a resource sub-queue to the network device fails, detecting the network state and/or whether the configuration information associated with that configuration information exists on the network device;
if the network is in a fault state, saving the configuration information in all the resource sub-queues and re-issuing it to the network device once the network is restored;
and if the network device does not hold the configuration information associated with that configuration information, issuing the associated configuration information to the network device and then re-issuing that configuration information to the network device.
8. An apparatus for issuing configuration information in parallel, the apparatus comprising:
a classification unit, configured to obtain the configuration information cached in a resource queue, classify the configuration information, and cache each class of configuration information obtained by the classification into a different resource sub-queue in order of receiving time;
a connection unit, configured to establish a data connection between each resource sub-queue and a network device, so that each class of configuration information corresponds to one data connection;
and an issuing unit, configured to issue the configuration information to the network device in parallel through the corresponding data connections and to disconnect each data connection after the issuing is completed.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-7 by executing the executable instructions.
10. A computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, perform the steps of the method according to any one of claims 1-7.
CN202011529831.XA 2020-12-22 2020-12-22 Method and device for issuing configuration information in parallel Active CN112769639B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011529831.XA CN112769639B (en) Method and device for issuing configuration information in parallel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011529831.XA CN112769639B (en) Method and device for issuing configuration information in parallel

Publications (2)

Publication Number Publication Date
CN112769639A CN112769639A (en) 2021-05-07
CN112769639B true CN112769639B (en) 2022-09-30

Family

ID=75694750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011529831.XA Active CN112769639B (en) 2020-12-22 2020-12-22 Method and device for parallel issuing configuration information

Country Status (1)

Country Link
CN (1) CN112769639B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113411392B (en) * 2021-06-16 2022-05-10 中移(杭州)信息技术有限公司 Resource issuing method, device, equipment and computer program product
CN114449040B (en) * 2022-01-28 2023-12-05 杭州迪普科技股份有限公司 Configuration issuing method and device based on cloud platform

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101488898A (en) * 2009-03-04 2009-07-22 北京邮电大学 Tree shaped fast connection establishing method based on multi-Agent cooperation
CN110209549A (en) * 2018-05-22 2019-09-06 腾讯科技(深圳)有限公司 Data processing method, relevant apparatus, relevant device and system
CN111211942A (en) * 2020-01-03 2020-05-29 山东超越数控电子股份有限公司 Data packet receiving and transmitting method, equipment and medium
CN111343252A (en) * 2020-02-13 2020-06-26 深圳壹账通智能科技有限公司 High-concurrency data transmission method based on http2 protocol and related equipment
CN111767143A (en) * 2020-06-24 2020-10-13 中国工商银行股份有限公司 Transaction data processing method, device, equipment and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7788411B2 (en) * 2006-07-20 2010-08-31 Oracle America, Inc. Method and system for automatically reflecting hardware resource allocation modifications
CN103117798B (en) * 2012-12-31 2016-08-03 广东东研网络科技股份有限公司 The method that after OLT power-off restarting, ONU configures fast quick-recovery
CN106254271B (en) * 2016-08-08 2019-07-19 北京邮电大学 A kind of programmable queue configuration method and device for software defined network
CN106789152A (en) * 2016-11-17 2017-05-31 东软集团股份有限公司 Processor extended method and device based on many queue network interface cards
US10244010B2 (en) * 2017-02-16 2019-03-26 Nokia Of America Corporation Data processing apparatus configured to recover a network connection, a method, a system and a non-transitory computer readable medium configured to perform same
CN108259269A (en) * 2017-12-30 2018-07-06 上海陆家嘴国际金融资产交易市场股份有限公司 The monitoring method and system of the network equipment
CN109905412B (en) * 2019-04-28 2021-06-01 山东渔翁信息技术股份有限公司 Network data parallel encryption and decryption processing method, device and medium
CN110532076A (en) * 2019-08-09 2019-12-03 济南浪潮数据技术有限公司 A kind of method, system, equipment and the readable storage medium storing program for executing of cloud resource creation
CN111464331B (en) * 2020-03-03 2023-03-24 深圳市计通智能技术有限公司 Control method and system for thread creation and terminal equipment
CN111478820B (en) * 2020-06-24 2020-10-09 南京赛宁信息技术有限公司 Network equipment configuration system and method for large-scale network environment of network target range

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101488898A (en) * 2009-03-04 2009-07-22 北京邮电大学 Tree shaped fast connection establishing method based on multi-Agent cooperation
CN110209549A (en) * 2018-05-22 2019-09-06 腾讯科技(深圳)有限公司 Data processing method, relevant apparatus, relevant device and system
CN111211942A (en) * 2020-01-03 2020-05-29 山东超越数控电子股份有限公司 Data packet receiving and transmitting method, equipment and medium
CN111343252A (en) * 2020-02-13 2020-06-26 深圳壹账通智能科技有限公司 High-concurrency data transmission method based on http2 protocol and related equipment
CN111767143A (en) * 2020-06-24 2020-10-13 中国工商银行股份有限公司 Transaction data processing method, device, equipment and system

Also Published As

Publication number Publication date
CN112769639A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
US10257135B2 (en) Intelligent electronic mail server manager, and system and method for coordinating operation of multiple electronic mail servers
CN112769639B (en) Method and device for parallel issuing configuration information
CN107451012B (en) Data backup method and stream computing system
US9973306B2 (en) Freshness-sensitive message delivery
CN110677274A (en) Event-based cloud network service scheduling method and device
US9577972B1 (en) Message inspection in a distributed strict queue
CN114025018A (en) Data processing method, device, network equipment and computer readable storage medium
US10341176B2 (en) System and method for network provisioning
CN112969172B (en) Communication flow control method based on cloud mobile phone
CN109150890A (en) The means of defence and relevant device of newly-built connection attack
CN107426012B (en) Fault recovery method and device based on super-fusion architecture
CN105406989B (en) Handle method, network interface card and system, the method and host of more new information of message
US9652310B1 (en) Method and apparatus for using consistent-hashing to ensure proper sequencing of message processing in a scale-out environment
CN110569238B (en) Data management method, system, storage medium and server based on big data
US7843829B1 (en) Detection and recovery from control plane congestion and faults
US11231969B2 (en) Method for auditing a virtualised resource deployed in a cloud computing network
US20210328890A1 (en) System and methods for supporting multiple management interfaces using a network analytics engine of a network switch
CN111614649B (en) Method and device for closing TCP short connection
US9674282B2 (en) Synchronizing SLM statuses of a plurality of appliances in a cluster
US20240048495A1 (en) Systems and methods for networked microservices flow control
US20240163161A1 (en) Active network node resilience pattern for cloud service
CN103368754A (en) Service failure detection method, apparatus, system and device
Sun et al. Attendre: mitigating ill effects of race conditions in openflow via queueing mechanism
WO2024030980A1 (en) Systems and methods for networked microservices flow control
CN116963133A (en) Alarm message processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant