CN116599830B - Communication node and link configuration method and device, storage medium and electronic equipment - Google Patents

Communication node and link configuration method and device, storage medium and electronic equipment

Info

Publication number
CN116599830B
CN116599830B (application CN202310884249.2A)
Authority
CN
China
Prior art keywords
end processor
link
main
external system
main link
Prior art date
Legal status
Active
Application number
CN202310884249.2A
Other languages
Chinese (zh)
Other versions
CN116599830A (en)
Inventor
费冬强
王雷
张坤
邓旭平
Current Assignee
Tongfang Tide Software Beijing Co ltd
Original Assignee
Tongfang Tide Software Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Tongfang Tide Software Beijing Co ltd
Priority to CN202310884249.2A
Publication of CN116599830A
Application granted
Publication of CN116599830B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06Management of faults, events, alarms or notifications
    • H04L41/0654Management of faults, events, alarms or notifications using network fault recovery
    • H04L41/0663Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1034Reaction to server failures by a load balancer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1095Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Hardware Redundancy (AREA)

Abstract

The application provides a communication node and link configuration method and device, a storage medium and electronic equipment. The method comprises: electing a master front-end processor from a plurality of front-end processors according to the node information of each front-end processor, the master front-end processor being communicatively connected with every other front-end processor; the master front-end processor selecting, from all communication links of each external system, a link whose link state meets a preset standard as the main link of that external system, so that each front-end processor communicates with each external system over the corresponding main link; in response to any front-end processor acquiring data from a corresponding external system over a main link, sending the data to a server for processing and synchronizing the data to all other front-end processors; and in response to the server issuing an instruction directed to any external system to each front-end processor, causing the front-end processor corresponding to the main link of that external system to issue the instruction to the external system. The application realizes redundant communication among a plurality of front-end processors and can switch links effectively upon failure.

Description

Communication node and link configuration method and device, storage medium and electronic equipment
Technical Field
Embodiments of the present application relate to the field of communications technologies, and in particular, to a method, an apparatus, a storage medium, and an electronic device for configuring a communication node and a link.
Background
In related communication node and link configuration schemes, a fixed number of standby redundant front-end processors is typically configured, so the number of redundant front-end processors is difficult to change flexibly when more standby front-end processors need to be added or existing ones need to be removed.
Disclosure of Invention
In view of the above, an object of the present application is to provide a method, an apparatus, a storage medium, and an electronic device for configuring a communication node and a link.
Based on the above object, the present application provides a configuration method of communication nodes and links, which is applied to an integrated processing platform, wherein the integrated processing platform comprises at least one server and a plurality of front-end processors, and each front-end processor is in communication connection with a plurality of external systems outside the integrated processing platform; the method comprises the following steps:
electing one master front-end processor from the plurality of front-end processors according to the node information of each front-end processor, wherein the master front-end processor is communicatively connected with every other front-end processor;
enabling the master front-end processor to select, from all communication links of each external system, a link whose link state meets a preset standard as the main link of that external system, and enabling each front-end processor to communicate with each external system over the corresponding main link;
in response to any front-end processor acquiring data from a corresponding external system over a main link, transmitting the data to the server for processing and synchronizing the data to all other front-end processors;
and in response to the server issuing an instruction directed to any external system to each front-end processor, enabling the front-end processor corresponding to the main link of that external system to issue the instruction to the external system.
Further, before selecting one master front-end processor from the plurality of front-end processors according to the node information of each front-end processor, the method further comprises:
enabling each front-end processor to acquire node information of all front-end processors from the server;
and enabling each front-end processor to determine, from the node information, its own node order and node name and those of every other front-end processor.
Further, after selecting one master front-end processor from the plurality of front-end processors according to the node information of each front-end processor, the method further comprises:
And in response to the current master front-end processor becoming abnormal, selecting, according to the node order of each front-end processor, the front-end processor whose node order ranks first among all other front-end processors as a new master front-end processor.
Further, the link state includes a bit error rate of the link;
Enabling the master front-end processor to select, from all communication links of each external system, a link whose link state meets a preset standard as the main link of that external system comprises:
enabling the master front-end processor to acquire, from each other front-end processor, the bit error rates of all communication links of the external system;
and taking the communication link with the lowest bit error rate among all the communication links as the main link of that external system.
Further, after each front-end processor communicates with each external system according to the corresponding main link, the method further includes:
enabling each other front-end processor to send the current link state of the respective main link to the main front-end processor according to a preset first time interval;
after receiving the current link state of the respective main link, the main front-end processor judges whether the current respective main link is abnormal;
and responding to the determination that no abnormal main link exists, continuously issuing each main link to all other front-end processors according to a preset second time interval, and enabling each front-end processor to continuously communicate according to the corresponding main link.
Further, after determining whether the current respective main links are abnormal, the method further includes:
responding to the failure of the main link of any external system and/or the failure of a front-end processor corresponding to the main link, and selecting a new main link from other communication links of the external system;
and interacting with the server by utilizing the synchronized data in the front-end processor corresponding to the new main link.
Further, after responding to the server to issue the instruction pointing to any external system to each front-end processor, the method further comprises:
And in response to determining that a front-end processor corresponding to a non-main link of the external system receives the instruction, enabling that front-end processor to discard the instruction.
Based on the same inventive concept, the application also provides a configuration device of the communication node and the link, comprising: the system comprises a main front-end processor election module, a main link determining module, a data interaction module and an instruction issuing module;
the main front-end processor election module is configured to elect one main front-end processor from a plurality of front-end processors according to the node information of each front-end processor, and the main front-end processor is respectively in communication connection with each other front-end processor;
the main link determining module is configured to enable the master front-end processor to select, from all communication links of each external system, a link whose link state meets a preset standard as the main link of that external system, and to enable each front-end processor to communicate with each external system over the corresponding main link;
the data interaction module is configured to respond to the fact that any front-end processor acquires data from a corresponding external system through a main link, send the data to a server for processing, and synchronize the data to all other front-end processors;
the instruction issuing module is configured to, in response to the server issuing an instruction directed to any external system to each front-end processor, cause the front-end processor corresponding to the main link of the external system to issue the instruction to the external system.
Based on the same inventive concept, the application also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the configuration method of any one of the communication nodes and links as described above when executing the program.
Based on the same inventive concept, the present application also provides a non-transitory computer readable storage medium, wherein the non-transitory computer readable storage medium stores computer instructions for causing the computer to perform the configuration method of the communication node and the link as described above.
As can be seen from the foregoing, in the configuration method, apparatus, storage medium and electronic device for communication nodes and links provided by the present application, a master front-end processor elected from the configured plurality of front-end processors sets a main link for each external system and allocates the main links with the load of each front-end processor taken into account. Because each front-end processor synchronizes the data it exchanges with its external systems to all other front-end processors, when any front-end processor or the main link it carries becomes abnormal, the corresponding external system can be switched quickly and stably to another front-end processor without breaking the continuity of the data. Moreover, electing a master front-end processor from the plurality of front-end processors assigns it the task of managing the other standby front-end processors, and the designed election mechanism ensures that, when the master front-end processor becomes abnormal, a new master front-end processor can be elected quickly and the switchover remains stable.
Drawings
In order to more clearly illustrate the technical solutions of the present application or the related art, the drawings required in the description of the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description are only embodiments of the present application, and that other drawings may be obtained from them by those of ordinary skill in the art without inventive effort.
Fig. 1 is a flowchart of a method for configuring communication nodes and links according to an embodiment of the present application;
fig. 2 is a logic diagram of a configuration method of a communication node and a link according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a configuration device of a communication node and a link according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the application.
Detailed Description
The present application will be further described in detail below with reference to specific embodiments and with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present application more apparent.
It should be noted that unless otherwise defined, technical or scientific terms used in the embodiments of the present application should be given the ordinary meaning as understood by one of ordinary skill in the art to which the present application belongs. The terms "first," "second," and the like, as used in embodiments of the present application, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof, but does not exclude other elements or items. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", etc. are used merely to indicate relative positional relationships, which may also be changed when the absolute position of the object to be described is changed.
As described in the background section, it is also difficult for the related configuration method of the communication node and link to satisfy the requirement for communication stability in actual communication.
The applicant found, in the course of implementing the present application, that the main problem of the related configuration methods for communication nodes and links is as follows: a fixed number of standby redundant front-end processors is typically configured, so the number of redundant front-end processors is difficult to change flexibly when more standby front-end processors need to be added or existing ones need to be removed.
Based on this, one or more embodiments of the present application provide a configuration method of a communication node and a link, which interact with a plurality of external systems based on a plurality of set front-end processors.
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
In the embodiment of the application, a back-end platform comprising at least one server and a plurality of front-end processors can serve as the integrated processing platform, a plurality of external systems are arranged outside the integrated processing platform, and the integrated processing platform establishes a communication connection with each external system.
Further, the integrated processing platform can perform corresponding processing based on service requests made by the external systems, so as to provide instructions corresponding to the services for the external systems, and the like.
The server in the integrated processing platform can be used to process each service request, and each front-end processor can receive the service requests of the external systems and forward them to the server; after the server finishes processing, it may, for example, produce instructions directed to one or more external systems, and each front-end processor can also obtain these instructions from the server and send them to the corresponding external systems.
Further, each front-end processor may be a physical front-end processor, or a partition obtained by virtualizing a physical front-end processor.
In a specific example, the integrated processing platform may be, for example, a rail transit integrated monitoring platform, and the respective external systems communicatively coupled to the rail transit integrated monitoring platform may be, for example, a climate control system, a power system, a fire protection system, a broadcast system, a passenger information system, a platform door system, a ticketing system, a centralized alerting system, and the like.
Referring to fig. 1, a configuration method of a communication node and a link according to an embodiment of the present application is applied to an integrated processing platform, where the integrated processing platform includes at least one server and a plurality of front-end processors, where each front-end processor is communicatively connected to a plurality of external systems outside the integrated processing platform; the method specifically comprises the following steps:
step S101, selecting a main front-end processor from the plurality of front-end processors according to the node information of each front-end processor, wherein the main front-end processor is respectively in communication connection with each other front-end processor.
In the embodiment of the application, one master front-end processor can be elected from the plurality of front-end processors, and all the remaining front-end processors serve as standby front-end processors.
Specifically, in the process of communication interaction between the integrated processing platform and each external system, each front-end processor in the integrated processing platform can be regarded as a communication node.
Based on this, each front-end processor is configured with its own node information.
In the specific example shown in fig. 2, step S201 of acquiring node information may be performed.
Specifically, the server in the integrated processing platform may synchronize the node information of every front-end processor to each front-end processor; that is, each front-end processor can acquire its own node information as well as the node information of every other front-end processor.
The node information may include the node name of the node corresponding to the front-end processor.
In a specific example configured with 4 front-end processors, the node information of each front-end processor may be specifically expressed as:
node 1:00_ISCS_FEP1;
node 2:00_ISCS_FEP2;
node 3:00_ISCS_FEP3;
node 4:00_ISCS_FEP4;
where node 1, node 2, node 3 and node 4 represent the node order of the four front-end processors, and 00_ISCS_FEP1, 00_ISCS_FEP2, 00_ISCS_FEP3 and 00_ISCS_FEP4 are the corresponding node names of the front-end processors.
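For illustration only, the following is a minimal sketch (in Python, which the text does not prescribe) of reading such an entry into a (node order, node name) pair; the "node N:NAME" layout simply mirrors the four-node example above and is an assumption, not a format defined by the application.

```python
import re


def parse_node_entry(entry: str) -> tuple[int, str]:
    """Parse an entry such as "node 1:00_ISCS_FEP1" into (node_order, node_name).

    The "node N:NAME" layout follows the four-node example above; the real
    node-information encoding is not given in the text, so this is an assumption.
    """
    match = re.fullmatch(r"node\s+(\d+):(\S+)", entry.strip())
    if match is None:
        raise ValueError(f"unrecognised node entry: {entry!r}")
    return int(match.group(1)), match.group(2)


print(parse_node_entry("node 1:00_ISCS_FEP1"))  # -> (1, '00_ISCS_FEP1')
```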
Based on the above, each of the front-end processors can select one main front-end processor through mutual communication, and all the remaining front-end processors are used as standby front-end processors.
In the specific example shown in fig. 2, based on step S201, step S202, electing the master front-end processor, may be continued.
Specifically, after each front-end processor acquires the node information of itself and of the other front-end processors, it can determine its own position from the node order of every front-end processor; that is, after reading the node information, each front-end processor knows which number it is, how many front-end processors there are in total, and which front-end processors rank ahead of it.
Based on the above, when the master front-end processor is being elected, each front-end processor first sets itself as the master; after reading the node information of the other front-end processors, it sends its own node order to them and announces that it is the master front-end processor.
Furthermore, when a front-end processor receives the node order sent by another front-end processor together with the announcement that that front-end processor is the master, it judges whether the other front-end processor's node order ranks ahead of its own.
Further, if the other front-end processor's node order ranks ahead of its own, it updates its record of the master front-end processor to that front-end processor.
Further, the above announcing, judging and updating process is repeated in subsequent rounds until only one front-end processor is determined to be the master front-end processor.
Further, the selected master front-end processor may perform the following management operations on other standby front-end processors.
Based on this, it can be seen that the master front-end processor is the front-end processor that ranks first in node order among the front-end processors that can communicate and interact normally.
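The election rule described above reduces to "the live front-end processor with the earliest node order wins". Below is a minimal sketch under that reading; `NodeInfo` and `elect_master` are illustrative names, and the round-by-round message exchange between nodes is abstracted away.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NodeInfo:
    order: int   # node order, e.g. 1 for "node 1"
    name: str    # node name, e.g. "00_ISCS_FEP1"


def elect_master(nodes: list[NodeInfo]) -> NodeInfo:
    """Return the front-end processor whose node order ranks first.

    Every node initially claims to be master but yields to any peer with an
    earlier node order, so the stable outcome is simply the minimum node order
    among the nodes that can communicate normally.
    """
    if not nodes:
        raise ValueError("at least one front-end processor is required")
    return min(nodes, key=lambda n: n.order)


# Example with the four nodes listed above.
fep_nodes = [NodeInfo(i, f"00_ISCS_FEP{i}") for i in (1, 2, 3, 4)]
print(elect_master(fep_nodes).name)  # -> 00_ISCS_FEP1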
In other embodiments of the present application, when an abnormality occurs in the master front-end processor selected in the previous embodiment, that is, when a fault occurs, a new master front-end processor is selected again to replace the original master front-end processor with the abnormality.
In this embodiment, after the primary front-end processor in the foregoing embodiment is selected, all the standby front-end processors may verify whether the primary front-end processor is abnormal by continuously performing message interaction with the primary front-end processor.
Specifically, a duration interval may be preset for each standby front-end processor, for example, 1 second, and each standby front-end processor may send a verification message to the master front-end processor according to the duration interval.
The verification message is used for verifying whether the master front-end processor is abnormal or not.
Further, when the master front-end processor can communicate normally with the standby front-end processors, it is shown that the master front-end processor is not abnormal; when the master front-end processor cannot communicate normally with the standby front-end processors, it is judged to be abnormal.
In some other embodiments, the current master front-end processor may also determine whether an exception occurs in the master front-end processor itself or in other standby front-end processors in other manners.
Further, when it is determined that the current master front-end processor is abnormal, the standby front-end processors may select, from among themselves and based on their respective node orders, the standby front-end processor whose node order ranks first as the new master front-end processor.
Further, the selected new master front-end processor replaces the original master front-end processor with the abnormality, and the following management operation is performed on each standby front-end processor.
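A sketch of the standby-side monitoring described above follows; the 1-second interval comes from the example, while the missed-reply threshold and the names `master_is_alive` and `reelect` are assumptions introduced for illustration.

```python
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class NodeInfo:
    order: int   # node order, e.g. 2 for "node 2"
    name: str    # node name, e.g. "00_ISCS_FEP2"


def master_is_alive(last_reply_time: float, interval: float = 1.0,
                    missed_limit: int = 3, now: float | None = None) -> bool:
    """Treat the master as abnormal once several verification replies in a row
    are missed (interval and missed_limit are illustrative values)."""
    now = time.monotonic() if now is None else now
    return (now - last_reply_time) < missed_limit * interval


def reelect(standby: list[NodeInfo]) -> NodeInfo:
    """Pick the standby front-end processor whose node order ranks first."""
    return min(standby, key=lambda n: n.order)


# Example: the old master (node 1) stopped answering; nodes 2-4 remain.
standby_nodes = [NodeInfo(i, f"00_ISCS_FEP{i}") for i in (2, 3, 4)]
if not master_is_alive(last_reply_time=0.0, now=5.0):
    print(reelect(standby_nodes).name)  # -> 00_ISCS_FEP2
```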
Step S102, the master front-end processor selects, from all communication links of each external system, a link whose link state meets a preset standard as the main link of that external system, and each front-end processor communicates with each external system over the corresponding main link.
In the embodiment of the application, the elected master front-end processor can select, for each external system, a main link used for interacting with the integrated processing platform, and can monitor whether each main link and each standby front-end processor is abnormal.
Specifically, each external system may be communicatively connected to at least one front-end processor, or to every front-end processor; likewise, each front-end processor may be communicatively connected to at least one external system, or to every external system.
Based on this, each external system has a plurality of communication links for communicating with the integrated processing platform, each communication link corresponding to one front-end processor, and each front-end processor can determine the link state of any of its own communication links.
The link state may include, among other things, the bit error rate of the link and whether the link is normally connected.
Further, since the master front-end processor is communicatively connected with every standby front-end processor, it can acquire the link states of the communication links of each standby front-end processor and, according to those link states, determine for each external system a main link used for communicating with the integrated processing platform, namely the link over which the external system communicates with its corresponding front-end processor.
In the specific example shown in fig. 2, step S203 may be performed to transmit the link states of the respective communication links.
Specifically, each standby front-end processor transmits the link states of all of its communication links to the master front-end processor.
In this embodiment, a first time interval may be preset for each standby front-end processor, for example, may be 1 second, and each standby front-end processor continuously sends the current link state of each communication link to the master front-end processor according to the first time interval.
Based on this, for each external system, the master front-end processor can determine all the communication links of that external system.
In the specific example shown in fig. 2, step S204 may be performed to determine a primary link for each external system.
Specifically, for each external system, the master front-end processor may determine the link state of each of its communication links, that is, whether the link is normally connected and what its bit error rate is.
Further, the master front-end processor selects, from all communication links of the external system that are normally connected, the one with the lowest bit error rate as the main link of that external system.
It can be seen that each front-end processor may be connected to multiple external systems; that is, any front-end processor may have to carry the communication data of multiple main links.
Further, when any one front-end processor carries a large number of main links, it often means that the other front-end processors carry very few main links, or even none.
Therefore, when configuring the main links, the master front-end processor can count the number of main links carried by each front-end processor; if any front-end processor carries too many, the master front-end processor can, according to the principle of balanced allocation, reduce that front-end processor's load and reassign main links to other front-end processors that carry fewer, as sketched below.
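The sketch below is a self-contained illustration of this selection-plus-balancing step; `LinkState`, the `fair_share` heuristic and the exact rebalancing rule are assumptions, since the text only states the lowest-bit-error-rate rule and the principle of balanced allocation.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class LinkState:
    external_system: str   # e.g. "platform_door_system"
    front_end: str         # e.g. "00_ISCS_FEP2"
    connected: bool        # whether the link is normally connected
    bit_error_rate: float  # lower is better


def choose_main_links(states: list[LinkState]) -> dict[str, LinkState]:
    """For every external system, pick the normally connected link with the
    lowest bit error rate, then relieve overloaded front-end processors by
    moving some of their main links to less-loaded candidates."""
    by_system: dict[str, list[LinkState]] = defaultdict(list)
    for s in states:
        if s.connected:
            by_system[s.external_system].append(s)

    # Initial choice: lowest bit error rate per external system.
    main = {sys: min(links, key=lambda l: l.bit_error_rate)
            for sys, links in by_system.items()}

    # Balancing pass: if one front-end processor carries more than a rough
    # fair share, hand a main link to a less-loaded front-end processor that
    # also has a usable link to the same external system.
    fep_count = len({s.front_end for s in states}) or 1
    fair_share = max(1, len(main) // fep_count)
    load: dict[str, int] = defaultdict(int)
    for link in main.values():
        load[link.front_end] += 1
    for sys, link in list(main.items()):
        if load[link.front_end] <= fair_share:
            continue
        others = [l for l in by_system[sys] if l.front_end != link.front_end]
        if not others:
            continue
        new = min(others, key=lambda l: (load[l.front_end], l.bit_error_rate))
        if load[new.front_end] < load[link.front_end]:
            load[link.front_end] -= 1
            load[new.front_end] += 1
            main[sys] = new
    return main


# Example: FEP1 would win both systems on bit error rate alone, but the
# balancing pass hands one of them to FEP2.
states = [
    LinkState("fire_protection_system", "00_ISCS_FEP1", True, 1e-7),
    LinkState("fire_protection_system", "00_ISCS_FEP2", True, 5e-7),
    LinkState("ticketing_system", "00_ISCS_FEP1", True, 2e-7),
    LinkState("ticketing_system", "00_ISCS_FEP2", True, 4e-7),
]
for system, link in choose_main_links(states).items():
    print(system, "->", link.front_end)
```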
Based on this, the master front-end processor issues the determined main links to the standby front-end processors.
In the specific example shown in fig. 2, step S205 of publishing the primary link to the standby front end processor may be performed.
Specifically, once the master front-end processor has determined the main link of each external system, it can issue all the main links to every standby front-end processor; that is, any front-end processor can learn the main link of each external system, and even a front-end processor that does not carry a given main link can still obtain the related connection information of that main link.
In this embodiment, a second time interval, for example 1 second, may be preset for the master front-end processor, and the master front-end processor continuously issues the main links to every standby front-end processor at that interval; even when no main link needs to be modified, the full set of main links is still issued to every standby front-end processor.
Further, step S206 in fig. 2 may be performed: receiving the main links issued by the master front-end processor.
Specifically, after the master front-end processor sends the main links to all standby front-end processors, each standby front-end processor can determine, from all the received main links, the main links it is required to carry; because the publication is repeated after every second time interval, each standby front-end processor can continuously confirm that its current main links are correct and communicate with the corresponding external systems accordingly, as illustrated below.
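A small sketch of the periodic publication and the standby-side filtering; the JSON encoding and the function names are assumptions made for illustration, as the text does not specify how the main-link table is transported.

```python
import json


def publish_main_links(main_links: dict[str, str]) -> str:
    """On the master front-end processor: serialise the full main-link table
    (external system -> front-end processor name) for periodic publication."""
    return json.dumps(main_links, sort_keys=True)


def links_i_must_carry(published: str, my_name: str) -> dict[str, str]:
    """On a standby front-end processor: keep the whole table for reference,
    but extract the main links this node is required to carry."""
    table = json.loads(published)
    return {system: fep for system, fep in table.items() if fep == my_name}


message = publish_main_links({"fire_protection_system": "00_ISCS_FEP1",
                              "ticketing_system": "00_ISCS_FEP2"})
print(links_i_must_carry(message, "00_ISCS_FEP2"))
# -> {'ticketing_system': '00_ISCS_FEP2'}
```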
In other embodiments of the present application, when the main link selected in the foregoing embodiment is abnormal and thus cannot communicate normally, the master front-end processor may determine a new main link for the external system, so as to replace the original main link in which the abnormality has occurred.
Specifically, because the master front-end processor and each standby front-end processor continuously confirm the link states of the main links as described in the foregoing embodiment, an abnormality of any main link, or of the front-end processor carrying it, can be detected in a timely manner.
Further, when any main link or its corresponding standby front-end processor becomes abnormal, the master front-end processor can select a new main link for the affected external system according to the principle of balanced allocation, while ensuring that the new main link is normally connected and that its bit error rate meets the preset bit-error-rate standard.
Further, when the front-end processor corresponding to the failed main link is the master front-end processor itself, a new master front-end processor may also be elected according to the foregoing embodiment.
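The following sketch shows one way to choose a replacement main link on failure, under the reading that the new link must be normally connected, meet the bit-error-rate standard, and respect balanced allocation; `ber_standard` and the tie-breaking rule are assumptions made for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LinkState:
    external_system: str
    front_end: str
    connected: bool
    bit_error_rate: float


def failover_main_link(candidates: list[LinkState],
                       failed_front_end: str,
                       main_link_load: dict[str, int],
                       ber_standard: float = 1e-3) -> LinkState | None:
    """Choose a replacement main link for one external system.

    Links carried by the failed front-end processor are excluded; the new link
    must be normally connected and its bit error rate must meet the assumed
    `ber_standard`; among the remaining links the least-loaded front-end
    processor is preferred (the principle of balanced allocation)."""
    usable = [l for l in candidates
              if l.front_end != failed_front_end
              and l.connected
              and l.bit_error_rate <= ber_standard]
    if not usable:
        return None  # no viable replacement; the external system stays unreachable
    return min(usable, key=lambda l: (main_link_load.get(l.front_end, 0),
                                      l.bit_error_rate))


# Example: FEP1 has failed; FEP3 is picked because it carries fewer main links.
links = [
    LinkState("ticketing_system", "00_ISCS_FEP2", True, 2e-6),
    LinkState("ticketing_system", "00_ISCS_FEP3", True, 3e-6),
]
replacement = failover_main_link(links, "00_ISCS_FEP1",
                                 {"00_ISCS_FEP2": 3, "00_ISCS_FEP3": 1})
print(replacement.front_end if replacement else None)  # -> 00_ISCS_FEP3
```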
Step S103, in response to any front-end processor acquiring data from a corresponding external system over a main link, transmitting the data to the server for processing and synchronizing the data to all other front-end processors.
In the embodiment of the application, based on the main link selected for each external system in the previous step, the integrated processing platform can acquire data from the external system through the main link to process.
In the specific example shown in fig. 2, based on the determined main link, step S207 may be performed, where the external system performs data interaction with the integrated processing platform.
Specifically, after the main link of each external system has been determined, the front-end processor carrying that main link can collect data from the corresponding external system over the main link and send the collected data to the server, thereby realizing data interaction between each external system and the integrated processing platform.
Further, after any of the front-end processors acquires data from the corresponding external system, the data can be sent to all other front-end processors, so that all the front-end processors can synchronize the same data.
In other embodiments of the present application, when a main link becomes abnormal and the master front-end processor selects a new main link for the corresponding external system, the new main link will often belong to a different standby front-end processor; however, because the same data has been synchronized to every standby front-end processor in step S103, the external system can be connected stably over the new main link and its communication interaction with the server can continue immediately through the new standby front-end processor, achieving a completely seamless switchover, as the sketch below illustrates.
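A sketch of the collect-forward-synchronize flow that makes this seamless switchover possible; `FrontEndProcessor`, `server_inbox` and the in-memory cache are simplifications introduced for illustration and are not names used by the application.

```python
from dataclasses import dataclass, field


@dataclass
class FrontEndProcessor:
    name: str
    cache: dict[str, object] = field(default_factory=dict)  # synchronized data


def on_data_collected(source_fep: FrontEndProcessor,
                      peers: list[FrontEndProcessor],
                      server_inbox: list[tuple[str, object]],
                      external_system: str,
                      payload: object) -> None:
    """Forward freshly collected data to the server and mirror it to every peer,
    so any peer can take over the external system without losing state."""
    server_inbox.append((external_system, payload))   # hand the data to the server
    source_fep.cache[external_system] = payload       # keep a local copy
    for peer in peers:                                 # synchronize to all other FEPs
        peer.cache[external_system] = payload


# Example: FEP2 carries the main link of the power system and collects a reading.
fep2 = FrontEndProcessor("00_ISCS_FEP2")
others = [FrontEndProcessor(f"00_ISCS_FEP{i}") for i in (1, 3, 4)]
inbox: list[tuple[str, object]] = []
on_data_collected(fep2, others, inbox, "power_system", {"voltage_kv": 35.1})
print(others[0].cache["power_system"])  # every peer now holds the same data
```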
It can be seen that each front-end processor in the present application can be used as a spare redundant front-end processor, that is, the number of front-end processors in the present application can be flexibly set, and further, the configuration scheme for the spare redundant front-end processor is more flexible, and is no longer a fixed redundancy mode.
Step S104, in response to the server issuing an instruction directed to any external system to each front-end processor, enabling the front-end processor corresponding to the main link of that external system to issue the instruction to the external system.
In the embodiment of the application, each front-end processor can issue the instruction of the integrated processing platform to each corresponding external system so as to enable each external system to execute.
In the specific example shown in fig. 2, based on step S207, step S208 may be further performed, and the subsystem acquires an instruction of the integrated processing platform.
Specifically, after each external system has determined its own primary link, the integrated processing platform may send instructions to the external system.
Wherein the instructions may be directed to one or more external systems.
Based on this, the integrated processing platform can issue the instruction to all of the front-end processors.
Further, a front-end processor receiving the instruction can determine, according to the external systems the instruction is directed to, whether any of the main links it carries is among the main links of those external systems.
Further, when none of those main links is carried by the front-end processor, it may discard the instruction.
Further, when one of those main links is carried by the front-end processor, it sends the instruction to the corresponding external system over that main link, so that the external system executes the instruction.
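A sketch of the forward-or-discard rule each front-end processor applies to a broadcast instruction; the set-of-owned-main-links representation and the `send` callback are assumptions made for illustration.

```python
def dispatch_instruction(owned_main_links: set[str],
                         target_systems: list[str],
                         send) -> list[str]:
    """Run on every front-end processor when the server broadcasts an instruction.

    The instruction is forwarded only for external systems whose main link this
    front-end processor carries; other targets are discarded, as described above."""
    forwarded = []
    for system in target_systems:
        if system in owned_main_links:
            send(system)          # issue the instruction over the owned main link
            forwarded.append(system)
        # else: silently discard; the FEP carrying that main link will handle it
    return forwarded


# Example: this FEP carries the main links of the broadcast and platform door systems.
sent: list[str] = []
result = dispatch_instruction(
    owned_main_links={"broadcast_system", "platform_door_system"},
    target_systems=["broadcast_system", "ticketing_system"],
    send=sent.append,
)
print(result)  # -> ['broadcast_system']; the ticketing instruction is discarded here
```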
It can be seen that, in the configuration method for communication nodes and links according to the embodiments of the present application, a master front-end processor elected from the configured plurality of front-end processors sets a main link for each external system and allocates the main links with the load of each front-end processor taken into account. Because each front-end processor synchronizes the data it exchanges with its external systems to all other front-end processors, when any front-end processor or the main link it carries becomes abnormal, the corresponding external system can be switched quickly and stably to another front-end processor without breaking the continuity of the data. Moreover, electing a master front-end processor from the plurality of front-end processors assigns it the task of managing the other standby front-end processors, and the designed election mechanism ensures that, when the master front-end processor becomes abnormal, a new master front-end processor can be elected quickly and the switchover remains stable.
It should be noted that, the method of the embodiment of the present application may be performed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene, and is completed by mutually matching a plurality of devices. In the case of such a distributed scenario, one of the devices may perform only one or more steps of the method of an embodiment of the present application, the devices interacting with each other to complete the method.
It should be noted that the foregoing describes some embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Based on the same inventive concept, the embodiment of the application also provides a configuration device of the communication node and the link, which corresponds to the method of any embodiment.
Referring to fig. 3, the configuration device of the communication node and the link includes: a main front-end processor election module 301, a main link determination module 302, a data interaction module 303 and an instruction issuing module 304;
the master front-end processor election module 301 is configured to elect a master front-end processor from a plurality of front-end processors according to node information of each front-end processor, where the master front-end processor is respectively connected with each other front-end processor in a communication manner;
the main link determining module 302 is configured to enable the master front-end processor to select, from all communication links of each external system, a link whose link state meets a preset standard as the main link of that external system, and to enable each front-end processor to communicate with each external system over the corresponding main link;
the data interaction module 303 is configured to, in response to any front-end processor obtaining data from a corresponding external system through a main link, send the data to a server for processing, and synchronize the data to all other front-end processors;
the instruction issuing module 304 is configured to, in response to the server issuing an instruction directed to any external system to each front-end processor, cause the front-end processor corresponding to the main link of the external system to issue the instruction to the external system.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, the functions of each module may be implemented in the same piece or pieces of software and/or hardware when implementing an embodiment of the present application.
The device of the foregoing embodiment is configured to implement the configuration method of the corresponding communication node and link in any foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Based on the same inventive concept, corresponding to the method of any embodiment, the embodiment of the application further provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to implement the method for configuring the communication node and the link according to any embodiment.
Fig. 4 shows a more specific hardware architecture of an electronic device according to this embodiment, where the device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 implement communication connections therebetween within the device via a bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit ), microprocessor, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing relevant programs to implement the technical solutions provided by the embodiments of the present application.
The Memory 1020 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory ), static storage device, dynamic storage device, or the like. Memory 1020 may store an operating system and other application programs, and when the embodiments of the present application are implemented in software or firmware, the associated program code is stored in memory 1020 and executed by processor 1010.
The input/output interface 1030 is used to connect with an input/output module for inputting and outputting information. The input/output module may be configured as a component in a device (not shown in the figure) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Communication interface 1040 is used to connect communication modules (not shown) to enable communication interactions of the present device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1050 includes a path for transferring information between components of the device (e.g., processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
It should be noted that although the above-described device only shows processor 1010, memory 1020, input/output interface 1030, communication interface 1040, and bus 1050, in an implementation, the device may include other components necessary to achieve proper operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary for implementing the embodiments of the present application, and not all the components shown in the drawings.
The device of the foregoing embodiment is configured to implement the configuration method of the corresponding communication node and link in any foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Based on the same inventive concept, the present application also provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of configuring a communication node and a link according to any of the embodiments above, corresponding to the method of any of the embodiments above.
The computer readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may be used to implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.
The storage medium of the foregoing embodiments stores computer instructions for causing the computer to perform the configuration method of the communication node and the link according to any one of the foregoing embodiments, and has the advantages of the corresponding method embodiments, which are not described herein.
Those of ordinary skill in the art will appreciate that: the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the application (including the claims) is limited to these examples; the technical features of the above embodiments or in the different embodiments may also be combined within the idea of the application, the steps may be implemented in any order and there are many other variations of the different aspects of the embodiments of the application as described above, which are not provided in detail for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure embodiments of the present application. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring embodiments of the present application, and also in view of the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the embodiments of the present application are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that embodiments of the application can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.
The embodiments of the application are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalents, improvements and the like, which are within the spirit and principles of the embodiments of the application, are intended to be included within the scope of the application.

Claims (9)

1. The configuration method of the communication node and the link is characterized by being applied to an integrated processing platform, wherein the integrated processing platform comprises at least one server and a plurality of front-end processors, and each front-end processor is in communication connection with a plurality of external systems outside the integrated processing platform;
the method comprises the following steps:
selecting a main front-end processor from the plurality of front-end processors according to the node information of each front-end processor, wherein the main front-end processor is respectively in communication connection with each other front-end processor;
the master front-end processor selects, from all communication links of each external system, a link whose link state meets a preset standard as the main link of that external system, allocates the main links among the front-end processors according to a principle of balanced allocation, so that each front-end processor communicates with each external system over the corresponding main link;
wherein the link state includes a bit error rate of the link;
the master front-end processor selecting, from all communication links of each external system, a link whose link state meets a preset standard as the main link of that external system comprises:
the master front-end processor acquires the respective error rates of all communication links of the external system from each other front-end processor;
and taking the communication link with the lowest bit error rate among all the communication links as the main link of that external system;
responding to any front-end processor to acquire data from a corresponding external system through a main link, transmitting the data to the server for processing, and synchronizing the data to all other front-end processors;
and responding to the server to issue instructions pointing to any external system to each front-end processor, and enabling the front-end processor corresponding to the main link of the external system to issue the instructions to the external system.
2. The method of claim 1, wherein before selecting one master front-end processor from the plurality of front-end processors based on the respective node information of each front-end processor, further comprising:
enabling each front-end processor to acquire node information of all front-end processors from the server;
and enabling each front-end processor to determine, from the node information, its own node order and node name and those of every other front-end processor.
3. The method of claim 2, wherein after selecting one master front-end processor from the plurality of front-end processors according to the respective node information of each front-end processor, further comprising:
and responding to the abnormality of the current main front-end processor, and selecting the front one of the node sequences from all other front-end processors as a new main front-end processor according to the node sequence of each front-end processor.
4. The method of claim 1, wherein after the enabling each of the front-end processors to communicate with the external systems according to the corresponding main link, further comprises:
enabling each other front-end processor to send the current link state of the respective main link to the main front-end processor according to a preset first time interval;
after receiving the current link state of the respective main link, the main front-end processor judges whether the current respective main link is abnormal;
and responding to the determination that no abnormal main link exists, continuously issuing each main link to all other front-end processors according to a preset second time interval, and enabling each front-end processor to continuously communicate according to the corresponding main link.
5. The method of claim 4, wherein after said determining whether the respective main link is current abnormal, further comprising:
responding to the failure of the main link of any external system and/or the failure of a front-end processor corresponding to the main link, and selecting a new main link from other communication links of the external system;
and interacting with the server by utilizing the synchronized data in the front-end processor corresponding to the new main link.
6. The method of claim 1, wherein after responding to the server issuing instructions directed to any external system to each of the head-end, further comprising:
and in response to determining that the front end processor corresponding to the non-main link of the external system receives the instruction, the front end processor discards the instruction.
7. A communications node and link configuration apparatus comprising: the system comprises a main front-end processor election module, a main link determining module, a data interaction module and an instruction issuing module;
the main front-end processor election module is configured to elect one main front-end processor from a plurality of front-end processors according to the node information of each front-end processor, and the main front-end processor is respectively in communication connection with each other front-end processor;
the main link determining module is configured to enable the master front-end processor to select, from all communication links of each external system, a link whose link state meets a preset standard as the main link of that external system, to allocate the main links among the front-end processors according to a principle of balanced allocation, and to enable each front-end processor to communicate with each external system over the corresponding main link;
wherein the link state includes a bit error rate of the link;
the master front-end processor selecting, from all communication links of each external system, a link whose link state meets a preset standard as the main link of that external system comprises:
the master front-end processor acquires the respective error rates of all communication links of the external system from each other front-end processor;
and taking the communication link with the lowest bit error rate among all the communication links as the main link of that external system;
the data interaction module is configured to respond to the fact that any front-end processor acquires data from a corresponding external system through a main link, send the data to a server for processing, and synchronize the data to all other front-end processors;
the instruction issuing module is configured to, in response to the server issuing an instruction directed to any external system to each front-end processor, cause the front-end processor corresponding to the main link of the external system to issue the instruction to the external system.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable by the processor, wherein the processor implements the method of any one of claims 1 to 6 when the computer program is executed.
9. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 6.
CN202310884249.2A 2023-07-19 2023-07-19 Communication node and link configuration method and device, storage medium and electronic equipment Active CN116599830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310884249.2A CN116599830B (en) 2023-07-19 2023-07-19 Communication node and link configuration method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN116599830A CN116599830A (en) 2023-08-15
CN116599830B true CN116599830B (en) 2023-10-10

Family

ID=87594196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310884249.2A Active CN116599830B (en) 2023-07-19 2023-07-19 Communication node and link configuration method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116599830B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102158387A (en) * 2010-02-12 2011-08-17 华东电网有限公司 Protection fault information processing system based on dynamic load balance and mutual hot backup
CN103618671A (en) * 2013-11-20 2014-03-05 国家电网公司 Large-scale data acquisition service multi-group distribution system and distribution method thereof
CN109495588A (en) * 2018-12-25 2019-03-19 鼎信信息科技有限责任公司 Task processing method, device, computer equipment and the storage medium of front end processor
CN111913387A (en) * 2020-08-07 2020-11-10 卡斯柯信号有限公司 System for redundancy and load balancing of multiple acquisition devices based on soft bus
CN115421900A (en) * 2022-07-08 2022-12-02 南京国电南自电网自动化有限公司 Preposed data acquisition method, system and storage medium
CN115695156A (en) * 2022-10-27 2023-02-03 宝信软件(成都)有限公司 Communication front-end processor port management system, port fault handling method and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080281938A1 (en) * 2007-05-09 2008-11-13 Oracle International Corporation Selecting a master node in a multi-node computer system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a security operation and maintenance scheme for service platforms; 邵一波; 迪普·下一代网络论坛; full text *

Also Published As

Publication number Publication date
CN116599830A (en) 2023-08-15

Similar Documents

Publication Publication Date Title
CN107526659B (en) Method and apparatus for failover
CN101557315B (en) Method, device and system for active-standby switch
US20120215876A1 (en) Information processing system
CN104798349A (en) Failover in response to failure of a port
JP4695705B2 (en) Cluster system and node switching method
CN109802986B (en) Equipment management method, system, device and server
CN107659948B (en) Method and device for controlling access of AP (access point)
US20170264525A1 (en) System for support in the event of intermittent connectivity, a corresponding local device and a corresponding cloud computing platform
CN114265753A (en) Management method and management system of message queue and electronic equipment
EP3813335B1 (en) Service processing methods and systems based on a consortium blockchain network
US9032118B2 (en) Administration device, information processing device, and data transfer method
CN103324554A (en) Standby system device, a control method, and a program thereof
CN112363815B (en) Redis cluster processing method and device, electronic equipment and computer readable storage medium
CN116599830B (en) Communication node and link configuration method and device, storage medium and electronic equipment
US20120239988A1 (en) Computing unit, method of managing computing unit, and computing unit management program
CN114765706A (en) Method and device for triggering vOMCI function from OLT to send OMCI message
US11372463B1 (en) Power down of power over ethernet interfaces
CN114944697A (en) Power supply device and method and cabinet-level server
US7788379B2 (en) Network system and information processing method
CN114884959A (en) Deployment method of multi-cloud and multi-activity architecture and related equipment
CN108717384B (en) Data backup method and device
CN114424170A (en) Operation management apparatus, system, method, and non-transitory computer-readable medium storing program
CN112463514A (en) Monitoring method and device for distributed cache cluster
CN110955210B (en) AGV scheduling method, device and system
CN113452767B (en) Load balancing method and device applied to service cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant