CN116708596A - High concurrency data processing method, device and storage medium based on Ethernet/IP - Google Patents

High concurrency data processing method, device and storage medium based on Ethernet/IP

Info

Publication number
CN116708596A
CN116708596A (application CN202310701195.1A)
Authority
CN
China
Prior art keywords
data
message
event
target
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310701195.1A
Other languages
Chinese (zh)
Inventor
吴智利
张晓霞
代芳琳
石磊
施展
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Technology Research Branch Of Tiandi Technology Co ltd
General Coal Research Institute Co Ltd
Original Assignee
Beijing Technology Research Branch Of Tiandi Technology Co ltd
General Coal Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Technology Research Branch Of Tiandi Technology Co ltd, General Coal Research Institute Co Ltd filed Critical Beijing Technology Research Branch Of Tiandi Technology Co ltd
Priority to CN202310701195.1A priority Critical patent/CN116708596A/en
Publication of CN116708596A publication Critical patent/CN116708596A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/12Protocol engines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/542Event management; Broadcasting; Multicasting; Notifications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/03Protocol definition or specification 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/24Negotiation of communication capabilities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5011Pool
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer And Data Communications (AREA)

Abstract

The disclosure provides a high concurrency data processing method, device and storage medium based on Ethernet/IP. The method includes: receiving message data sent by a plurality of node devices; sequentially caching the message data to a first message queue in the form of data response events; allocating an event handler component to each data response event in the first message queue; each event handler component processing its allocated data response event based on a preconfigured CIP object model to obtain target data; and storing the target data in a data storage module. Because each node device communicates over the Ethernet/IP protocol and the data response events are processed with CIP object models built from unified rules, data interconnection is achieved. In addition, this embodiment uses the message queue to achieve multiplexing and high concurrency processing of the data.

Description

High concurrency data processing method, device and storage medium based on Ethernet/IP
Technical Field
The disclosure relates to the technical field of industrial data processing, and in particular to a high concurrency data processing method, device and storage medium based on Ethernet/IP.
Background
With the rapid development of big data, artificial intelligence, Internet of Things and related technologies, the informatization of coal mines is steadily advancing toward intelligent mines. In fieldbus applications, differing business requirements and technical specifications across manufacturers have gradually produced hundreds of subsystems (node devices). These subsystems, with their varied functions and communication modes, each serve a production link underground, but their heterogeneity is a clear obstacle to mine-wide intelligence. Moreover, in practical scenarios the coal-mine data processing system must handle data from every node device; as the mine system grows and the number of node devices increases, the data concurrency and high-performance processing capability of the data processing system become critical.
Disclosure of Invention
The present disclosure proposes a high concurrency data processing method, device and storage medium based on Ethernet/IP, aiming to solve, at least to some extent, one of the technical problems in the related art.
An embodiment of the first aspect of the present disclosure provides a high concurrency data processing method based on Ethernet/IP, applied to a data processing system that communicates with a plurality of node devices over the Ethernet/IP protocol. The method includes: receiving message data sent by the plurality of node devices, and sequentially caching the message data to a first message queue in the form of data response events; assigning an event handler component to each data response event in the first message queue; each event handler component processing its assigned data response event based on a preconfigured CIP object model to obtain target data; and storing the target data to a data storage module.
An embodiment of the second aspect of the present disclosure provides a high concurrency data processing apparatus based on Ethernet/IP, applied to a data processing system that communicates with a plurality of node devices over the Ethernet/IP protocol. The apparatus includes: a first receiving module for receiving message data sent by the plurality of node devices and sequentially caching the message data to a first message queue in the form of data response events; an allocation module for allocating an event handler component to each data response event in the first message queue; a processing module by which each event handler component processes its assigned data response event based on a preconfigured CIP object model to obtain target data; and a storage module for storing the target data to a data storage module.
An embodiment of a third aspect of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the Ethernet/IP based high concurrency data processing method of the embodiments of the present disclosure.
A fourth aspect of the present disclosure proposes a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the Ethernet/IP-based high concurrency data processing method disclosed in the embodiments of the present disclosure.
In this embodiment, message data sent by a plurality of node devices is received and buffered in turn in a first message queue in the form of data response events; an event handler component is allocated to each data response event in the first message queue; each event handler component processes its allocated data response event based on a preconfigured CIP object model to obtain target data; and the target data is stored in a data storage module. Because each node device communicates over the Ethernet/IP protocol and the data response events are processed with CIP object models built from unified rules, data interconnection is achieved. In addition, this embodiment uses the message queue to achieve multiplexing and high concurrency processing of the data.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a schematic flow chart of a high concurrency data processing method based on Ethernet/IP according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a data acquisition processing system architecture provided in accordance with an embodiment of the present disclosure;
fig. 3 is a flow chart of a high concurrency data processing method based on Ethernet/IP according to another embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a data processing system interacting with an application layer according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of an Ethernet/IP based high concurrency data processing apparatus provided according to another embodiment of the present disclosure;
fig. 6 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present disclosure; however, the disclosure may also be practiced in ways other than those described herein. It will be apparent that the embodiments in this specification are only some, not all, of the embodiments of the disclosure.
It should be noted that the execution body of the Ethernet/IP-based high concurrency data processing method in this embodiment may be an Ethernet/IP-based high concurrency data processing apparatus. The apparatus may be implemented by software and/or hardware and may be configured in an electronic device, which may include, but is not limited to, a terminal, a server, and so on.
Fig. 1 is a flow chart of a high concurrency data processing method based on Ethernet/IP according to an embodiment of the present disclosure, where the method is applied to a data processing system, as shown in fig. 1, and the method includes:
s101: and receiving message data sent by a plurality of node devices, and sequentially caching the message data to a first message queue in the form of a data response event.
In the embodiments of the present disclosure, the data processing system (which may also be called a data acquisition server) may be a data processing system applied in any industrial scenario; it communicates with a plurality of node devices (which may also be called subsystems) over the Ethernet/IP protocol (abbreviated "EIP").
In a specific application scenario, the data processing system may be the data acquisition processing system of a coal mine, and a node device is any electronic device that collects data in the coal mine, for example various sensors, controllers, and the like. After collecting data, a node device sends it to the data processing system as messages (i.e., message data) for processing.
To achieve high concurrency over the events of the node devices, the data acquisition processing system of this embodiment may adopt, for example, a producer-consumer design pattern, with the data processing system acting as the consumer and each node device acting as a producer. FIG. 2 is a schematic diagram of a data processing system architecture provided according to an embodiment of the present disclosure. As shown in FIG. 2, it includes, for example, an ESDK (EtherNet/IP Scanner Developers Kit) protocol stack component (the EIP protocol stack in the figure), a scheduler, event handler components (i.e., EIP Event Handler), CIP object models (i.e., the EIP device models), a data storage module, and any other possible modules, where "EIP Adapter #1", "EIP Adapter #2", "EIP Adapter #x" in FIG. 2 represent the plurality of node devices.
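The producer-consumer arrangement described above can be sketched as follows. This is an illustrative sketch only: the queue, device identifiers and payloads are assumptions made for the example, not taken from the patent.

```python
import queue
import threading

# Shared buffer between producers (node devices) and the consumer
# (data processing system).
event_queue = queue.Queue()

def node_device(device_id, samples):
    """Producer: a node device emits message data into the queue."""
    for value in samples:
        event_queue.put({"device": device_id, "payload": value})

def data_processor(expected):
    """Consumer: the data processing system drains the queue."""
    received = []
    for _ in range(expected):
        received.append(event_queue.get())
    return received

# Three hypothetical node devices, each producing two messages.
producers = [
    threading.Thread(target=node_device, args=(i, [i * 10, i * 10 + 1]))
    for i in range(3)
]
for t in producers:
    t.start()
for t in producers:
    t.join()

results = data_processor(expected=6)
```

The point of the pattern is that producers never wait on the consumer's processing speed; the queue absorbs bursts from many devices at once.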
The CIP object model is an object-based solution for network device (i.e., node device) interoperability and interchangeability, serving as the communication protocol for automated data transfer between devices. CIP treats each network device as a collection of objects; each object is a set of device-related data, and each item of device data is an attribute of an object. A device on the network is fully defined by its object descriptions. CIP provides end users with the essential control, configuration and data acquisition services of an automation system, and gives the automation field interoperability and interchangeability of industrial automation devices over Ethernet. To further improve interchangeability in a CIP network containing node devices from multiple vendors, CIP defines a standard set of object model specifications, or "device profiles". A device implemented according to a standard device profile responds to the same commands and exhibits the same network behavior as other devices implementing the same profile. The specification includes the following:
The CIP object model describes the data each object contains in terms of Class, Instance and Attribute, and describes the operations an object supports using Service Codes. A class is an abstract collection of instances with similar functionality; an instance is a specific object within a class. Different instances may have different attributes, but an attribute shared by all instances of a class is called a class attribute. The object model specification addresses, in a uniform way, every device node running on the CIP network and the data accessible within it, using: the Node Address, Class ID, Instance ID, Attribute ID and Service Code. The node address is an integer value assigned to the device node on the CIP network (typically an IP address on an EtherNet/IP network, and a MAC address on DeviceNet and ControlNet networks). The class identifier is an integer value assigned to each class accessible on the network. The instance identifier is an integer value that distinguishes the different instances of a class. The attribute identifier is an integer value numbering an attribute field of a class or instance. The service code is an integer value denoting the requested service for a particular object instance or class.
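As an illustration (not code from the patent), the five-part CIP address described above can be modeled as a simple record; the example uses service code 0x0E, which is the standard CIP Get_Attribute_Single service, and an IP address from the table later in this document.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CipAddress:
    """The five-part address of the CIP object model specification."""
    node_address: str   # IP address on an EtherNet/IP network
    class_id: int       # identifies the object class
    instance_id: int    # distinguishes instances within the class
    attribute_id: int   # numbers the attribute field
    service_code: int   # requested service, e.g. 0x0E = Get_Attribute_Single

# Example: read an attribute of the Identity object (class 0x01)
# on a device at a given node address.
addr = CipAddress("172.20.1.10", 0x01, 1, 7, 0x0E)
```

Making the address an immutable value type mirrors how the specification treats it: a complete path that uniquely identifies one piece of data on one device.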
Moreover, the CIP object model is a collection of objects, including the following:
1. Connection manager. Responsible for managing the opening and closing of connections on the network, and providing a transport target for implicit and explicit connection requests.
2. Unconnected Message Manager (UCMM). Provides message services across the network by parsing messages in the unconnected transport mode, and supports message duplication detection and retry. Note that the UCMM in CIP is not a real object but a functional component tied to the product implementation.
3. Message router. Receives explicit messages from the UCMM or the transport layer, strips the message header, parses the data, and routes to the target object according to the class and attribute path to be accessed.
4. Identity object. Contains all services and attributes related to the network when the product joins the network, such as device-related information including the Vendor ID, IP address and port number.
5. Network-specific objects. Provide configuration and status information for the network infrastructure; for example, the DeviceNet and ControlNet networks each use different network-specific object classes.
6. Application objects.
7. Combined objects. Enable the transmission and reception of node data on the network.
A device in the CIP protocol contains three types of objects: required objects, application objects, and vendor-defined objects.
Required objects are those that must be implemented to meet the basic requirements of a device and are a precondition for device interchangeability and interoperability; they include connection objects (explicit and implicit message connection objects), the message router object, the identity object and the network-specific object.
Application objects define the data a device exposes; they differ from device to device. For example, a motor object in a drive system includes attributes describing frequency, rated current and motor size; an analog input object of an I/O device includes attributes such as the analog input type and the current value.
Vendor-defined objects are special objects not specified in the profiles but built by the vendor itself; they are accessed by the same methods as required objects and application objects. Application objects and vendor-defined objects are not mandatory.
The combined object (the CIP assembly object, whose class code is fixed at 0x04) provides a mechanism by which attributes of different classes and objects can be flexibly combined and mapped into a single attribute of the combined object. This mapping mechanism can greatly improve the efficiency of information exchange on the network. The combined object provides a producer-consumer messaging model, implemented over UDP, in which produced messages can be sent to all consumers. CIP distinguishes input and output combined objects; "input" and "output" refer to the direction of data relative to the combined object itself. An input object collects information from the device's other objects into itself and sends it onto the network; an output object receives network data and writes it into the attributes of other application objects.
The CIP object model also defines explicit and implicit connection communication. CIP is a connection-based protocol: when a connection is established, both parties negotiate and are assigned a unique Connection Identifier (CID), whose definition and format are network specific. With the CID, a connected message need not carry all connection-related information, only the CID, which improves the transmission efficiency of the network. Connections are established through the Unconnected Message Manager (UCMM). UCMM messages are a way to send data requests to devices with which no connection has yet been established, and the UCMM object handles the requests and responses of unconnected explicit messages. A communication connection is established by a UCMM Forward_Open service request carrying the connection request parameters (such as the message timeout, the maximum data window size, and unicast or multicast), and the connection is identified by a unique CID. If a bidirectional connection is initiated between two devices, each party holds two CIDs. Within the CIP protocol family, ControlNet, EtherNet/IP and CompoNet support access to application objects via UCMM messages (i.e., object data within a device can be accessed by UCMM messages even in the unconnected state), whereas in DeviceNet, UCMM messages can only be used to initiate connection requests. All connections on a CIP network can be classified as either I/O (implicit) connections or explicit messaging connections. Explicit messaging provides a generic, flexible communication path between two devices with a typical request/response scheme, supporting only point-to-point communication over TCP/IP.
The meaning of "explicit" is that the class, object, attribute and requested service code to be accessed are specified in the request message itself. I/O connections provide a dedicated, special-purpose communication path between two devices, or between a single device and multiple devices. Application data is transmitted over I/O connections, which run over UDP and can support unicast or multicast communication; these are also called implicit connections. The meaning of "implicit" is that the source and composition of the message data are identified by the CID.
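A minimal sketch of the connection negotiation described above, using hypothetical names (this is not the patent's or the CIP stack's actual API): a Forward_Open-style request carries the negotiation parameters, and each successfully opened connection receives a unique CID.

```python
import itertools

# Monotonic source of connection identifiers; in real CIP the CID is
# negotiated between the two endpoints and is network specific.
_cid_counter = itertools.count(1)

def forward_open(timeout_ms, max_data_window, multicast=False):
    """Simulate a Forward_Open-style request; returns (CID, agreed params)."""
    params = {
        "timeout_ms": timeout_ms,        # message timeout
        "max_data_window": max_data_window,
        "multicast": multicast,          # unicast vs. multicast delivery
    }
    cid = next(_cid_counter)             # unique per connection
    return cid, params

# A bidirectional link would be two such connections, hence two CIDs.
cid_a, _ = forward_open(timeout_ms=2000, max_data_window=500)
cid_b, _ = forward_open(timeout_ms=2000, max_data_window=500, multicast=True)
```

Once the CID exists, subsequent connected messages need only carry it rather than repeating the negotiated parameters, which is the efficiency gain the text describes.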
In the embodiments of the disclosure, a CIP object model can be constructed for each node device based on the standard object model specification.
In practical application, the data processing system first loads the node information of the node devices whose data is to be collected (for example, "EIP Adapter #1" and "EIP Adapter #2"); the node information includes, without limitation, the IP address of the node device, the device name, and so on. It then loads the CIP object model corresponding to each node device; the CIP object model information is stored in a database (for example, a MySQL database). Next, the data processing system initiates a connection request to each node device according to its node information and waits for the connection result; if a connection fails (the network is unreachable or the node is offline), the system retries repeatedly until the connection succeeds. After connecting to a node device, the system requests data from it; within the system, the ESDK protocol stack can act as a gateway responsible for initiating connection requests to the node devices and requesting data. Each node device then responds to the request by collecting data and sending the collected data to the data processing system in messages.
Accordingly, the data processing system of this embodiment first receives the message data of each node device through the ESDK protocol stack. The ESDK protocol stack then submits the message data sent by each node device to the scheduler in the form of an event (i.e., a data response event). The scheduler buffers the data response events of one or more node devices, in a non-blocking manner, into its configured message queue, here called the first message queue (corresponding to the EIP event queue in fig. 2).
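The non-blocking buffering step can be sketched as follows; the queue bound, the event shape and the function names are assumptions made for illustration. The protocol-stack callback must return quickly, so the event is enqueued without waiting, and a full queue is reported to the caller instead of blocking it.

```python
import queue

# Bounded first message queue; the bound (1024) is an assumed value.
first_message_queue = queue.Queue(maxsize=1024)

def on_message(device_id, raw_bytes):
    """Called by the protocol stack when message data arrives.

    Wraps the message as a data response event and enqueues it
    without blocking, so the stack callback returns immediately.
    """
    event = {"type": "data_response", "device": device_id, "data": raw_bytes}
    try:
        first_message_queue.put_nowait(event)   # non-blocking enqueue
        return True
    except queue.Full:
        return False                            # caller decides: drop or retry

ok = on_message("EIP Adapter #1", b"\x01\x02")
```

Returning control to the stack immediately is what lets one receive path serve many node devices concurrently; the parsing work happens later, on the consumer side of the queue.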
S102: each data response event in the first message queue is assigned an event handler component.
In an embodiment of the present disclosure, the data processing system may configure the event handler components (EIP Event Handler) according to the number of node devices; that is, each node device may correspond to one event handler component. After the data processing system connects to a node device, the event handler component of this embodiment dynamically builds the persisted CIP object model information from the database and registers itself with the scheduler, so as to accept the scheduler's dispatching.
After the event handler component loads the persisted CIP object model, the model is held in memory in the form of the following tables:
Table 1 Communication configuration storage structure

    Device ID    Network Address
    1            172.20.1.10

Table 2 Application object storage structure (determined at runtime)
Table 3 Combined object storage structure (determined at runtime)
Each device that can actually communicate is assigned a Device ID, and its IP address is recorded. In the EIP protocol, the explicit connection port number is fixed at 0xAF12 and the implicit connection port number is fixed at 0x08AE, so port information does not need to be recorded. According to the ESDK protocol stack, at system startup each device is assigned one explicit connection for cyclic access to application object data (service code: Get_Attr_Lst), and each combined object of a device must separately maintain an implicit connection for receiving implicit datagrams. After a connection is established, the CID recorded for the session is updated into the corresponding data structure in the handler component. An application object is identified by the structured triple (class, instance, attribute), with a data type field added to mark the data length. This approach can accommodate existing application objects as well as future application object extensions without program modification. An offset field is added to the mapping between combined-instance data and the specific attributes of application objects, so that during parsing the uninteresting parts of an implicit datagram can be skipped directly and only the specific data the system cares about is parsed. The tables above describe only the minimum fields required by the communication flow; when persisted to the database, information such as device descriptions, application object attribute descriptions and data scaling parameters can be added as needed.
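The offset-based parsing described above might look like the following sketch; the mapping entries, attribute names and datagram layout are invented for the example. Each mapping entry carries the byte offset and data type of one interesting attribute, and everything else in the implicit datagram is skipped.

```python
import struct

# (attribute_name, offset_in_datagram, struct format marking the data type)
# These entries are hypothetical, standing in for the combined-object mapping.
mapping = [
    ("frequency",     4, "<H"),   # 16-bit little-endian value at byte 4
    ("rated_current", 8, "<I"),   # 32-bit little-endian value at byte 8
]

def parse_implicit_datagram(datagram, mapping):
    """Decode only the mapped attributes, skipping uninteresting bytes."""
    out = {}
    for name, offset, fmt in mapping:
        (out[name],) = struct.unpack_from(fmt, datagram, offset)
    return out

# A 12-byte datagram: 4 padding bytes, frequency=50, 2 padding bytes,
# rated_current=12.
datagram = (bytes(4) + struct.pack("<H", 50)
            + bytes(2) + struct.pack("<I", 12))
values = parse_implicit_datagram(datagram, mapping)
```

Because `unpack_from` reads directly at each recorded offset, no bytes between the interesting fields are ever decoded, which is exactly the saving the offset field is meant to provide.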
In addition to the data model above, the event handling functions in the model include: a connection request initiation function (explicit/implicit), connection established/timeout/disconnected event response functions, an explicit response datagram parsing function, an explicit data request function, and an implicit datagram parsing function. After a component registers with the scheduler, the scheduler actively invokes it when an event arrives.
After buffering the data response events into the first message queue, the scheduler of this embodiment may further use an event loop thread to assign each data response event in the first message queue to an event handler component through a multiplexing allocation policy.
In this embodiment, the scheduler creates an event queue (the first message queue) so that data arrival events are decoupled from the work of data parsing. When new message data arrives, it is cached in the queue as a response event so that the protocol stack call returns as soon as possible, and a separate worker thread drains the event queue and assigns event handler components synchronously and in order. This achieves high network concurrency, improves the system's data throughput, and avoids uneven busy/idle load.
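The dispatch step can be sketched as follows, assuming a per-device handler registry; the routing key ("device") and the sentinel-based shutdown are illustrative choices, not details from the patent. A single event loop thread drains the queue and routes each data response event to the handler registered for its device.

```python
import queue
import threading

first_message_queue = queue.Queue()
handled = {}  # device -> list of payloads, to observe the routing

def handler_for_device(device_id):
    """Build an event handler component bound to one device."""
    def handle(event):
        handled.setdefault(device_id, []).append(event["data"])
    return handle

# Registry of handler components, one per (hypothetical) device.
handlers = {f"dev{i}": handler_for_device(f"dev{i}") for i in range(2)}

def event_loop():
    """Worker thread: drain the queue and dispatch in arrival order."""
    while True:
        event = first_message_queue.get()
        if event is None:           # sentinel: stop the loop
            break
        handlers[event["device"]](event)

worker = threading.Thread(target=event_loop)
worker.start()
for i, dev in enumerate(["dev0", "dev1", "dev0"]):
    first_message_queue.put({"device": dev, "data": i})
first_message_queue.put(None)
worker.join()
```

Note that ordering is preserved per device because a single thread performs the dispatch; parallelism comes from the handler components doing their parsing independently of the arrival path.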
S103: each event handler component processes the assigned data response event based on a preconfigured CIP object model to obtain target data.
The CIP object model regards each node device as a collection of objects and corresponds one-to-one with the node devices in the network; it is used to manage the communication session identifiers, application objects, and combined object data shared between the model and the specific device, as well as to parse data requests and response data.
In this embodiment, each event handler component parses its assigned data response event (message data, which may be implicit data or explicit data) based on the configured CIP object model to determine the data to be collected, referred to as target data; the target data is the actual data collected from the node device. The event handler components can thus parse the message data of the node devices in parallel to obtain the required target data, achieving highly concurrent event response.
S104: the target data is stored to the data storage module.
That is, the data processing system stores the processed target data to the data storage module.
In this embodiment, message data sent by a plurality of node devices is received and buffered in turn into a first message queue in the form of data response events; an event handler component is assigned to each data response event in the first message queue; each event handler component processes its assigned data response event based on a preconfigured CIP object model to obtain target data; and the target data is stored in the data storage module. Each node device can thus communicate using the Ethernet/IP protocol, and the data response events are processed by CIP object models constructed under a unified rule, realizing data interconnection. In addition, this embodiment uses message queues to achieve multiplexing and highly concurrent data processing.
Fig. 3 is a flow chart of a high concurrency data processing method based on Ethernet/IP according to another embodiment of the present disclosure, as shown in fig. 3, the method includes:
S301: receive message data sent by a plurality of node devices, and sequentially cache the message data to a first message queue in the form of data response events.
The specific description of S301 is referred to the above embodiments, and is not described herein.
S302: and acquiring the identity information carried by each data response event, and determining a target CIP object model corresponding to the identity information.
In the embodiment of the present disclosure, the message data of a node device may carry identity information of the message (CID information), such as the node device number or name, without limitation. Moreover, each CIP object model of this embodiment may have corresponding identity information.
For example, the identity information carried by the message data sent by the three node devices "EIP adapter#1", "EIP adapter#2" and "EIP adapter#3" are "#1", "#2", "#3" in sequence, and the identity information corresponding to the CIP object model of the node device is "#1", "#2", "#3" in sequence.
In this case, when assigning an event handler component to each data response event in the first message queue, this embodiment first obtains the identity information carried by each data response event (i.e., each piece of message data), and then determines the CIP object model corresponding to that identity information, referred to as the target CIP object model. For example, if the node device is "EIP Adapter #1", the corresponding target CIP object model is "#1".
S303: the data response events are distributed to event handler components that configure the target CIP object model.
Further, the data response event is assigned to an event handler component that configures the target CIP object model.
For example, the data response event of "EIP Adapter #1" is assigned to the event handler component deploying the "#1" CIP object model, the data response event of "EIP Adapter #2" to the event handler component deploying the "#2" CIP object model, and the data response event of "EIP Adapter #3" to the event handler component deploying the "#3" CIP object model.
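Steps S302 and S303 amount to a two-stage lookup. A minimal sketch, with hypothetical table and handler names (only the "#1".."#3" identifiers follow the example in the text), might look like this:

```python
# Hypothetical routing tables; the model and handler names are assumptions.
MODEL_FOR_IDENTITY = {"#1": "cip_model_1", "#2": "cip_model_2", "#3": "cip_model_3"}
HANDLER_FOR_MODEL = {"cip_model_1": "handler_1",
                     "cip_model_2": "handler_2",
                     "cip_model_3": "handler_3"}

def assign(event):
    """S302/S303: identity -> target CIP object model -> handler component."""
    target_model = MODEL_FOR_IDENTITY[event["identity"]]
    return HANDLER_FOR_MODEL[target_model]
```

In this scheme the identity carried by the message fully determines which handler component, configured with the matching CIP object model, receives the event.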
The specific description of S303 is referred to the above embodiments, and will not be repeated here.
S304: and receiving the control message sent by the application layer, and determining a target event handler component matched with the control message.
The application layer is operated by a user; as shown in fig. 2, it includes various applications, such as advanced applications, control-related applications, and other applications, without limitation.
While data response events are being processed, as shown in fig. 2, the data processing system of this embodiment may further receive a control message (control request) sent by the application layer; specifically, the control message sent by the application layer reaches the control message gateway of the scheduler through the broker. The control message is used to control a node device, for example to make a node device (e.g., a water pump) open or close a valve, or to start/stop a conveyor belt, a coal mining machine, a loader, or any other controllable node device, without limitation.
Further, the scheduler determines the event handler component that matches the control message, i.e., it schedules the control message. The event handler component matched to each control message is referred to as the target event handler component; that is, control messages from different applications need to be matched to different event handler components for processing.
For example, if control message 1 is used to make the node device "EIP Adapter #1" (e.g., a water pump) stop running, then the target event handler component corresponding to control message 1 is the event handler component configured with the "#1" CIP object model.
S305: the control message is cached in a second message queue configured by the target event handler component.
In addition, the control messages (control requests) of this embodiment may also be routed through message queues during execution. Specifically, this embodiment may configure a message queue in each event handler component, referred to as the second message queue (corresponding to the "control request queue" in fig. 2). After control messages are received from the application layer, each control message may be cached to the second message queue configured in the corresponding target event handler component.
S306: each event handler component processes the assigned data response event based on a preconfigured CIP object model to obtain target data.
While processing data response events, each event handler component first checks whether its second message queue is empty. If it is empty, the component processes its assigned data response events; if it is not empty, the component interrupts its polling of data response events and responds to the control messages buffered in the second message queue. That is, each event handler component executes application-layer control messages with priority.
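One scheduling step of an event handler component under this priority rule can be sketched as follows; the `apply_control` and `parse` callbacks are assumed placeholders for the component's actual control and parsing logic:

```python
import queue

def handler_step(control_queue, data_events, apply_control, parse):
    """One scheduling step of an event handler component: drain any
    pending control messages first, then poll one data response event."""
    while not control_queue.empty():       # control messages take priority
        apply_control(control_queue.get())
    if data_events:
        return parse(data_events.pop(0))   # resume normal polling
    return None
```

The loop over the control queue before touching the data events is what realizes "control messages are executed preferentially".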
In some embodiments, fig. 4 is a schematic diagram of the interaction between the data processing system and the application layer. As shown in fig. 4, the "application" is the application layer and the "front end" corresponds to the data processing system of this embodiment. The application layer first sends a control request (carrying the control message) to the data processing system and starts clock timer 1 (the first timer). The data processing system receives the control request and checks it to decide whether the request can be executed by the system; if the check passes, it determines the target event handler component matched to the control message to process the request and sends a request acknowledgement message to the application layer. The application layer then faces several cases. Case 1: the application layer receives a failed request acknowledgement message (for example, the result field of the message is "fail") and determines that control is terminated. Case 2: the application layer receives a successful request acknowledgement message (for example, the result field is "success"); the first timer is then stopped and a second timer (clock timer 2) is started. Case 3: the application layer receives no request acknowledgement message before the first timer expires, and determines that control is terminated. After the data processing system processes the request, it sends a control response message to the application layer so that the application layer can display the execution result of the control message, and starts a third timer (clock timer 3); if the third timer expires, the data processing system resends the control response message, with at most 3 retransmissions. After starting the second timer, the application layer waits for the control response message; if none is received before the timer expires, control is judged to have failed. After the application layer successfully receives the control response message, it displays the control result and sends a response acknowledgement message to the data processing system; upon receiving the response acknowledgement message, the data processing system ends the processing of the control message. In this way, the embodiment can respond to application-layer requests while collecting data.
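The retransmission rule for the control response — resend on each expiry of the third timer, at most three times — can be sketched as follows; the `send` and `wait_for_ack` callbacks are assumptions standing in for the actual message transport:

```python
MAX_RESENDS = 3   # per the text, at most three retransmissions

def deliver_response(send, wait_for_ack, timeout=1.0):
    """Send the control response; each time the (third) timer expires
    without a response acknowledgement, retransmit, giving up after
    MAX_RESENDS retransmissions."""
    send()
    for _ in range(MAX_RESENDS):
        if wait_for_ack(timeout):   # response acknowledgement arrived
            return True
        send()                      # timer expired: retransmit
    return wait_for_ack(timeout)    # last wait after the final resend
```

With this shape, the message is transmitted at most four times in total: the initial send plus three retransmissions.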
S307: and sequentially caching the target data corresponding to each data response event to the third message queue.
Specifically, as shown in fig. 2, the data storage module of the present embodiment may configure a message queue, which is referred to as a third message queue (corresponding to the task queue in fig. 2).
In practical applications, as the number of connected node devices increases, data concurrency rises and database accesses become more frequent. In view of this, this embodiment may configure a connection pool for the data storage module. The pool holds a plurality of database connections used to read and write the database, and each connection may be in a different state, such as "idle", "in use", or "locked". By establishing a database connection pool with a matching connection-management strategy, database connections can be reused efficiently and safely, avoiding the overhead of frequently opening and closing connections. At system start-up, the connection pool pre-creates a number of idle connections for subsequent use. As concurrency grows and the number of available idle connections becomes insufficient, the pool automatically creates new connections until its configured maximum total is reached. Conversely, when the pool holds many idle connections during operation, surplus connections are destroyed so that the number of idle connections settles at a dynamically stable value.
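A minimal sketch of such a pool — pre-created idle connections, growth on demand up to a configured maximum, reuse on release — might look like this (the shrinking of surplus idle connections and the state bookkeeping are omitted for brevity; names are illustrative):

```python
import threading

class ConnectionPool:
    """Pre-create idle connections, grow on demand up to max_total,
    reuse on release. Destruction of surplus idle connections omitted."""
    def __init__(self, connect, initial=2, max_total=5):
        self._connect = connect
        self._max = max_total
        self._lock = threading.Lock()
        self._idle = [connect() for _ in range(initial)]  # pre-created at start-up
        self._total = initial

    def acquire(self):
        with self._lock:
            if self._idle:
                return self._idle.pop()    # reuse an idle connection
            if self._total < self._max:
                self._total += 1
                return self._connect()     # grow toward the configured maximum
            raise RuntimeError("connection pool exhausted")

    def release(self, conn):
        with self._lock:
            self._idle.append(conn)        # back to "idle" for reuse
```

The lock makes `acquire`/`release` safe to call from multiple worker threads, which is the situation described in the text.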
In addition, this embodiment configures a thread pool for the data storage module, containing a plurality of worker threads (e.g., Thread #1, Thread #2, and so on). The number of worker threads is preset when the pool is created, and the worker threads take tasks out of the task queue and execute them.
When storing the target data of each node device to the data storage module, the embodiment of the disclosure may encapsulate the storage of each piece of target data as a task object and first cache the target data to the third message queue as a data stream.
S308: each thread in the call thread pool obtains target data of a data response event from the third message queue, allocates a database link with idle state from the connection pool, and sets the database link to a locking state.
Further, a thread from the thread pool acquires the target data of a data response event from the third message queue, i.e., takes a task object out of the queue; a task router then allocates an idle database link from the connection pool to each task object (i.e., to each piece of target data) and changes the link's state from "idle" to "locked".
S309: the target data is stored to the data storage module based on the database link.
Further, based on the locked database link, the corresponding target data is stored to the data storage module.
In some embodiments, after the data is stored in the data storage module, the database link is released and set back to the idle state to await the next task, so that the database links in the connection pool can be recycled.
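The worker-thread flow of S308 and S309, including the release back to the idle state described above, can be sketched as follows; the queue sentinel and the `write` callback are assumptions, not the patent's actual interfaces:

```python
import queue

def store_worker(task_queue, pool, write):
    """Worker-thread body: take a task object from the third message queue,
    borrow a database link (idle -> locked), persist the target data,
    then release the link (locked -> idle)."""
    while True:
        task = task_queue.get()
        if task is None:           # sentinel: shut the worker down
            break
        conn = pool.acquire()      # allocate an idle link; it is now "locked"
        try:
            write(conn, task)      # store the target data via this link
        finally:
            pool.release(conn)     # link returns to "idle" for reuse
```

Releasing in a `finally` block guarantees the link is returned even if the write fails, matching the text's requirement that links be recycled.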
In practical applications, the task router (connection selector) does not check a connection's state when taking an idle connection, because a faulty connection in the pool is a low-probability event. When a connection interruption occurs, execution of the task may fail; in that case, an attempt is made to repair the interruption when the connection resource is returned to the pool. In most cases, a database reconnection fails only if the network cannot be repaired or the database service has crashed. If reconnection does not succeed within the timeout period, repair is attempted again the next time the connection is taken out for use. Tasks that fail to execute are simply deleted and are not placed back in the task queue for re-execution, because data exchange with the same device is frequent and the latest data always overwrites the old data; it suffices to wait for the next data update.
Therefore, the embodiment stores the data to the data storage module by using the thread pool, the connection pool and the message queue, and can improve the data storage efficiency.
In this embodiment, message data sent by a plurality of node devices is received and buffered in turn into a first message queue in the form of data response events; an event handler component is assigned to each data response event in the first message queue; each event handler component processes its assigned data response event based on a preconfigured CIP object model to obtain target data; and the target data is stored in the data storage module. Each node device can thus communicate using the Ethernet/IP protocol, and the data response events are processed by CIP object models constructed under a unified rule, realizing data interconnection. In addition, this embodiment uses message queues to achieve multiplexing and highly concurrent data processing; it can respond to application-layer requests while collecting data; and by using the thread pool, connection pool, and message queue to store data to the data storage module, it can improve data storage efficiency.
In order to implement the above embodiments, the present disclosure also proposes a high concurrency data processing apparatus based on Ethernet/IP.
Fig. 5 is a schematic diagram of an Ethernet/IP based high concurrency data processing apparatus provided according to another embodiment of the present disclosure.
As shown in fig. 5, the Ethernet/IP based high concurrency data processing device 50 includes:
a first receiving module 501, configured to receive message data sent by a plurality of node devices, and sequentially buffer the message data to a first message queue in a form of a data response event;
an allocation module 502 for allocating one event handler component to each data response event in the first message queue;
a processing module 503, configured to process the distributed data response event by each event handler component based on a preconfigured CIP object model to obtain target data; and
a storage module 504 for storing the target data to the data storage module.
In some embodiments, the allocation module 502 is specifically configured to: acquire the identity information carried by each data response event, determine the target CIP object model corresponding to the identity information, and distribute the data response event to the event handler component configured with the target CIP object model.
In some embodiments, the apparatus further comprises: a second receiving module, configured to receive a control message sent by the application layer and determine a target event handler component matched with the control message; and a first buffering module, configured to buffer the control message to a second message queue configured by the target event handler component. The processing module is specifically configured to: in the event that the second message queue is not empty, have the event handler component interrupt processing of the data response event and respond to the control message buffered in the second message queue.
In some embodiments, the apparatus further comprises: a second buffering module, configured to buffer the target data corresponding to each data response event to a third message queue in sequence. The storage module is specifically configured to: call each thread in the thread pool to acquire target data of a data response event from the third message queue, allocate a database link whose state is idle from the connection pool, and set the database link to a locked state; and store the target data to the data storage module based on the database link.
In some embodiments, the storage module is further configured to: release the database link and set the database link to the idle state.
In some embodiments, the first receiving module is specifically configured to: load the node information of the node devices from which data is to be collected and the corresponding CIP object models; configure the node information and CIP object model of each node device in a plurality of event handler components; and initiate a connection request to each node device, receiving the data response events sent by each node device once the connection succeeds.
In some embodiments, the second receiving module is specifically configured to: receive a control request carrying a control message sent by the application layer, and, if the control request passes verification, determine the target event handler component matched with the control message and send a request acknowledgement message to the application layer. After responding to the control message buffered in the second message queue, the method further comprises: sending a control response message to the application layer and starting a third timer so that the application layer displays the execution result of the control message, and resending the control response message if the third timer expires; and receiving a response acknowledgement message from the application layer. The application layer starts a first timer when sending the control request; if the application layer receives a failed request acknowledgement message, control is judged to be terminated; if the application layer receives a successful request acknowledgement message, the first timer is stopped and a second timer is started until the control response message is received; and if the application layer receives no request acknowledgement message before the first timer expires, control is judged to be terminated.
In this embodiment, message data sent by a plurality of node devices is received and buffered in turn into a first message queue in the form of data response events; an event handler component is assigned to each data response event in the first message queue; each event handler component processes its assigned data response event based on a preconfigured CIP object model to obtain target data; and the target data is stored in the data storage module. Each node device can thus communicate using the Ethernet/IP protocol, and the data response events are processed by CIP object models constructed under a unified rule, realizing data interconnection. In addition, this embodiment uses message queues to achieve multiplexing and highly concurrent data processing.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
To achieve the above embodiments, the present disclosure also proposes a computer program product; when the instructions in the computer program product are executed by a processor, the Ethernet/IP-based high concurrency data processing method proposed in the foregoing embodiments of the present disclosure is performed.
Fig. 6 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure. The electronic device 12 shown in fig. 6 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device 12 is in the form of a general purpose computing device. Components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the industry standard architecture (Industry Standard Architecture; hereinafter: ISA) bus, the micro channel architecture (Micro Channel Architecture; hereinafter: MCA) bus, the enhanced ISA bus, the video electronics standards association (Video Electronics Standards Association; hereinafter: VESA) local bus, and the peripheral component interconnect (Peripheral Component Interconnect; hereinafter: PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (Random Access Memory; hereinafter: RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, commonly referred to as a "hard disk drive").
Although not shown in fig. 6, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a compact disk read only memory (Compact Disc Read Only Memory; hereinafter CD-ROM), digital versatile read only optical disk (Digital Video Disc Read Only Memory; hereinafter DVD-ROM), or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the various embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods in the embodiments described in this disclosure.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the electronic device 12, and/or any devices (e.g., network card, modem, etc.) that enable the electronic device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks, such as a local area network (Local Area Network; hereinafter: LAN), a wide area network (Wide Area Network; hereinafter: WAN) and/or a public network, such as the Internet, via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 over the bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications, such as implementing the Ethernet/IP-based high concurrency data processing method mentioned in the foregoing embodiment, by running a program stored in the system memory 28.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
It should be noted that in the description of the present disclosure, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present disclosure, unless otherwise indicated, the meaning of "a plurality" is two or more.
Any process or method description in a flow chart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present disclosure includes further implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art to which the embodiments of the present disclosure pertain.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
Furthermore, each functional unit in the embodiments of the present disclosure may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present disclosure have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the present disclosure, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the present disclosure.

Claims (10)

1. A high concurrency data processing method based on Ethernet/IP, which is applied to a data processing system, wherein the data processing system communicates with a plurality of node devices based on Ethernet/IP protocol, the method comprising:
receiving message data sent by the plurality of node devices, and sequentially caching the message data to a first message queue in the form of a data response event;
assigning an event handler component to each of said data response events in said first message queue;
each event handler component processes the assigned data response event based on a preconfigured CIP object model to obtain target data; and
storing the target data in a data storage module.
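The flow of claim 1 can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: the class names, the `EchoModel` stand-in for a real CIP object model, and the payload format are all assumptions introduced for the example.

```python
import queue
from dataclasses import dataclass


@dataclass
class DataResponseEvent:
    node_id: str    # identity of the sending node device
    payload: bytes  # raw Ethernet/IP message data


# First message queue: message data from node devices is cached
# sequentially as data response events (FIFO order).
first_message_queue = queue.Queue()


def receive_message(node_id, payload):
    """Wrap incoming message data as a data response event and enqueue it."""
    first_message_queue.put(DataResponseEvent(node_id, payload))


class EventHandlerComponent:
    """Processes assigned data response events with a preconfigured CIP object model."""

    def __init__(self, cip_model):
        self.cip_model = cip_model

    def process(self, event):
        # Parse the payload with the CIP object model to obtain target data.
        return self.cip_model.parse(event.payload)


class EchoModel:
    """Stand-in for a CIP object model; a real one would decode CIP attributes."""

    def parse(self, payload):
        return payload.decode()


receive_message("node-1", b"temperature=21.5")
handler = EventHandlerComponent(EchoModel())
event = first_message_queue.get()
target_data = handler.process(event)  # target data ready for the storage module
```

The queue decouples reception from processing, which is what allows many node devices to send concurrently while handler components drain events at their own pace.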
2. The method of claim 1, wherein the assigning of one event handler component to each of the data response events in the first message queue comprises:
acquiring identity information carried by each data response event, and determining a target CIP object model corresponding to the identity information; and
distributing the data response events to event handler components configured with the target CIP object model.
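The identity-based distribution of claim 2 amounts to a lookup from identity information to the handler configured with the matching target CIP object model. A minimal sketch, in which the identity strings and model names are invented for illustration:

```python
class EventHandlerComponent:
    """Handler preconfigured with one target CIP object model."""

    def __init__(self, cip_model_name):
        self.cip_model_name = cip_model_name


# Registry mapping identity information carried by a data response event
# (e.g. a device type or product code) to the handler configured with the
# corresponding target CIP object model.
handler_registry = {
    "sensor-type-A": EventHandlerComponent("AnalogInputObject"),
    "drive-type-B": EventHandlerComponent("MotorDataObject"),
}


def dispatch(identity_info):
    """Determine the target CIP object model for the event's identity
    information and return the matching event handler component."""
    return handler_registry[identity_info]


handler = dispatch("sensor-type-A")
```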
3. The method of claim 1, further comprising:
receiving a control message sent by an application layer, and determining a target event handler component matched with the control message;
caching the control message to a second message queue configured by the target event handler component;
and wherein the event handler component processing the assigned data response event based on the preconfigured CIP object model comprises:
in the event that the second message queue is not empty, the event handler component interrupts processing the data response event and responds to a control message buffered by the second message queue.
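The preemption rule in claim 3 — control messages in the second queue take priority over pending data response events — can be sketched as one scheduling step of a handler. The message contents and handler bodies here are illustrative assumptions:

```python
import queue

second_message_queue = queue.Queue()  # control messages from the application layer
first_message_queue = queue.Queue()   # data response events from node devices

processed = []  # records processing order, for demonstration only


def handle_control(msg):
    processed.append(("control", msg))


def handle_data(event):
    processed.append(("data", event))


def run_once():
    """One scheduling step of an event handler component: if the second
    message queue is not empty, the handler interrupts data processing and
    responds to the buffered control message first."""
    if not second_message_queue.empty():
        handle_control(second_message_queue.get())
    elif not first_message_queue.empty():
        handle_data(first_message_queue.get())


first_message_queue.put("evt-1")
second_message_queue.put("start-conveyor")
run_once()  # control message preempts the pending data event
run_once()  # then the buffered data response event is processed
```

Checking the control queue at step boundaries keeps latency for operator commands low without abandoning in-flight data events mid-parse.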
4. The method of claim 1, further comprising, after obtaining the target data:
sequentially caching the target data corresponding to each data response event in a third message queue;
and wherein the storing of the target data in the data storage module comprises:
calling each thread in a thread pool to acquire the target data of a data response event from the third message queue, allocate a database connection in an idle state from a connection pool, and set the database connection to a locked state; and
storing the target data in the data storage module based on the database connection.
5. The method of claim 4, further comprising, after storing the target data in the data storage module based on the database connection:
releasing the database connection and setting it to an idle state.
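Claims 4 and 5 combine a thread pool with a connection pool: a connection taken from the pool is implicitly locked (no other thread can obtain it) until it is released back to the idle pool. A minimal sketch under assumed names, with a list standing in for the actual database write:

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

# Connection pool of idle database connections; removing one from the queue
# models setting it to the locked state, putting it back models releasing
# it to the idle state.
connection_pool = queue.Queue()
for i in range(4):
    connection_pool.put(f"conn-{i}")

third_message_queue = queue.Queue()  # target data awaiting storage
stored = []
stored_lock = threading.Lock()


def store_worker():
    """One thread-pool task: take target data, lock an idle connection,
    write, then release the connection back to the idle pool."""
    target = third_message_queue.get()
    conn = connection_pool.get()        # allocate idle connection (now locked)
    try:
        with stored_lock:
            stored.append((conn, target))  # stand-in for the database write
    finally:
        connection_pool.put(conn)       # release: connection is idle again


for n in range(8):
    third_message_queue.put(f"target-{n}")

with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(8):
        pool.submit(store_worker)
```

Because `queue.Queue.get()` blocks when the pool is empty, threads simply wait for a connection to be released, which bounds the number of concurrent database writes to the pool size.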
6. The method of claim 3, wherein the receiving of the control message sent by the application layer and the determining of the target event handler component matched with the control message comprises:
receiving, from the application layer, a control request carrying a control message, and, in the case that the control request passes verification, determining the target event handler component matched with the control message and sending a request confirmation message to the application layer;
after responding to the control message cached in the second message queue, the method further comprises:
sending a control response message to the application layer and starting a third timer, so that the application layer displays the execution result of the control message, and retransmitting the control response message if the third timer times out; and
receiving a response acknowledgement message from the application layer;
wherein the application layer starts a first timer when sending the control request; if the application layer receives a failed request confirmation message, it determines that the control has terminated; if the application layer receives a successful request confirmation message, it terminates the first timer and starts a second timer until the control response message is received; and if the application layer has not received the request confirmation message by the time the first timer times out, it determines that the control has terminated.
7. An Ethernet/IP-based high concurrency data processing apparatus, applied to a data processing system that communicates with a plurality of node devices based on the Ethernet/IP protocol, the apparatus comprising:
a first receiving module configured to receive the message data sent by the plurality of node devices, and to sequentially cache the message data in a first message queue in the form of data response events;
an allocation module configured to allocate one event handler component to each of the data response events in the first message queue;
a processing module configured to cause each event handler component to process the assigned data response event based on a preconfigured CIP object model to obtain target data; and
a storage module configured to store the target data in a data storage module.
8. The apparatus of claim 7, wherein the allocation module is specifically configured to:
acquiring identity information carried by each data response event, and determining a target CIP object model corresponding to the identity information; and
distribute the data response events to event handler components configured with the target CIP object model.
9. The apparatus of claim 7, wherein the apparatus further comprises:
a second receiving module configured to receive the control message sent by the application layer and determine a target event handler component matched with the control message;
a first buffer module configured to cache the control message in a second message queue configured by the target event handler component;
and, the processing module is specifically configured to: in the event that the second message queue is not empty, the event handler component interrupts processing the data response event and responds to a control message buffered by the second message queue.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
CN202310701195.1A 2023-06-13 2023-06-13 High concurrency data processing method, device and storage medium based on Ethernet/IP Pending CN116708596A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310701195.1A CN116708596A (en) 2023-06-13 2023-06-13 High concurrency data processing method, device and storage medium based on Ethernet/IP

Publications (1)

Publication Number Publication Date
CN116708596A true CN116708596A (en) 2023-09-05

Family

ID=87832157

Country Status (1)

Country Link
CN (1) CN116708596A (en)

Similar Documents

Publication Publication Date Title
US11188400B2 (en) System, method and computer program product for sharing information in a distributed framework
US8140688B2 (en) Method and system for establishing connections between nodes in a communication network
US6353861B1 (en) Method and apparatus for treating a logical programming expression as an event in an event-driven computer environment
US20060133275A1 (en) Architecture and run-time environment for network filter drivers
US7266822B1 (en) System and method for controlling and managing computer farms
US10303529B2 (en) Protocol for communication of data structures
US8352619B2 (en) Method and system for data processing
WO2019019864A1 (en) Communication system, method and apparatus for embedded self-service terminal
US20040047361A1 (en) Method and system for TCP/IP using generic buffers for non-posting TCP applications
US7499987B2 (en) Deterministically electing an active node
EP2015190B1 (en) Technique of controlling communication of installed apparatus with outside by means of proxy server
CN115878301A (en) Acceleration framework, acceleration method and equipment for database network load performance
CN116708596A (en) High concurrency data processing method, device and storage medium based on Ethernet/IP
US20050188070A1 (en) Vertical perimeter framework for providing application services
CN114697334B (en) Method and device for executing scheduling task
KR101560879B1 (en) System for managing task and executing service of steel process middleware
CN111708568B (en) Modularized development decoupling method and terminal
JP2013186765A (en) Batch processing system, progress confirmation device, progress confirmation method and program
KR100900963B1 (en) Hardware device and method for sending the network protocol packet
CN115348333B (en) Data transmission method, system and equipment based on UDP double-end communication interaction
CN112448854B (en) Kubernetes complex network policy system and implementation method thereof
CN110750369B (en) Distributed node management method and system
CN116360940A (en) Task processing request response method, device and storage medium
CN117170777A (en) State transition method and related equipment
CN117749916A (en) Network card firmware upgrading method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination