CN113672411B - Method and device for realizing network equipment virtualization driving adaptation layer - Google Patents


Info

Publication number
CN113672411B
CN113672411B (granted from application CN202110983483.1A)
Authority
CN
China
Prior art keywords
message, data, service, driving, SDA
Legal status
Active (the legal status is an assumption and is not a legal conclusion)
Application number
CN202110983483.1A
Other languages
Chinese (zh)
Other versions
CN113672411A (application publication)
Inventor
罗超
张小虎
刘博文
杨合明
高腾飞
李炳根
Current Assignee
Fiberhome Telecommunication Technologies Co Ltd
Original Assignee
Fiberhome Telecommunication Technologies Co Ltd
Application filed by Fiberhome Telecommunication Technologies Co Ltd
Priority to CN202110983483.1A
Publication of application CN113672411A, granted as CN113672411B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06F: Electric digital data processing
    • G06F 9/544: Buffers; shared memory; pipes
    • G06F 9/546: Message passing systems or structures, e.g. queues
    • G06F 9/45558: Hypervisor-specific management and integration aspects
    • G06F 9/5077: Logical partitioning of resources; management or configuration of virtualized resources
    • G06F 2009/45587: Isolation or security of virtual machine instances
    • G06F 2209/5011: Pool
    • G06F 2209/548: Queue
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to the field of network communication and provides a method and a device for implementing a network device virtualization driver adaptation layer. The thread that directly operates the chip is split into an independent driving process and at least one service process; the service components are carried by the split service processes and interact with the driving process in the form of messages. The service process performs a data mapping operation between its service resources in the virtual device and the data table entries in the driving process. Because the driving thread that directly operates the chip is split into an independent process, each process can run independently and the driving process can be debugged independently.

Description

Method and device for realizing network equipment virtualization driving adaptation layer
[ field of technology ]
The present invention relates to the field of network communications technologies, and in particular, to a method and an apparatus for implementing a virtualized driver adaptation layer of a network device.
[ background Art ]
With the popularization of virtualization technology, network device virtualization is increasingly applied. It falls into two cases: one network device virtualized into multiple virtual devices, and multiple network devices virtualized into one virtual device. In the first case, since the chip driver can exist in only one process and cannot be accessed by multiple processes, how the multiple virtualized processes access the driving process that cannot be virtualized, and how their configuration is issued under that driving process, becomes an urgent problem to be solved.
[ invention ]
The technical problem to be solved by the invention is how multiple virtualized processes access a driving process that cannot be virtualized, how those processes issue configuration to the driver for execution, and how performance data is reported back to the virtual device processes.
The invention adopts the following technical scheme:
in a first aspect, the present invention provides a method for implementing a virtualized driver adaptation layer of a network device, where the method includes:
splitting the thread that directly operates the chip into an independent driving process and at least one service process, where the service components are carried by the split service processes and interact with the driving process in the form of messages;
the service process performs a data mapping operation between its service resources in the virtual device and the data table entries in the driving process;
wherein the service resources of each service process include one or more of a DHCM component, a UDM component, a GATHER component, and a PM component.
Preferably, when a service data object is forwarded to the driving process, the method includes:
converting the service data object into the corresponding data table entry of the driving process according to the data mapping, and passing the entry to the upper SDA in the service process;
the upper SDA of the service process performs a validity judgment on the data entry; an entry that passes the judgment is transmitted, through message interaction, from the upper SDA of the service process to the lower SDA of the driving process, so that the lower SDA calls the driver interface to complete the corresponding data operation on the chip.
Preferably, the driving process and the service process exchange messages through shared memory; the implementation specifically includes the following steps:
the sending side of the message modifies the semaphore of the receiving side to notify the receiving side to perform a read operation;
the receiving side checks whether its semaphore has been modified and, if so, reads its message linked list to obtain the corresponding data content;
the sending side and the receiving side of the message are, respectively, the driving process and a service process, or a service process and the driving process.
Preferably, on the driving-process side, the driving process opens corresponding message receiving threads for one or more service processes, where the number of service processes and the number of message receiving threads are in a one-to-one or many-to-one relationship, and the method further comprises:
the driving process sorts the messages acquired from each message receiving thread by preset type and puts them into a message queue;
the message receiving thread reads and parses messages from the head of the message queue and, after parsing, passes them to the lower SDA.
Preferably, for the GATHER module in the service process, the method further comprises:
the driving process manages data table entries according to the identification value of the drive data plane object (DDPO) in each service process, and the data entry corresponding to each DDPO identification value stores the alarm and performance data required by that service process;
through message interaction between the lower SDA and the upper SDA of the service process, the driving process transmits the alarm and performance data stored in each DDPO's data entry to the upper SDA of the corresponding service process;
the GATHER module of each service process periodically collects the alarm and performance data from its upper SDA.
Preferably, the driving process manages data entries according to the DDPO identification value of each service process, specifically including:
the driving process stores different data entries according to the configuration issued by the platform, each data entry corresponding to an independent performance acquisition sub-module;
each acquisition unit in a performance acquisition sub-module corresponds to one performance acquisition; the acquisition units access the bottom-layer interface in a single-threaded manner, and the acquisition object is determined and stored according to the enabling setting of the performance acquisition.
Preferably, the lower SDA in the driving process creates a timed acquisition task that polls and reads each performance acquisition sub-module, assembles the corresponding data into messages added to the message queue, and pushes them to the upper SDA in the service process for subsequent processing.
Preferably, each virtual device supports configuring a routing table with the same virtual routing forwarding domain (VRF), and the lower SDA realizes the forwarding mapping from the virtual device ID and virtual VRF in the service process to the chip routing forwarding domain in the driving process;
each virtual device is distinguished by its virtual VRF, and when service data is forwarded, external forwarding of the service data is completed after each virtual device looks up its own chip routing table entries.
Preferably, in order that service data is forwarded only within the same virtual device and different virtual devices are isolated from each other, for L2VPN services the module resources to be managed include one or more of AC, VC, TUNNEL, and LSP, and the method further comprises:
module resource ID management for ACs and LSPs is realized through the slot position and the virtual device ID number;
module resource ID management for VCs and TUNNELs is realized through the virtual device ID number;
this achieves independent allocation of module resources and uniqueness of module resource IDs, so that virtual devices with different ID numbers are isolated from each other.
In a second aspect, the present invention further provides an implementation apparatus of a network device virtualization driver adaptation layer, configured to implement the method for implementing the network device virtualization driver adaptation layer in the first aspect, where the apparatus includes:
at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, where the instructions are executable by the at least one processor to perform the method for implementing the network device virtualization driver adaptation layer of the first aspect.
In a third aspect, the present invention further provides a non-volatile computer storage medium, where computer executable instructions are stored, where the computer executable instructions are executed by one or more processors to implement a method for implementing the network device virtualization driver adaptation layer according to the first aspect.
The invention splits the driving thread that directly operates the chip into an independent process, so that each process can run independently and the driving process can be debugged independently. Board initialization no longer depends on the virtual devices, and because of the split the driving process initializes faster: unlike the prior art, the service driver of each virtual device no longer has to repeatedly start the driving process, saving about 2 minutes of single-disk initialization time.
[ description of the drawings ]
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the drawings that are required to be used in the embodiments of the present invention will be briefly described below. It is evident that the drawings described below are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a split process architecture diagram provided by an embodiment of the present invention;
fig. 2 is a schematic flow chart of an implementation method of a virtualized driving adaptation layer of a network device according to an embodiment of the present invention;
FIG. 3 is a diagram of an original process structure on a service board according to an embodiment of the present invention;
FIG. 4 is a split process architecture diagram according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a shared memory communication flow according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a POST message triggering process according to an embodiment of the present invention;
FIG. 7 is a block diagram of the UDM component before splitting according to an embodiment of the present invention;
FIG. 8 is a block diagram of the UDM component after splitting according to an embodiment of the present invention;
FIG. 9 is a block diagram of the GATHER component after splitting according to an embodiment of the present invention;
FIG. 10 is a configuration diagram under a single thread of driving provided by an embodiment of the present invention;
fig. 11 is a three-layer service configuration issue diagram provided in an embodiment of the present invention;
FIG. 12 is an isolated view of an AC module provided by an embodiment of the invention;
fig. 13 is an isolation diagram of a VC module according to an embodiment of the present invention;
fig. 14 is a schematic diagram of an implementation apparatus of a virtualized driving adaptation layer of a network device according to an embodiment of the present invention.
[ detailed description ] of the invention
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the embodiments of the present invention, the relationships among the main concepts are as follows: the driver and the various platforms run on the board card; each service process corresponds to a platform, and the driving process is an independent process that provides hardware driver services for all the service processes.
In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
Example 1:
Embodiment 1 of the invention provides a method for implementing a network device virtualization driver adaptation layer. As shown in fig. 1, the thread that directly operates the chip is split into an independent driving process and at least one service process; the service components are carried by the split service processes and interact with the driving process in the form of messages.
In the embodiment of the invention, the operation threads of the chip consist of a driving process and one or more service processes serving the virtual devices, where the service processes and the driving process exchange data through messages. The driving process runs the driver interface that directly operates the chip, while each service process realizes the interaction of the corresponding virtual-device-side components. The service process performs a data mapping operation between its service resources in the virtual device and the data table entries in the driving process, where the service resources of each service process include one or more of a DHCM component, a UDM component, a GATHER component, and a PM component.
The invention splits the driving thread that directly operates the chip into an independent process, so that each process can run independently and the driving process can be debugged independently. Board initialization no longer depends on the virtual devices, and because of the split the driving process initializes faster: unlike the prior art, the service driver of each virtual device no longer has to repeatedly start the driving process, saving about 2 minutes of single-disk initialization time.
In the embodiment of the present invention there is a preferred implementation for forwarding a service data object to the driving process; the method, shown in fig. 2, includes:
In step 201, the service data object is converted into the corresponding data entry of the driving process according to the data mapping, and the entry is passed to the upper SDA in the service process;
In step 202, the upper SDA of the service process performs a validity judgment on the data entry; an entry that passes the judgment is transmitted, through message interaction, from the upper SDA of the service process to the lower SDA of the driving process, so that the lower SDA invokes the driver interface and completes the operation on the corresponding data on the chip.
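The two steps above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: all names (`DataEntry`, `UpperSDA`, `LowerSDA`, the dictionary fields) and the validity rule are assumptions, and the chip is stood in for by an in-memory table.

```python
# Sketch of the configuration-issuing path: service data object -> data
# mapping -> upper SDA validity judgment -> message to lower SDA -> "chip".
from dataclasses import dataclass

@dataclass
class DataEntry:
    table: str    # which driving-process data table the entry targets
    key: int
    value: dict

def map_service_object(obj: dict) -> DataEntry:
    """Data mapping: convert a service data object into a data entry."""
    return DataEntry(table=obj["table"], key=obj["id"], value=obj["fields"])

class LowerSDA:
    """Runs inside the driving process: calls the driver interface."""
    def __init__(self):
        self.chip = {}   # stand-in for the chip's data tables

    def on_message(self, entry: DataEntry):
        self.chip.setdefault(entry.table, {})[entry.key] = entry.value

class UpperSDA:
    """Runs inside the service process: validates and forwards entries."""
    def __init__(self, lower_sda: LowerSDA):
        self.lower_sda = lower_sda

    def issue(self, obj: dict) -> bool:
        entry = map_service_object(obj)
        if entry.key < 0 or not entry.value:   # validity judgment (assumed rule)
            return False
        self.lower_sda.on_message(entry)       # message interaction
        return True
```

A valid object ends up in the chip table, while one failing the validity judgment is dropped before any message is sent.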
In the embodiment of the present invention there is also a preferred implementation that improves the performance and efficiency of data interaction between the driving process and the service processes. Specifically, the driving process and the service process exchange messages through shared memory, and the implementation further includes:
the sending side of the message modifies the semaphore of the receiving side to notify the receiving side to perform a read operation;
the receiving side checks whether its semaphore has been modified and, if so, reads its message linked list to obtain the corresponding data content; the sending side and the receiving side are, respectively, the driving process and a service process, or a service process and the driving process.
In a specific implementation, immediately after data is read from the shared memory, a block of dispatch-message memory is applied for, the message is hung on the message linked list of the receiving component, and the shared memory that previously stored the data is released, improving the utilization of the shared memory.
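The notify-then-read cycle just described can be modeled in a few lines. This single-process sketch is an assumption-laden illustration: a `deque` stands in for the shared-memory message linked list, a threading semaphore for the inter-process semaphore, and `bytes(...)` for the copy into dispatch-message memory.

```python
# Illustrative model of the shared-memory channel: the sender appends a
# buffer and raises the receiver's semaphore; the receiver copies the data
# out and releases the shared buffer early.
import threading
from collections import deque

class SharedMemChannel:
    def __init__(self):
        self.sem = threading.Semaphore(0)   # receiving side's semaphore
        self.msg_list = deque()             # message linked list in "shared memory"

    def send(self, data: bytes):
        self.msg_list.append(bytearray(data))  # write into the shared buffer
        self.sem.release()                     # modify the receiver's semaphore

    def receive(self) -> bytes:
        self.sem.acquire()                     # wait for the semaphore change
        shared_buf = self.msg_list.popleft()
        dispatch_copy = bytes(shared_buf)      # copy into dispatch-message memory
        shared_buf.clear()                     # release the shared buffer at once
        return dispatch_copy
```

In a real system the semaphore and the list would live in named shared memory visible to both processes; the early release is what lets the shared buffer be reused while the receiver still processes its private copy.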
On the driving-process side, the semaphore judgment is embodied as follows: the driving process opens corresponding message receiving threads for one or more service processes, where the number of service processes and the number of message receiving threads are in a one-to-one or many-to-one relationship, and the method further comprises:
the driving process sorts the messages acquired from each message receiving thread by preset type and puts them into a message queue;
the message receiving thread reads and parses messages from the head of the message queue and, after parsing, passes them to the lower SDA.
The message interaction between the driving process and the service process through the message linked list can use the channel POST message: when shared memory is released, the POST channel queue is traversed, and the address of the shared memory being released (i.e. the address where the data to be sent resides) is sent directly, via a POST feedback message, to the sending side that initiated the POST request.
In a specific implementation, particularly when the POST message is used to transfer shared-memory data, the service can send directly using the shared memory; if it does not send, the service must release the applied shared memory itself, otherwise the shared-memory space remains occupied. Therefore, in combination with the embodiment of the invention, a sending-timeout judgment can also be set: if the data in the applied shared memory times out without being sent, so that the memory cannot be released through the sending procedure, the release of the shared memory must be executed separately.
In a specific implementation, the driving process classifies messages, from low to high priority, into timing messages, common messages, and emergency messages; the messages received from each service process are ordered by priority and then handled by the message processing thread.
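The priority ordering above can be sketched with a heap. The three class names come from the text; the numeric priority values and the FIFO tie-breaker are illustrative choices.

```python
# Sketch of the driving process's priority message queue:
# emergency > common > timing, FIFO within one priority.
import heapq
import itertools

PRIORITY = {"emergency": 0, "common": 1, "timing": 2}  # lower = served first

class DriverMsgQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker keeps FIFO order

    def put(self, kind: str, payload):
        heapq.heappush(self._heap, (PRIORITY[kind], next(self._seq), payload))

    def get(self):
        return heapq.heappop(self._heap)[2]   # payload of highest-priority msg
```

An emergency message enqueued last is still dequeued first, which is the point of sorting before handing messages to the processing thread.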
The driving process manages data table entries according to the identification values of the drive data plane objects (DDPO, Drive Data Plane Object; e.g. DDPO1 and DDPO2 are such identification values) in each service process, and the data entry corresponding to each DDPO identification value stores the alarm and performance data required by that service process;
through message interaction between the lower SDA and the upper SDA of the service process, the driving process transmits the alarm and performance data stored in each DDPO's data entry to the upper SDA of the corresponding service process;
the GATHER module of each service process periodically collects the alarm and performance data from its upper SDA.
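The DDPO-keyed flow above reduces to a table push plus a periodic read. The sketch below is an assumed shape: the lower-SDA-to-upper-SDA message interaction is modeled as a dictionary copy, and all names are illustrative.

```python
# Sketch of DDPO-keyed entry management: one data entry per DDPO
# identification value, pushed to the owning service process's upper SDA.
class DrivingProcessTables:
    def __init__(self):
        self.entries = {}   # DDPO id -> alarm/performance data

    def update(self, ddpo_id: str, alarm_perf: dict):
        self.entries.setdefault(ddpo_id, {}).update(alarm_perf)

    def push_to_upper_sda(self, ddpo_id: str, upper_sda_cache: dict):
        # lower SDA -> upper SDA message interaction, modeled as a copy
        upper_sda_cache[ddpo_id] = dict(self.entries.get(ddpo_id, {}))

def gather_collect(upper_sda_cache: dict, ddpo_id: str) -> dict:
    """What the GATHER module does periodically: read from its upper SDA."""
    return upper_sda_cache.get(ddpo_id, {})
```

Because each service process only ever reads the entry for its own DDPO ids, the per-DDPO keying is what keeps one virtual device's alarm data out of another's.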
The driving process manages data entries according to the DDPO identification value of each service process, specifically including:
the driving process stores different data entries according to the configuration issued by the virtual devices, each data entry corresponding to an independent performance acquisition sub-module;
each acquisition unit in a performance acquisition sub-module corresponds to one performance acquisition; the acquisition units access the bottom-layer interface in a single-threaded manner, determine the acquisition object according to the enabling setting of the performance acquisition, and store the performance acquisition objects of the device that correspond to the performance and alarm display items of the network manager/controller, specifically interface/sub-interface performance, L2/L3 service performance, alarms, and so on. Preferably, the data is stored in either a read-clear mode or a non-clearing mode: in the read-clear mode, the count is cleared after each collection and the difference from the last collection is returned; in the non-clearing mode, the count is not cleared and the cumulative total is returned.
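The two storage modes can be captured in one small counter class. This is a sketch under the assumption that a collection is a single read of an integer count; the class and parameter names are illustrative.

```python
# Sketch of the two counter modes: read-clear (delta since last collection)
# vs. non-clearing (cumulative total).
class PerfCounter:
    def __init__(self, read_clear: bool):
        self.read_clear = read_clear
        self.total = 0

    def add(self, n: int):
        self.total += n          # events accumulated between collections

    def collect(self) -> int:
        value = self.total
        if self.read_clear:
            self.total = 0       # read-clear mode: reset after collection
        return value             # non-clearing mode: total keeps growing
```

The read-clear mode suits rate-style displays (errors since last poll), while the non-clearing mode suits lifetime totals.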
The lower SDA in the driving process creates a timed acquisition task that polls and reads each performance acquisition sub-module, assembles the corresponding data into messages added to the message queue, and pushes them to the upper SDA in the service process for subsequent processing. Because the performance acquisition soft table is accessed by multiple lower SDAs, i.e. in a multi-threaded manner, each lower SDA takes a lock when accessing it.
Each virtual device supports configuring a routing table with the same virtual routing forwarding domain (VRF), and the lower SDA realizes the forwarding mapping from the virtual device ID and virtual VRF in the service process to the chip routing forwarding domain in the driving process. Each virtual device is distinguished by its virtual VRF, and when service data is forwarded, external forwarding of the service data is completed after each virtual device looks up its own chip routing table entries.
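The mapping realized in the lower SDA is essentially a dictionary keyed by the (virtual device ID, VRF) pair. The sequential numbering of chip forwarding domains below is an assumed allocation policy for illustration; the patent only requires that the mapping exist.

```python
# Sketch of the lower SDA's forwarding-domain mapping:
# (virtual device ID, virtual VRF) -> chip routing forwarding domain.
import itertools

class VrfMapper:
    def __init__(self):
        self.mapping = {}
        self._next = itertools.count(1)   # chip forwarding-domain numbers

    def chip_domain(self, vdev_id: str, vrf: str) -> int:
        key = (vdev_id, vrf)
        if key not in self.mapping:
            self.mapping[key] = next(self._next)   # allocate on first use
        return self.mapping[key]
```

Keying on the pair, not on the VRF name alone, is what lets two virtual devices use the same VRF name yet land in different chip forwarding domains.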
For a distributed device, different ports of the same chip (also described as a single disk) may belong to one or more virtual devices, and ports of different chips may belong to the same virtual device. To ensure that service data is forwarded only within the same virtual device and that different virtual devices are isolated from each other, for L2VPN services the module resources to be managed include one or more of AC, VC, TUNNEL, and LSP, and the method further comprises:
module resource ID management for ACs and LSPs is realized through the slot position and the virtual device ID (e.g. a customized character string);
module resource ID management for VCs and TUNNELs is realized through the virtual device ID number;
this achieves independent allocation of module resources and uniqueness of module resource IDs (covering AC, LSP, VC, TUNNEL, etc.), so that virtual devices with different ID numbers are isolated from each other.
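The ID scheme can be illustrated by packing the distinguishing fields into the resource ID. The bit layout below is purely an assumption for illustration; the text only says which fields participate for which resource type.

```python
# Sketch of the module-resource ID scheme: AC/LSP IDs derive from
# (slot, virtual device ID); VC/TUNNEL IDs from the virtual device ID alone.
def ac_lsp_id(slot: int, vdev_id: int, index: int) -> int:
    # pack slot and virtual-device ID into the high bits (assumed layout)
    return (slot << 24) | (vdev_id << 16) | index

def vc_tunnel_id(vdev_id: int, index: int) -> int:
    return (vdev_id << 16) | index

# IDs allocated under different virtual devices can never collide,
# which is what gives the mutual isolation:
ids = {ac_lsp_id(s, v, i) for s in (1, 2) for v in (1, 2) for i in range(4)}
```

Since the virtual device ID occupies its own bit field, two virtual devices can each allocate index 0 without conflict, so each device's resource pool is independent.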
Example 2:
Compared with embodiment 1, this embodiment exemplarily describes, one by one and with example scenarios, the core elements constituting the technical solution of embodiment 1.
As shown in fig. 3, take the existing FOSV5 platform architecture (an example of a virtual device) as an example. Under this architecture the driver runs as a thread in the single disk (expressed as a chip in embodiment 1; in practice it is more often described as a single disk). As shown in fig. 3, a single disk generally contains the device driver process Driver, device hardware control management DHCM (Device Hardware Control Management), user data plane management UDM (User Data-plane Management), the alarm and performance collection and reporting module GATHER, and performance management PM Client (Performance Management). Access to the chip is handled through the function calls provided by the driver. After the device is virtualized, multiple Board main processes exist, and the driver currently cannot serve calls from multiple processes. Therefore, following the method of embodiment 1, the Board main process is split: the thread that directly operates the chip becomes one driving process, while the other service components remain in the Board main process as another process (i.e. the service process of embodiment 1), and the components interact with the driving process in the form of messages (e.g. the shared-memory mechanism set forth in embodiment 1), as shown in fig. 4.
The two-way message communication between the virtual-device-side components (the DHCM component, UDM component, etc.) and the driving process takes the form of a proxy process: a separate proxy process is started on the board, and the proxy component runs inside it, independently of the other components.
As shown in fig. 5, the sending process PROC1 notifies the receiving process PROC2 to perform a message reading operation by setting the semaphore of the receiving process. After the read thread of the receiving process PROC2 is triggered, it determines from the BITMAP whether the message is a POST message or a data message, and queries the corresponding linked list accordingly.
For data messages, the receiving process PROC2 traverses the message linked lists of its receive channels and, for each channel whose received-data flag bit is set, reads that channel's message linked list. When the data is read from the shared memory, a block of dispatch message memory is applied for immediately, the message is hung on the message dispatch queue of the receiving component, and the shared memory is released.
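The semaphore-and-linked-list flow above can be sketched as follows (an illustrative model only: the class name ShmChannel and the use of Python threading primitives are assumptions; a real implementation would use OS shared memory and semaphores shared across processes):

```python
import threading
from collections import deque

class ShmChannel:
    """Models one shared-memory message channel between PROC1 and PROC2."""
    def __init__(self):
        self.sem = threading.Semaphore(0)   # the receiver's semaphore
        self.lock = threading.Lock()
        self.msg_list = deque()             # the channel's message linked list
        self.data_flag = False              # received-data flag bit

    def send(self, msg):
        # sender writes into shared memory, sets the flag, then signals PROC2
        with self.lock:
            self.msg_list.append(msg)
            self.data_flag = True
        self.sem.release()

    def receive_once(self, dispatch_queue):
        # receiver's read thread: wake on the semaphore, read the flagged
        # channel, copy each message into private dispatch memory, then
        # free the shared slot
        self.sem.acquire()
        with self.lock:
            if self.data_flag:
                while self.msg_list:
                    shared = self.msg_list.popleft()    # read from shared memory
                    dispatch_queue.append(dict(shared)) # copy into dispatch memory
                self.data_flag = False                  # shared memory released

ch = ShmChannel()
dispatch = deque()
ch.send({"code": 1, "body": "cfg"})
ch.receive_once(dispatch)
```

The copy into `dispatch_queue` mirrors the step where a block of dispatch message memory is applied for so the shared memory can be released immediately.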
In addition to the message linked list mode, shared-memory message communication provides a POST function. When the sender fails to apply for shared memory, it can pre-apply for a block of POST memory by issuing a POST request. Note that if no POST request is issued, the sender is not notified when shared memory becomes available. The POST request is based on a message channel and requires that the size of the POST message be defined. The POST message triggering flow, shown in figs. 6 and 7, is as follows:
POST memory can only be reserved memory. When any channel triggers a release operation on reserved memory of the corresponding size, the POST channel queue corresponding to that reserved block is queried and the POST flow described above is triggered: the address of the released shared memory is sent, directly via a POST message, to the component that initiated the POST request. In the POST message callback, the service can use the shared memory directly for sending; if it does not, the memory must be released. Shared memory obtained through a shared-memory communication request need not be released by the service if a send is invoked; if no send is invoked, the service must release it itself, otherwise the shared memory space remains occupied indefinitely.
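The POST request mechanism can be modeled as a small sketch (illustrative assumptions: the ShmPool class, its method names, and the single reserved block are invented for this example):

```python
from collections import defaultdict, deque

class ShmPool:
    """Sketch of the POST mechanism: a sender whose allocation fails registers
    a POST request; when a reserved block of that size is freed, the pool
    delivers the freed address to the requester via its callback."""
    def __init__(self, reserved_blocks):
        self.free_blocks = defaultdict(deque)   # size -> free reserved blocks
        for size, addr in reserved_blocks:
            self.free_blocks[size].append(addr)
        self.post_queues = defaultdict(deque)   # size -> waiting POST requests

    def alloc(self, size):
        if self.free_blocks[size]:
            return self.free_blocks[size].popleft()
        return None                              # allocation failed

    def post_req(self, size, callback):
        # pre-apply for a POST block; without this request, no availability
        # notification is ever sent to the component
        self.post_queues[size].append(callback)

    def free(self, size, addr):
        # a release on a reserved block first serves the POST channel queue
        if self.post_queues[size]:
            cb = self.post_queues[size].popleft()
            cb(addr)                             # POST message carries the address
        else:
            self.free_blocks[size].append(addr)

pool = ShmPool([(64, 0x1000)])
got = []
first = pool.alloc(64)           # succeeds
assert pool.alloc(64) is None    # pool exhausted, sender must POST
pool.post_req(64, got.append)    # register the POST request
pool.free(64, first)             # release triggers the POST callback
```

In the callback the component would either send using the delivered block or release it again, matching the rule in the paragraph above.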
In the implementation of the embodiment of the invention, the specific modification of the UDM component is as follows: the UDM component is responsible for configuration entry delivery. The structure of the UDM component before modification is shown in fig. 7, and the flow before modification is as follows:
after the DDPO finishes data mapping, the configuration data from the FDPO is transferred to the SDA through a function call; the SDA applies judgment, conversion, and similar logic to the data and then passes it to the driver through the FHDRV interface.
After the splitting, the UDM component structure is as shown in fig. 8. The adaptation layer UDM_SDA is split into an upper UDM_SDA and a lower UDM_SDA: the upper UDM_SDA mainly performs validity checks on the configuration data and sends the data to the lower UDM_SDA in message form. The driving process, through different message receiving threads, sorts the messages into different priorities and places them into a message queue. An independent message receiving thread reads and parses messages from the head of the message queue and, after parsing, calls the lower UDM_SDA for processing; the driving process also contains a driving service module corresponding to the UDM component in the service process, which completes the service function operations. The lower UDM_SDA applies judgment, conversion, and similar logic, then passes the data to the driver by calling the FHDRV interface. The FDPO is the service data to be configured into the driving process through the DDPO. The DDPO1 instances in UDM1 and UDM2 are not drawn differently in fig. 8; they should be understood as module components with the same function, differing only in which UDM they serve. In actual implementations, different UDMs are usually identified by the service process ID (or virtual device ID) to which they belong, which can be understood as one of the most direct and effective means of distinguishing UDMs in different service processes.
After the DDPO finishes data mapping, the data is transferred to the upper SDA, which performs validity checks according to the boundary values, data dependency relationships, and the like designed for the driving function; for example, a value exceeding a boundary returns an error, and data whose dependency conditions are not satisfied returns an error. If the check fails, the call returns directly; if the check passes, the data is transmitted to the driving process through a message, and the driving process does not need to return a result to the DDPO after processing is complete.
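A minimal sketch of the upper SDA's validity judgment, using invented boundary values and a parent-interface dependency as examples (the field names and limits are assumptions, not values from the patent):

```python
# assumed boundary values designed for the driving function
BOUNDS = {"vlan_id": (1, 4094), "mtu": (64, 9600)}

def upper_sda_check(entry, configured):
    """Upper-SDA validity judgment: boundary values plus data dependencies.
    Returns an error string, or None if the entry may be sent onward."""
    for field, (lo, hi) in BOUNDS.items():
        if field in entry and not (lo <= entry[field] <= hi):
            return f"{field} out of bounds"        # boundary violation: return error
    # dependency check: a sub-interface entry depends on its parent interface
    if entry.get("parent") and entry["parent"] not in configured:
        return "dependency not satisfied"
    return None

def upper_sda_send(entry, configured, driver_queue):
    err = upper_sda_check(entry, configured)
    if err:
        return err                                 # rejected, returned to the caller
    driver_queue.append(entry)                     # fire-and-forget message: the
    return None                                    # driving process replies nothing

q = []
assert upper_sda_send({"vlan_id": 5000}, set(), q) == "vlan_id out of bounds"
assert upper_sda_send({"vlan_id": 10, "parent": "ge0"}, {"ge0"}, q) is None
```

The fire-and-forget append mirrors the statement that the driving process need not return a result to the DDPO after processing.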
In the embodiment of the invention, the modified driving process is specifically described as follows:
splitting the adaptation layer SDA into an upper SDA and a lower SDA;
the upper SDA and the lower SDA cooperate to complete the DDPO data adaptation processing before the data is issued to the chip. The upper SDA mainly performs validity checks on the configuration data and sends the data to the lower SDA in message form. The lower SDA receives the messages from the upper SDA, caches them, and calls the corresponding message processing function to complete processing on the chip.
The driving process, through different message receiving threads, sorts messages into different priorities and places them into a message queue. A message receiving thread receives a message and hands it to the message processing thread, which orders the received messages. The driver receives messages through multiple threads and divides them into normal messages and urgent messages. Ordinary configuration uses normal messages, which have no concept of priority, i.e., first in, first out. Performance-critical operations, for example protection switching, mostly adopt an urgent message mechanism, with urgent messages inserted at the head of the queue. The MM (full name: Message Management) message parsing module is responsible for receiving virtual device messages, placing them into FIFOs according to priority, parsing them with a single message processing thread, and calling the write-hardware interface for processing.
An independent message receiving thread reads a message from the head of the message queue and parses it; after parsing, it calls the lower SDA for processing. The lower SDA applies judgment, conversion, and similar logic, then calls the driving interface to write the data to the chip.
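The normal/urgent queueing described above can be sketched as follows (the class MMQueue and its method names are assumptions for illustration):

```python
from collections import deque

class MMQueue:
    """Sketch of the MM module's two-priority queue: normal messages are FIFO,
    urgent messages (e.g. protection switching) are inserted at the head."""
    def __init__(self):
        self.q = deque()

    def put(self, msg, urgent=False):
        if urgent:
            self.q.appendleft(msg)   # urgent message jumps to the queue head
        else:
            self.q.append(msg)       # ordinary configuration: first in, first out

    def process_all(self, lower_sda):
        # single message processing thread: parse from the head, call lower SDA
        handled = []
        while self.q:
            handled.append(lower_sda(self.q.popleft()))
        return handled

mm = MMQueue()
mm.put("cfg-A")
mm.put("cfg-B")
mm.put("protect-switch", urgent=True)
order = mm.process_all(lambda m: m)
```

Draining with a single thread reflects the single message processing thread that parses messages and calls the write-hardware interface.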
In the embodiment of the invention, the specific modification of the GATHER component is as follows:
the GATHER component is responsible for reporting performance alarms. The existing GATHER component starts a timer on the virtual device and polls the driver interface to read performance data; after modification, the timer is started by the driver, and once all performance readings are complete, the results are actively pushed to the GATHER component through a message.
The modified structure of the GATHER component is shown in FIG. 9, and the specific implementation method is as follows:
the driving process reports alarm performance to the GATHER component at regular intervals. The upper GATH_SDA module (e.g., the upper GATH_SDA1 or upper GATH_SDA2 in fig. 9) is specifically configured to perform caching. The driving process performs data entry management based on the DDPO key (a specific expression of the identification value of the DDPO in embodiment 1, i.e., the key value used when DDPO data is stored and accessed), and the alarm performance data is stored in those entries. The GATHER module in each Board main periodically collects from the upper GATH_SDA module. In a specific implementation, depending on how alarm data are collected and classified, the modules may be divided, as shown in fig. 9, into two kinds, upper GATH_SDA1 and upper GATH_SDA2, with corresponding lower GATH_SDA1 and lower GATH_SDA2. The driving process also contains, corresponding to the GATHER module in the service process, a driving alarm performance module that completes the alarm service functions together with the GATHER module in the service process. Taken together with fig. 8, it can be seen that the driving service module and the driving alarm performance module follow the architectures shown in figs. 8 and 9, each with its own internal MM message parsing module.
The driving process stores different performance acquisition tables according to the configuration issued by the virtual device; each table corresponds to an independent performance acquisition sub-module, which mainly implements:
1. collecting performance statistics from the bottom layer at regular intervals;
2. managing alarm performance objects;
3. processing alarm performance data.
Each acquisition sub-module corresponds to one kind of performance and accesses the bottom-layer interface in single-threaded mode. Which objects are acquired is determined by the performance acquisition switch (objects are selected by enabling or disabling performance acquisition); the performance alarms displayed by the network manager/controller correspond to the device's performance acquisition objects, specifically interface/sub-interface performance, L2/L3 service performance, alarms, and so on. Data are stored in a performance acquisition soft table in non-clear-on-read mode (clear-on-read means that after each acquisition the count is cleared and the difference from the previous acquisition is returned; non-clear-on-read means the count is not cleared and the running total is returned).
The lower GATH_SDA creates a timed acquisition task that polls and reads each performance acquisition object, assembles the corresponding data into messages added to a message queue, and pushes them to the GATHER components of the different Board mains, where the upper GATH_SDA performs subsequent processing. Access to the performance acquisition soft table is multithreaded, i.e., the lower GATH_SDA and multiple upper GATH_SDAs may access it, so locking is required: the upper GATH_SDA takes a lock when accessing the performance acquisition soft table.
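A minimal sketch of a non-clear-on-read soft table with locked access (the class and field names below are assumptions for illustration):

```python
import threading

class PerfSoftTable:
    """Sketch of a performance acquisition soft table: counters are stored in
    non-clear-on-read mode (running totals), and access is locked because the
    lower GATH_SDA and multiple upper GATH_SDAs may touch it concurrently."""
    def __init__(self):
        self.lock = threading.Lock()
        self.totals = {}             # acquisition object -> running total
        self.enabled = set()         # the performance acquisition switch

    def enable(self, obj):
        self.enabled.add(obj)

    def add_sample(self, obj, count):
        if obj in self.enabled:      # only enabled objects are collected
            with self.lock:
                self.totals[obj] = self.totals.get(obj, 0) + count

    def poll(self):
        # timed acquisition task: read totals without clearing them
        with self.lock:
            return dict(self.totals)

tbl = PerfSoftTable()
tbl.enable("ge0/1")
tbl.add_sample("ge0/1", 100)
tbl.add_sample("ge0/1", 50)
tbl.add_sample("ge0/2", 7)   # acquisition not enabled, so this is ignored
first = tbl.poll()
second = tbl.poll()          # totals persist between polls (non-clear-on-read)
```

A clear-on-read variant would instead reset each counter inside `poll()` and return the delta since the previous acquisition.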
Before the driver was separated into an independent process, configuration was delivered through function interfaces that the driver provided to multiple calling processes. The driver added mutex locks inside these externally provided functions, so every function had to acquire and release the lock, and in multithreaded mode the waiting caused by lock acquisition also triggered thread switching, which increased configuration processing time. Therefore, after the driver is separated into an independent process, the configuration delivery mode is changed to single-threaded delivery (as shown in fig. 10: the chip driver FHDRV is a driver interface named with the initials of FiberHome, developed independently on top of the chip driver; MM is short for the MM message parsing module; the platform software is the direct representation of the virtual device in embodiment 1; and the SDA adaptation is the lower SDA of the embodiments, also described as the lower UDM_SDA, lower GATH_SDA, etc. in different scenarios). The specific implementation is as follows:
the virtual device software and the driving software are released as separate executable files, but the communication format and content between them must match; they are therefore released together as a unified software package, which ensures that the software versions match. The virtual device and the driver interact through messages, including configuration delivery, configuration retrieval, performance alarm reporting, event reporting, and so on; each message has a specific message code and message content, and a unified message header format.
The driver receives messages through multiple threads and divides them into normal messages and urgent messages. Ordinary configuration uses normal messages, which have no concept of priority, i.e., first in, first out; performance-critical operations, such as protection switching, mostly adopt the urgent message mechanism, with urgent messages inserted at the head of the queue. The MM message parsing module is responsible for receiving virtual device messages, placing them into the FIFO according to priority, parsing them with a single message processing thread, and calling the write-hardware interface for processing.
A timed task processes the performance alarm data and reports it to the virtual device side at regular intervals.
When multiple virtual devices issue configuration to the driver, the driver side must distinguish between the different virtual devices, and for two-layer and three-layer services the forwarding table entries of the different virtual devices must be planned. Two-layer and three-layer service table entries of different virtual devices must be isolated from one another; three-layer service configuration isolation is shown in fig. 11, and the specific implementation is as follows (two-layer, L2, refers to switching; three-layer, L3, refers to routing):
taking routing as an example, different virtual devices support configuring vrf and ip routing tables. A one-to-one mapping layer, vsid + config vrf -> chip vrf, is implemented in the upper SDA and the lower SDA. As described in fig. 11, vs is the virtual device ID number, and vrf/ip are the vrf (virtual routing and forwarding domain) and ip address used for route forwarding; when the configuration is issued to the bottom layer, the vrf segments of different virtual devices are guaranteed to be distinct. When service traffic is forwarded, the chip routing table entries looked up by the different virtual devices are distinguished through the vrf (as the key). DATA here can be understood as the service data generated in the virtual device that is to be forwarded.
For other forwarding table entries, a similar processing flow is adopted: independent chip resources are partitioned for each virtual device at the software level, ensuring that the service forwarding flows are mutually independent.
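The vsid + config vrf -> chip vrf mapping can be sketched as follows (the allocator below is an assumption; the patent only requires that the mapping be one-to-one and that different virtual devices receive distinct chip vrfs):

```python
class VrfMapper:
    """Sketch of the vsid + config vrf -> chip vrf one-to-one mapping: each
    (virtual device ID, configured vrf) pair gets its own chip vrf, so routing
    lookups of different virtual devices never collide on the chip."""
    def __init__(self):
        self.mapping = {}
        self.next_chip_vrf = 1       # assumed chip vrf allocator

    def chip_vrf(self, vsid, config_vrf):
        key = (vsid, config_vrf)
        if key not in self.mapping:  # allocate a fresh chip vrf per pair
            self.mapping[key] = self.next_chip_vrf
            self.next_chip_vrf += 1
        return self.mapping[key]

m = VrfMapper()
a = m.chip_vrf(vsid=1, config_vrf=10)
b = m.chip_vrf(vsid=2, config_vrf=10)   # same configured vrf, different device
c = m.chip_vrf(vsid=1, config_vrf=10)   # repeated lookup is stable
```

Because the chip vrf serves as the lookup key at forwarding time, giving each pair a distinct chip vrf is what isolates the routing tables of the virtual devices.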
The service specifications of each device are configurable. The FEID module (i.e., the ID management module, which implements allocation and management of bottom-layer chip ID resources) manages all bottom-layer chip resources uniformly; different virtual devices apply for resources from their own resource pools, and different service specifications can be customized as needed and are set by reading configuration files.
Forwarding between different virtual devices for two-layer services is consistent with three-layer services, with no essential difference; it is briefly described as follows (two-layer refers to switching, three-layer to routing; here this means L2/L3 services):
for centralized devices, no special virtualization handling is performed. For distributed devices, different ports of the same board may belong to different virtual devices, and ports on different boards may belong to the same virtual device. Therefore, to ensure that services are forwarded only within the same virtual device and that different virtual devices are isolated from one another, independent allocation and management of ID resources is mainly required.
The modules that need to be managed for the L2VPN service mainly include AC, VC, TUNNEL, LSP, and so on. The AC and LSP carry slot/port information, so their resources can be managed based on slot and virtual device ID; the VC and TUNNEL carry no slot/port information and belong to global configuration, so their ID resources can be managed based on the virtual device ID alone. This achieves independent allocation of resources and uniqueness of resource IDs, and since the IDs differ, different virtual devices are isolated from one another.
L2VPN refers to an L2 virtual private network, which includes two service models, VPWS and VPLS, and may be carried on different tunnel types. Assuming it is carried on an MPLS network, the user's two-layer data is transparently transmitted over the MPLS network.
The AC is the connection between the user and the service provider, responsible for accessing different users. A Virtual Circuit (VC) is a virtual link, a bidirectional virtual connection between two PE devices. A TUNNEL is used to carry PWs, and one TUNNEL can carry multiple PWs; the tunnel is a direct channel between the local PE and the peer PE, which can be an MPLS or GRE tunnel, and completes transparent data transmission between PEs. An LSP represents one of the different links on a tunnel and is used for user data transmission. For the specification configuration of each virtual device, the FEID module (i.e., the ID management module, which implements allocation and management of bottom-layer chip ID resources) performs unified management; by reading the configuration file, the module can specify the specifications of different virtual devices, and the specifications of different modules under different virtual devices can also be customized.
the specific implementation manner of ID allocation is as follows:
an AC module: connection between AC (Attachment Circuit, access circuit) user and service provider, responsible for accessing different users
The KEY value information of the module is mainly interface index, different AC interface indexes are different, and in addition, different virtual devices are considered, the interface indexes are possibly the same, so that when a certain AC is confirmed, the unique AC can be ensured by adding the virtual device ID information; namely, virtual equipment ID+interface index+slot position+port are used as KEY values to generate unique ID as an AC index, and different KYE values generate different AC indexes, so that different virtual equipment can be isolated from each other, as shown in FIG. 12; the ifindex (i.e. shorthand of interface index) represents an interface index, and different interface indexes represent different ports, and different ports can belong to the same slot or different slots; VM is actually a virtual device ID; what AC21 and AC11 want to express is the unique identifier generated by ifindx1+vm2 and ifindx1+vm1 above.
And a VC module: a Virtual Circuit (VC) Virtual link, a bi-directional Virtual connection between two PE devices.
The KEY value information of this module is mainly vc_id and peer_ip. The module has no slot information and must be created globally, which affects the service specification; therefore the virtual device ID information is added, and virtual device ID + VC_ID + Peer_IP is used as the KEY, created under the specified virtual device when the service is created, ensuring mutual isolation of different virtual devices, as shown in fig. 13.
LSP modules, which have slot information, follow the AC implementation; TUNNEL modules, which have no slot information, follow the VC implementation.
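The composite-key ID allocation for slot-aware modules (AC, LSP) and global modules (VC, TUNNEL) can be sketched as follows (the FeidAllocator class and the sequential ID counter are assumptions for illustration):

```python
class FeidAllocator:
    """Sketch of FEID-style ID allocation: slot-aware modules (AC, LSP) key on
    virtual device ID + ifindex + slot + port; global modules (VC, TUNNEL) key
    on virtual device ID + vc_id + peer_ip. Equal keys reuse the same ID, so
    IDs are unique per key and disjoint across virtual devices."""
    def __init__(self):
        self.ids = {}
        self.next_id = 1

    def _get(self, key):
        if key not in self.ids:
            self.ids[key] = self.next_id
            self.next_id += 1
        return self.ids[key]

    def ac_id(self, vm, ifindex, slot, port):
        # slot-aware: the virtual device ID disambiguates identical ifindexes
        return self._get(("AC", vm, ifindex, slot, port))

    def vc_id(self, vm, vc_id, peer_ip):
        # global: no slot information, keyed on virtual device ID + vc_id + peer
        return self._get(("VC", vm, vc_id, peer_ip))

f = FeidAllocator()
ac11 = f.ac_id(vm=1, ifindex=1, slot=1, port=1)   # ifindex1 + VM1
ac21 = f.ac_id(vm=2, ifindex=1, slot=1, port=1)   # same ifindex, VM2: distinct
vc1 = f.vc_id(vm=1, vc_id=100, peer_ip="10.0.0.2")
vc2 = f.vc_id(vm=2, vc_id=100, peer_ip="10.0.0.2")
```

Because the virtual device ID is part of every key, two virtual devices configuring the same interface index or the same vc_id/peer_ip still receive different resource IDs, which is exactly the isolation property the text describes.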
Example 3:
fig. 14 is a schematic architecture diagram of an implementation apparatus of a virtualized driver adaptation layer of a network device according to an embodiment of the present invention. The implementation apparatus of the network device virtualization driver adaptation layer of this embodiment includes one or more processors 21 and a memory 22. In fig. 14, a processor 21 is taken as an example.
The processor 21 and the memory 22 may be connected by a bus or otherwise, which is illustrated in fig. 14 as a bus connection.
The memory 22, as a non-volatile computer readable storage medium, is used to store non-volatile software programs and non-volatile computer-executable programs, such as the implementation method of the network device virtualization driver adaptation layer in embodiment 1. The processor 21 executes the implementation method of the network device virtualization driver adaptation layer by running the non-volatile software programs and instructions stored in the memory 22.
The memory 22 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 22 may optionally include memory located remotely from processor 21, which may be connected to processor 21 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 22, which when executed by the one or more processors 21, perform the method of implementing the network device virtualization driver adaptation layer in embodiment 1 described above, for example, performing the steps shown in fig. 2, 5, and 6 described above.
It should be noted that, because the content of information interaction and the execution process between the modules and units in the above-mentioned device and system are based on the same concept as the method embodiment of the present invention, specific details may be found in the description of the method embodiment and will not be repeated here.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the embodiments may be implemented by a program that instructs associated hardware, the program may be stored on a computer readable storage medium, the storage medium may include: read Only Memory (ROM), random access Memory (RAM, random Access Memory), magnetic or optical disk, and the like.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (9)

1. The method for realizing the network equipment virtualization driving adaptation layer is characterized by comprising the following steps:
splitting a thread which directly operates a chip into an independent driving process and at least one business process, wherein business components are borne by the split business processes, and the business components interact with the driving process in a message form;
the service process performs data mapping operation on service resources of the service process in the virtual equipment and data table items in the driving process;
the service resource of each service process comprises one or more of a Device Hardware Control Management (DHCM) component, a user data plane management (UDM) component, an alarm performance acquisition reporting (GATHER) component and a Performance Management (PM) component;
when the business resource in the business process forwards the business data object to the driving process, the method comprises the following steps:
converting the service data object into a corresponding data table item in a driving process according to the data mapping, and transmitting the data table item to an upper SDA in the service process;
and the upper SDA of the business process carries out validity judgment on the data table item, and the data table item passing the validity judgment is transmitted to the lower SDA of the driving process in a message interaction mode through the upper SDA of the business process, so that the lower SDA calls a driving interface to finish the operation of the corresponding data on the chip.
2. The method for implementing the virtualized driving adaptation layer of the network device according to claim 1, wherein the interaction of the messages is completed between the driving process and the business process through the shared memory, and the implementing method specifically comprises:
the sending side of the message modifies the semaphore of the receiving side, so as to notify the receiving side to perform a reading operation;
the receiving side judges whether the semaphore is modified or not, if the semaphore is modified, the receiving side reads a message linked list of the receiving side, and accordingly corresponding data content is obtained;
the sending side and the receiving side of the message are respectively a driving process and a business process, or respectively a business process and a driving process.
3. The method for implementing the virtualized driving adaptation layer of a network device according to claim 2, wherein when the receiving side that determines whether the semaphore is modified is the driving process, the driving process opens corresponding message receiving threads for one or more service processes, the number of service processes and the number of message receiving threads being in a one-to-one or many-to-one relationship, and the method further comprises:
the driving process sorts the messages acquired from each message receiving thread according to the preset type and then puts the messages into a message queue;
the message receiving thread reads and analyzes the message from the head of the message queue, and transmits the message to the lower SDA after the analysis is completed.
4. The method for implementing the network device virtualization driver adaptation layer according to claim 1, wherein for the GATHER component in the business process, the method further comprises:
the driving process carries out data table entry management according to the identification value of the driving data plane object DDPO in each service process, and the data table entry corresponding to the identification value of each DDPO stores alarm performance data required by each service process;
the driving process transmits the alarm performance data stored in the data table entry corresponding to the identification value of each DDPO to the upper SDA of the corresponding service process through the message interaction between the lower SDA and the upper SDA of the service process;
the GATHER component of each business process periodically collects alarm performance data from the corresponding upper SDA.
5. The method for implementing the virtualized driving adaptation layer of the network device according to claim 4, wherein the driving process performs data entry management according to the identification value of the driving data plane object DDPO in each service process, specifically comprising:
the driving process stores different data table items according to the configuration issued by the platform, and each data table item corresponds to an independent performance acquisition sub-module;
each acquisition unit in the performance acquisition sub-module corresponds to one performance acquisition, the acquisition units access the bottom layer interface in a single-thread mode, and an acquisition object is determined and stored according to the enabling setting of the performance acquisition.
6. The method for implementing the virtualized driving adaptation layer of network equipment according to claim 5, wherein the lower SDA in the driving process creates a timing acquisition task to poll and read each performance acquisition sub-module, assembles the corresponding data into a message, adds the message into a message queue, and pushes the message to the upper SDA in the service process for subsequent processing.
7. The method for implementing the virtualized driving adaptation layer of network equipment according to claim 1, wherein each virtual device supports configuration of a routing table of a virtual routing forwarding domain vrf, and a forwarding mapping relationship from the virtual device ID and virtual routing forwarding domain vrf in the service process to the chip routing forwarding domain in the driving process is implemented in the lower SDA;
each virtual device is distinguished through its virtual routing forwarding domain vrf, and when service data is forwarded, each virtual device looks up its own chip routing table entries and then completes the forwarding of external service data.
8. The method for implementing a virtualized driver adaptation layer of a network device according to claim 1, wherein, to implement forwarding of service data only in the same virtual device, different virtual devices are isolated from each other, and module resources to be managed for the L2VPN service include one or more of AC, VC, TUNNEL and LSPs, the method further comprising:
the module resource ID management of the AC and the LSP is realized through the slot positions and the virtual equipment ID numbers;
module resource ID management of the VCs and TUNNELs is managed by virtual device ID numbers;
therefore, independent allocation of module resources and uniqueness of module resource IDs are realized, and since the virtual device ID numbers differ, different virtual devices are isolated from one another.
9. An implementation apparatus of a network device virtualization driver adaptation layer, wherein the apparatus includes:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor for performing the method of implementing the network device virtualization driver adaptation layer of any one of claims 1-8.
CN202110983483.1A 2021-08-25 2021-08-25 Method and device for realizing network equipment virtualization driving adaptation layer Active CN113672411B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110983483.1A CN113672411B (en) 2021-08-25 2021-08-25 Method and device for realizing network equipment virtualization driving adaptation layer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110983483.1A CN113672411B (en) 2021-08-25 2021-08-25 Method and device for realizing network equipment virtualization driving adaptation layer

Publications (2)

Publication Number Publication Date
CN113672411A CN113672411A (en) 2021-11-19
CN113672411B true CN113672411B (en) 2023-08-11

Family

ID=78546306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110983483.1A Active CN113672411B (en) 2021-08-25 2021-08-25 Method and device for realizing network equipment virtualization driving adaptation layer

Country Status (1)

Country Link
CN (1) CN113672411B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115098220B (en) * 2022-06-17 2024-04-16 西安电子科技大学 Large-scale network node simulation method based on container thread management technology
CN114327664B (en) * 2022-03-15 2022-06-17 武汉普赛斯电子技术有限公司 Software management method for card-insertion type case equipment, computer equipment and storage medium
WO2024060228A1 (en) * 2022-09-23 2024-03-28 华为技术有限公司 Data acquisition method, apparatus and system, and storage medium
CN117240715B (en) * 2023-11-14 2024-01-23 湖南恒茂信息技术有限公司 Frame type switch service board card mixed management method

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577647A (en) * 2009-06-15 2009-11-11 中兴通讯股份有限公司 Alarm box in support of multi-VLAN and processing method of alarming thereof
CN102088367A (en) * 2010-12-10 2011-06-08 北京世纪互联工程技术服务有限公司 Method for quickly deploying in virtualization environment
CN102325178A (en) * 2011-09-07 2012-01-18 西安交通大学 Design method for virtual non-volatile flash storage devices based on a hypervisor framework
CN102857363A (en) * 2012-05-04 2013-01-02 运软网络科技(上海)有限公司 Automatic computing system and method for virtual networking
CN103797465A (en) * 2011-09-14 2014-05-14 阿尔卡特朗讯 Method and apparatus for providing isolated virtual space
CN104636076A (en) * 2013-11-15 2015-05-20 中国电信股份有限公司 Distributed block device driving method and system for cloud storage
CN105242872A (en) * 2014-06-18 2016-01-13 华中科技大学 Virtual cluster-oriented shared memory system
CN105700826A (en) * 2015-12-31 2016-06-22 华为技术有限公司 Virtualization method and device
CN105912892A (en) * 2016-04-08 2016-08-31 浪潮电子信息产业股份有限公司 Process protection method and framework based on cloud computing
CN106339257A (en) * 2015-07-10 2017-01-18 中标软件有限公司 Method and system for lightweighting client computer operating system and virtualized operating system
WO2017119918A1 (en) * 2016-01-05 2017-07-13 Hewlett Packard Enterprise Development Lp Virtual machine messaging
CN109062671A (en) * 2018-08-15 2018-12-21 无锡江南计算技术研究所 A lightweight high-performance interconnection-network software virtualization method
CN111177804A (en) * 2018-11-13 2020-05-19 江苏南大电子信息技术股份有限公司 System and method based on multi-platform data security isolation and service cooperative work
CN113256481A (en) * 2021-06-21 2021-08-13 腾讯科技(深圳)有限公司 Task processing method and device in graphic processor, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10474392B2 (en) * 2017-09-19 2019-11-12 Microsoft Technology Licensing, Llc Dynamic scheduling for virtual storage devices

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
An SDN network architecture for diversified network service convergence; Gong Xiangyang; Wang Wendong; ZTE Technology Journal (Issue 05); full text *

Also Published As

Publication number Publication date
CN113672411A (en) 2021-11-19

Similar Documents

Publication Publication Date Title
CN113672411B (en) Method and device for realizing network equipment virtualization driving adaptation layer
JP7085565B2 (en) Intelligent thread management across isolated network stacks
EP3461087B1 (en) Network-slice resource management method and apparatus
CN105207873B (en) Message processing method and device
CN101753362B (en) Configuring method and device of stacking virtual local area network of distributed network device
CA2679634C (en) A method and system for monitoring messages passed over a network
JP3640187B2 (en) Fault processing method for multiprocessor system, multiprocessor system and node
JP2007158870A (en) Virtual computer system and network communication method thereof
JPH0372739A (en) Communication system and routing information learning system
CN109586864A (en) Data transmission method, apparatus and system
CN104219327A (en) Distributed cache system
US10397353B2 (en) Context enriched distributed logging services for workloads in a datacenter
WO2010040716A1 (en) Queue manager and method of managing queues in an asynchronous messaging system
CN109960634A (en) Application program monitoring method, apparatus and system
CN101207522A (en) Method and apparatus for implementing configuration task scheduling
US9588685B1 (en) Distributed workflow manager
CN106941522B (en) Lightweight distributed computing platform and data processing method thereof
CN114363269A (en) Message transmission method, system, equipment and medium
CN106557690A (en) Method and apparatus for managing multi-container system
WO2021103657A1 (en) Network operation method, apparatus, and device and storage medium
CN109558235A (en) Processor scheduling method, device and computer equipment
CN110519147A (en) Data frame transmission method, device, equipment and computer readable storage medium
CN100550844C (en) Method for reducing characteristic information of redirected messages
CN107819622B (en) MAC Address management method and device
CN108282383A (en) Method and apparatus for implementing fault handling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant