CN110855468B - Message sending method and device - Google Patents


Info

Publication number
CN110855468B
Authority
CN
China
Prior art keywords
cpu
soc
message
cache
stored
Prior art date
Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application number
CN201910940648.XA
Other languages
Chinese (zh)
Other versions
CN110855468A (en)
Inventor
耿云志
李生
李涛
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Huawei Technologies Co Ltd
Priority to CN201910940648.XA
Publication of CN110855468A
Application granted
Publication of CN110855468B
Status: Active
Anticipated expiration


Classifications

    • H04L 41/0896: Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities (under H04L 41/08, Configuration management of networks or network elements)
    • G06F 9/45558: Hypervisor-specific management and integration aspects (under G06F 9/455, Emulation; Interpretation; Software simulation, e.g. virtualisation)
    • G06F 2009/45583: Memory management, e.g. access or allocation
    • G06F 2009/45595: Network integration; enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a message sending method and device, relates to the field of cloud networking, and helps to improve the I/O performance of client virtual machines in a cloud network system. The method includes: acquiring the message storage status of N CPU caches and an SOC cache. If messages are stored in M of the N CPU caches and messages are also stored in the SOC cache, the bandwidth allocation device sends the messages stored in the M CPU caches to the network card, and only after all of those messages have been sent does it send the messages stored in the SOC cache to the network card, where 1 ≤ M ≤ N and M is an integer. Alternatively, if messages are stored in M of the N CPU caches but no message is stored in the SOC cache, the bandwidth allocation device sends the messages stored in the M CPU caches to the network card.

Description

Message sending method and device
Technical Field
The present application relates to the field of the OVS (Open vSwitch) and the EVS (elastic virtual switch), and in particular, to a message sending method and apparatus.
Background
In a network system on a cloud, a plurality of virtual machines generally run on one host. A network flow table can be used to accelerate network response speed, and the creation, refresh, and lookup of the network flow table are usually handled by a central processing unit (CPU) in the host. However, because flow-table refreshes and lookups are frequent and the flow-table data volume is huge, they consume a large amount of CPU resources.
At present, a heterogeneous system-on-chip (SOC) offload scheme has been introduced to address the problem that refreshing and searching the network flow table consume a large amount of CPU resources. As shown in fig. 1, a network-on-cloud system under the SOC offload scheme includes two CPUs (e.g., CPU0 and CPU1), an SOC, bus devices (e.g., I/O channels, memory, video card, sound card, hard disk controller, hard disk, etc.), and a network card. The connection relationship between these devices (or apparatuses) is shown in fig. 1. The design idea of the offload scheme is to physically divide the x16 interface of the network card into two x8 interfaces: one x8 interface is used by the SOC to create, refresh, and look up the network flow table, thereby reducing CPU resource consumption, and the other x8 interface is used for CPU message transmission. However, under this scheme the network card, whose original bandwidth is x16, can use only x8 bandwidth for CPU message transmission, which may degrade the I/O performance of the virtual machines connected to the CPUs and thus lead to a poor user experience.
Summary of the Invention
The embodiment of the application provides a message sending method and device, which are beneficial to improving the IO performance of a client virtual machine in a cloud network system.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In a first aspect, a message sending method is provided, applied to a cloud network system that includes N CPUs, a system-on-chip (SOC), a network card, and a bandwidth allocation device, where the bandwidth allocation device is connected to the N CPUs, the SOC, and the network card, respectively. The bandwidth allocation device includes N CPU caches and an SOC cache: each CPU cache is used to buffer messages sent by the corresponding CPU to the bandwidth allocation device, and the SOC cache is used to buffer messages sent by the SOC to the bandwidth allocation device; N is a positive integer. The method includes: the bandwidth allocation device acquires the message storage status of the N CPU caches and the SOC cache. If messages are stored in M of the N CPU caches and messages are also stored in the SOC cache, the bandwidth allocation device sends the messages stored in the M CPU caches to the network card, and only after all of those messages have been sent does it send the messages stored in the SOC cache to the network card, where 1 ≤ M ≤ N and M is an integer. If messages are stored in M of the N CPU caches but no message is stored in the SOC cache, the bandwidth allocation device sends the messages stored in the M CPU caches to the network card. In this way, the bandwidth allocation device can preferentially ensure that the messages stored in the CPU caches are sent to the network card, which avoids the poor virtual-machine I/O performance caused by the SOC occupying the bandwidth of the network card.
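The two branches of the first aspect amount to a strict-priority drain: the CPU caches are emptied ahead of the SOC cache. The following is a minimal Python sketch under stated assumptions; the queue representation, names, and the `send` callback are illustrative and not part of the patent.

```python
from collections import deque

def drain_caches(cpu_caches, soc_cache, send):
    """Strict-priority sketch of the first aspect: messages buffered for the
    CPUs go to the network card first; the SOC's messages are sent only once
    every CPU cache has been fully drained."""
    # Send everything stored in the (non-empty) CPU caches first.
    for cache in cpu_caches:
        while cache:
            send(cache.popleft())
    # Only afterwards forward the SOC-originated messages.
    while soc_cache:
        send(soc_cache.popleft())

# Illustrative use: two CPU caches and one SOC cache.
sent = []
cpu0 = deque(["c0-a", "c0-b"])
cpu1 = deque(["c1-a"])
soc = deque(["s-a"])
drain_caches([cpu0, cpu1], soc, sent.append)
```

If the SOC cache is empty, the same function simply sends the CPU messages, matching the second branch of the design.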
In one possible design, the sending, by the bandwidth allocation device, of the messages stored in the M CPU caches to the network card includes: the bandwidth allocation device sends the message stored in the m-th CPU cache of the M CPU caches to the network card with the m-th probability, where 1 ≤ m ≤ M, m is an integer, and the sum of the 1st through M-th probabilities is 1. The m-th probability is the probability that, in one round of message sending by the bandwidth allocation device, the message stored in the m-th CPU cache is the one sent. The probabilities of any two CPU caches may be the same or different, and the probability of any CPU cache may be configured by a user or indicated by the SOC. In one implementation, the probability of each CPU cache is pre-configured by the user in the bandwidth allocation device. In another implementation, the SOC sends the user-configured probability of each CPU cache to the bandwidth allocation device in advance. Optionally, a configured probability may later be modified; for example, the SOC may modify the probability of a CPU cache according to a user instruction and issue the modified probability to the bandwidth allocation device. The probability of each CPU cache can therefore be set according to actual requirements, so that the messages stored in the CPU caches are preferentially sent to the network card, improving the I/O performance of the virtual machines on the CPUs.
In another possible design, the bandwidth allocation device sending the message stored in the m-th CPU cache with the m-th probability includes: the bandwidth allocation device selects, according to a selection algorithm, one CPU cache as a target from among the M CPU caches that still store unsent messages, and sends the messages in the target CPU cache, repeating this until the messages in all M CPU caches have been sent. The selection algorithm is one that makes the probability of sending the message stored in the m-th CPU cache equal to the m-th probability; for example, it may be an arbitration algorithm based on weights and priorities. Sending the messages stored in the CPU caches with specific probabilities allows the data in each CPU cache to be sent to the network card at a controlled rate, improving the I/O performance of the virtual machines on the CPUs.
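The weight-and-priority arbitration above can be illustrated as a weighted random pick over the CPU caches that still hold unsent messages. This is a hedged sketch, not the patented algorithm itself; the weight values, list representation, and function name are hypothetical.

```python
import random

def pick_target_cache(caches, weights, rng=random):
    """Among the caches that still hold unsent messages, pick one so that
    cache m is chosen with (approximately) its configured m-th probability,
    renormalized over the non-empty candidates."""
    candidates = [(cache, w) for cache, w in zip(caches, weights) if cache]
    if not candidates:
        return None  # nothing left to send
    total = sum(w for _, w in candidates)
    r = rng.uniform(0, total)
    acc = 0.0
    for cache, w in candidates:
        acc += w
        if r <= acc:
            return cache
    return candidates[-1][0]  # guard against floating-point rounding

# Illustrative use: cache 1 is empty, so only caches 0 and 2 can be picked.
caches = [["a"], [], ["b"]]
chosen = pick_target_cache(caches, [0.7, 0.2, 0.1])
```

A repeated loop of "pick a target, send one of its messages" then realizes the per-cache sending probabilities described in the design.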
In another possible design, the bandwidth allocation device prestores, for each of a plurality of resources of a first CPU among the M CPUs, a correspondence between the resource's attribute information in the first CPU domain and its attribute information in the SOC domain. Each resource is a virtual or physical resource that the SOC has allocated to the first CPU, and the attribute information includes an address or a bus/device/function (BDF) identifier. The method further includes: the bandwidth allocation device determines, according to the prestored correspondences, the SOC-domain attribute information corresponding to the first-CPU-domain attribute information carried in a first message, where the first message is any message in the first CPU cache; it then replaces the first-CPU-domain attribute information carried in the first message with the corresponding SOC-domain attribute information to obtain a second message. Sending the messages stored in the M CPU caches to the network card then includes sending the second message to the network card. In this way, when the CPU caches and the SOC cache share one network card, address conflicts between the virtual and physical resources of the network card are resolved, and the I/O performance of the virtual machines on the CPUs is improved.
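The prestored correspondence behaves like a translation table keyed by CPU-domain attribute information; producing the "second message" is a lookup plus a field replacement. The sketch below uses hypothetical BDF strings and a dictionary-based message; none of these representations come from the patent.

```python
def build_translation(mapping_pairs):
    """mapping_pairs: (cpu_domain_attr, soc_domain_attr) tuples, as
    pre-stored by the bandwidth allocation device."""
    return dict(mapping_pairs)

def translate_message(msg, cpu_to_soc):
    """Replace the CPU-domain attribute (address or BDF) carried in the
    message with the corresponding SOC-domain attribute, yielding the
    'second message'. The original message is left untouched."""
    soc_attr = cpu_to_soc[msg["attr"]]
    return {**msg, "attr": soc_attr}

# Illustrative use with made-up BDF values.
cpu_to_soc = build_translation([(("bdf", "02:00.0"), ("bdf", "81:00.0"))])
first_msg = {"attr": ("bdf", "02:00.0"), "payload": "data"}
second_msg = translate_message(first_msg, cpu_to_soc)
```

A missing entry raises a `KeyError`, which here stands in for whatever error handling a real device would apply to an unmapped resource.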
In another possible design, the message sending method further includes: the bandwidth allocation device receives, from the SOC, the SOC-domain attribute information of a first resource, where the first resource is a virtual or physical resource allocated to the first CPU; the bandwidth allocation device receives, from the first CPU, the first-CPU-domain attribute information of the first resource; and the bandwidth allocation device establishes a correspondence between the two. This correspondence relates the CPU-domain attribute information to the SOC-domain attribute information, so that when the CPU caches and the SOC cache share one network card, address conflicts between the virtual and physical resources of the network card are resolved and the I/O performance of the virtual machines on the CPUs is improved.
In another possible design, the method further includes: if the message storage status indicates that no message is stored in any of the N CPU caches and a message is stored in the SOC cache, the bandwidth allocation device sends the message stored in the SOC cache to the network card. The SOC can thus use the network card whenever no CPU cache holds a message, performing actions such as creating, refreshing, and looking up flow tables, which improves the utilization of the network card.
In a second aspect, a message sending method is provided, applied to a cloud network system that includes N CPUs, a system-on-chip (SOC), a network card, and a bandwidth allocation device, where the bandwidth allocation device is connected to the N CPUs, the SOC, and the network card, respectively. The bandwidth allocation device includes N CPU caches and an SOC cache: each CPU cache is used to buffer messages sent by the corresponding CPU to the bandwidth allocation device, and the SOC cache is used to buffer messages sent by the SOC to the bandwidth allocation device; N is a positive integer. The method includes: the SOC enumerates the virtual resources and physical resources on the network card and acquires their attribute information, where the attribute information includes an address of the SOC domain or a BDF identifier of the SOC domain; the SOC then sends the attribute information of the virtual and physical resources to the bandwidth allocation device. The attribute information of the physical resources is used by the bandwidth allocation device to establish correspondences between their SOC-domain and CPU-domain attribute information, and likewise for the virtual resources; these correspondences are used by the bandwidth allocation device when sending the messages sent by the CPUs to the network card.
Therefore, the SOC can not only allocate resources, but also realize that the network card resources are shared by the CPU and the SOC. This helps to improve the I/O performance of the virtual machine on the CPU.
In a third aspect, a bandwidth allocation apparatus is provided, which may be configured to perform any of the methods provided by any of the possible designs of the first aspect, for example, the bandwidth allocation apparatus may be a chip or a device.
In one possible design, the apparatus may be divided into functional modules according to the method provided in the first aspect or any possible design of the first aspect; for example, the functional modules may be divided corresponding to the functions, or two or more functions may be integrated into one processing module.
In one possible design, the apparatus may include a processor, a memory, and a transceiver. The transceiver may be used for the reception and transmission of messages. The memory is for storing a computer program. The processor is adapted to invoke the computer program to perform any of the methods provided by the first aspect and any of the possible designs of the first aspect. The transceiver may be a pin on a chip or a piece of circuitry.
In a fourth aspect, a system-on-chip SOC is provided that is operable to perform any of the methods provided by any of the possible designs of the second aspect described above.
In a fifth aspect, a computer-readable storage medium, such as a non-transitory computer-readable storage medium, is provided, having stored thereon a computer program (or instructions) which, when run on a computer, causes the computer to perform any of the methods provided by the first aspect or any possible design of the first aspect.
In a sixth aspect, a computer-readable storage medium, such as a non-transitory computer-readable storage medium, is provided, having stored thereon a computer program (or instructions) which, when run on a computer, causes the computer to perform any of the methods provided by the second aspect or any possible design of the second aspect.
In a seventh aspect, there is provided a computer program product enabling, when running on a computer, any one of the methods provided by the first aspect or any one of the possible designs of the first aspect to be performed.
In an eighth aspect, there is provided a computer program product which, when run on a computer, causes the performance of any one of the methods provided by the second aspect or any one of the possible designs of the second aspect.
In a ninth aspect, a network-on-cloud system is provided, including a plurality of CPUs, an SOC, and a bandwidth allocation device, where the bandwidth allocation device may be any bandwidth allocation device provided in the third aspect, and the SOC may be the system-on-chip provided in the fourth aspect.
In a tenth aspect, a chip comprises a processor and an interface, the processor being configured to call code stored in a memory to perform any of the methods provided by any of the possible designs of the first aspect; or perform any of the methods provided by any of the possible designs of the second aspect described above.
In a possible design, the chip may be divided into functional modules according to the method provided in the second aspect; for example, the functional modules may be divided corresponding to the functions, or two or more functions may be integrated into one processing module.
In one possible design, the chip may include a processor, a memory, and a transceiver. The transceiver may be used for the reception and transmission of messages. The memory is for storing a computer program. The processor is used to call the computer program to execute any one of the methods provided by the second aspect. The transceiver may be a pin on a chip or a piece of circuitry.
It is understood that any of the bandwidth allocation apparatuses, computer storage media, or computer program products provided above can be applied to the corresponding methods provided above, and therefore, the beneficial effects achieved by the bandwidth allocation apparatuses, the computer storage media, or the computer program products can refer to the beneficial effects in the corresponding methods, and are not described herein again.
Drawings
Fig. 1 is a schematic structural diagram of a heterogeneous SOC network system in the background art of the present application;
fig. 2 is a schematic diagram of an architecture of a network-on-cloud system to which the technical solution provided in the embodiment of the present application is applied;
fig. 3 is a schematic structural diagram of a bandwidth allocation apparatus to which the technical solution provided in the embodiment of the present application is applied;
fig. 4 is an interaction schematic diagram of a resource allocation method in a network-on-cloud system to which the technical solution provided in the embodiment of the present application is applied;
fig. 5 is an interaction diagram of a message sending method in a network on cloud system to which the technical solution provided in the embodiment of the present application is applied;
fig. 6 is a schematic diagram of a message sending method for intelligently allocating network card bandwidth by a bandwidth allocation device in a network on cloud system according to the technical solution provided in the embodiment of the present application;
fig. 7 is a schematic structural diagram of a bandwidth allocation apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a system on chip SOC according to an embodiment of the present application.
Detailed Description
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more. "plurality" means two or more.
In the embodiment of the present application, "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
Fig. 2 is a schematic diagram of an architecture of a network-on-cloud system according to an embodiment of the present disclosure. The network-on-cloud system shown in fig. 2 includes a Baseboard Management Controller (BMC) 100, one or more CPUs 101 (described as including CPU0 and CPU1 in fig. 2), an SOC102, a bandwidth allocation device 103, and a network card 104.
The baseboard management controller 100 can be used for baseboard control of the network-on-cloud system, such as monitoring, starting, restarting, power cycling, and powering off.
The CPU101 may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions). One or more virtual machines may be deployed on each CPU 101.
The SOC102 may refer to one or more devices, integrated circuits with dedicated targets, and/or processing cores for processing data (e.g., computer program instructions). The SOC102 may be configured to enumerate virtual resources and physical resources after it is powered on, and to allocate the enumerated virtual resources and physical resources to each CPU and to the SOC in the network-on-cloud system. As one example, the SOC102 may also be used to create, refresh, and look up network flow tables.
The bandwidth allocation device 103 is configured to receive, during the startup and configuration of the network system on the cloud, the attribute information of the virtual and physical resources allocated to each CPU, as sent by the SOC, and to generate image files of those resources. After the startup and configuration process is finished, it acquires the message storage status of each CPU cache and the SOC cache and sends messages to the network card accordingly. Optionally, the bandwidth allocation device 103 is further configured to establish, for the resources (virtual and physical) of each CPU, a correspondence between their attribute information in the CPU domain and in the SOC domain; after startup and configuration, it receives the messages sent by each CPU and the SOC into the corresponding CPU caches and SOC cache, and replaces the CPU-domain attribute information carried in messages from a CPU with the corresponding SOC-domain attribute information.
The network card 104 may be a network interface card (NIC); it is a semi-autonomous unit that includes a memory and a processor and is connected to the bandwidth allocation device 103 via a bus.
It should be noted that, in the cloud network system under the SOC offload scheme, the flow-table matching step in the SOC102's message processing has a long delay: it is slower than the CPU101's message processing and follows a different path to the network card. In the embodiments of the present application, the CPU101 may therefore be considered to access the network card through a fast path, and the SOC102 through a slow path. For example, the lookup flow of the network flow table includes the following steps: the SOC102 receives, from the network card, a message sent by a client virtual machine, parses the header information of the message, and generates an upload message containing that header information. It then searches the network flow table according to the header information carried in the upload message, finds the exact flow entry for the message, i.e., its exact forwarding path, stores that exact flow entry in its network flow table, and forwards the message according to it. Afterwards, if the SOC receives a message with the same header information, it directly looks up the exact flow entry in its stored network flow table and forwards the message accordingly, improving the processing speed of the message.
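The fast-path/slow-path lookup flow described above is essentially an exact-match cache placed in front of a full flow-table search. The following is a minimal sketch under stated assumptions; the function name, dictionary-based table, and the port label are illustrative, not the SOC's actual data structures.

```python
def forward(packet, exact_flow_table, slow_path_lookup):
    """Match the packet header against the exact flow table; on a miss,
    fall back to the slow path (the full network-flow-table search), then
    cache the result so later packets with the same header hit fast."""
    header = packet["header"]
    path = exact_flow_table.get(header)
    if path is None:                      # slow path: full flow-table search
        path = slow_path_lookup(header)
        exact_flow_table[header] = path   # store the exact flow entry
    return path

# Illustrative use: the second identical packet never touches the slow path.
slow_calls = []
def slow_path(header):
    slow_calls.append(header)
    return "port-7"  # hypothetical forwarding path

table = {}
p1 = forward({"header": ("10.0.0.1", "10.0.0.2", 80)}, table, slow_path)
p2 = forward({"header": ("10.0.0.1", "10.0.0.2", 80)}, table, slow_path)
```

The cached entry plays the role of the "exact flow table" entry the SOC stores after its first full lookup.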
Fig. 3 is a schematic structural diagram of the bandwidth allocation apparatus 103 according to an embodiment of the present disclosure. The bandwidth allocation apparatus 103 may be configured to execute the bandwidth allocation method provided in the embodiments of the present application, and may include at least one processor 201, a communication line 202, a memory 203, a communication interface 204, and a router 205.
The processor 201 may be a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with the teachings of the present application.
The communication line 202 may include a path for transferring information between the aforementioned components (e.g., the at least one processor 201, the memory 203, and the router 205).
The memory 203 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a Random Access Memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disk storage, optical disk storage (including compact disc, laser disc, optical disc, digital versatile disc, blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these. The memory 203 may be separate and coupled to the processor 201 via the communication line 202. The memory 203 may also be integrated with the processor 201. The memory 203 provided by the embodiment of the present application may generally have a nonvolatile property. The memory 203 is used for storing computer instructions for executing the scheme of the application, and is controlled by the processor 201 to execute. The processor 201 is configured to execute the computer instructions stored in the memory 203, thereby implementing the methods provided by the embodiments described below. Different caches are divided in the memory 203 for messages with different sources, and for example, a message with a source of the CPU0 is stored in the CPU0 cache, a message with a source of the CPU1 is stored in the CPU1 cache, and a message with a source of the SOC is stored in the SOC cache.
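The per-source partitioning of memory 203 can be sketched as one queue per traffic source, so that CPU0, CPU1, and SOC messages never share a buffer. Source labels and the class name below are illustrative assumptions.

```python
from collections import deque

class SourcePartitionedBuffer:
    """Sketch of the cache layout in memory 203: a separate queue
    (cache) is kept for each message source."""
    def __init__(self, sources):
        self.caches = {src: deque() for src in sources}

    def enqueue(self, source, msg):
        """Store an incoming message in the cache matching its source."""
        self.caches[source].append(msg)

# Illustrative use with the sources named in the description.
buf = SourcePartitionedBuffer(["cpu0", "cpu1", "soc"])
buf.enqueue("cpu0", "m1")
buf.enqueue("soc", "m2")
```

The scheduler can then inspect each queue independently when deciding what to send to the network card.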
The communication interface 204 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as Wide Area Networks (WAN), Local Area Networks (LAN), and the like.
The router 205 is a mini router including one uplink port and two downlink ports. It can be used for link extension, is connected via the communication line 202 to implement a message routing function, and supports two routing modes: address-based routing and BDF-based routing.
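The two routing modes of router 205 can be illustrated as follows: route by address range when the message carries an address, otherwise by its BDF identifier. The tables, port labels, and field names are hypothetical stand-ins for whatever the hardware actually uses.

```python
def route(msg, addr_table, bdf_table):
    """Pick an output port for a message. addr_table is a list of
    ((lo, hi), port) address ranges; bdf_table maps BDF strings to ports."""
    if "addr" in msg:
        # Address-based routing: find the range that claims this address.
        for (lo, hi), port in addr_table:
            if lo <= msg["addr"] < hi:
                return port
        raise KeyError("address not claimed by any port")
    # BDF-based routing.
    return bdf_table[msg["bdf"]]

# Illustrative tables for the two downlink ports.
addr_table = [((0x0000, 0x1000), "downlink0"), ((0x1000, 0x2000), "downlink1")]
bdf_table = {"01:00.0": "downlink0", "01:00.1": "downlink1"}
```

Real PCIe routing distinguishes more transaction types, but the two lookups above capture the address-based versus BDF-based split named in the text.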
Optionally, the computer instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
In a specific implementation, as an embodiment, the bandwidth allocation device 103 may include a plurality of processors, each of which may be a single-core (single-CPU) or multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The network-on-cloud system in the application needs to complete a configuration starting process before sending the message.
Fig. 4 is a schematic flowchart of a configuration starting process of a network-on-cloud system according to an embodiment of the present application. For example, the present embodiment may be applied to the system architecture shown in fig. 2, and the SOC, the CPU0, the CPU1, the bandwidth allocation device, and the network card in the present embodiment may respectively correspond to the SOC102, the CPU0, the CPU1, the bandwidth allocation device 103, and the network card 104 in fig. 2. The method shown in fig. 4 may comprise the steps of:
S201: The baseboard management controller receives a user instruction.
S202: The baseboard management controller sends an indication message to the SOC according to the user instruction, where the indication message is used for instructing the SOC to start and enumerate.
S203: The SOC starts and enumerates according to the indication message, thereby obtaining the attribute information of the enumerated virtual resources and the attribute information of the enumerated physical resources.
Specifically, the SOC is started according to the indication information, and enumerates virtual resources (e.g., all virtual resources) in the network on cloud system and enumerates physical resources (e.g., all physical resources) in the network on cloud system after the SOC is started.
Specifically, the enumeration process may include a series of actions, such as the SOC reading the capacity of a base address register (BAR) in the SOC and performing unified addressing, and reading the virtual resources and the physical resources and configuring them. The enumeration process is not limited by this application.
After the configuration of each virtual resource or physical resource is completed, a unique identifier corresponds to it; the identifier may be a BDF, i.e., the bus-device-function identifier. Each virtual or physical resource also has a unique address corresponding to it, which may be a bus address of 64 bits, 32 bits, or 48 bits, depending on the width of the address bus.
In one example, for convenience of description, enumerated virtual resources and physical resources are numbered below, where VF represents a virtual resource and PF represents a physical resource. The virtual resources and physical resources enumerated by the SOC are as shown in table 1 below:
TABLE 1
[Table 1 in the original publication is an image (Figure BDA0002222782940000071); its contents are not reproduced in this text.]
It should be noted that the PF corresponding to each VF is fixed, and PFs and VFs in the same row of Table 1 correspond to each other. Table 1 is only one example of the physical resources and virtual resources enumerated by the SOC; for example, one physical resource enumerated by the SOC is PF0, and the virtual resources corresponding to the physical resource PF0 include: VF01, VF02, VF03, VF04, VF05, VF06, VF07, and VF08. Tables 2 to 6 are similar and are not described in detail below.
It can be understood that SOC enumeration also exists in the system architecture shown in fig. 1, but the content enumerated differs: in this step, the SOC enumerates all virtual resources and physical resources of the network card, whereas the SOC in the background art enumerates only the virtual resources and physical resources physically partitioned to the SOC.
In one example, the attribute information of the virtual resource includes a BDF of the virtual resource and the attribute information of the physical resource includes a BDF of the physical resource. In another example, the attribute information of the virtual resource includes an address of the virtual resource and the attribute information of the physical resource includes an address of the physical resource. In another example, the attribute information of the virtual resource includes a BDF and an address of the virtual resource, and the attribute information of the physical resource includes a BDF and an address of the physical resource. The following specific examples are all described by taking the last example herein as an example.
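The last variant above (attribute information = BDF plus address) can be pictured with a small sketch. This is purely illustrative: the `AttributeInfo` and `Resource` names and the string-valued fields are assumptions, not from this application.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AttributeInfo:
    bdf: str       # bus-device-function identifier, e.g. "BDF3"
    address: str   # bus address, e.g. "ADDRESS3" (64/32/48 bits in practice)

@dataclass(frozen=True)
class Resource:
    ident: str           # resource identifier, e.g. "VF01" or "PF0"
    attrs: AttributeInfo

# The virtual resource VF01 from the example in S212 below
vf01 = Resource("VF01", AttributeInfo("BDF3", "ADDRESS3"))
print(vf01.attrs.bdf)  # BDF3
```

The first two variants are simply this structure with one of the two fields omitted.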
S204: the SOC allocates enumerated virtual resources and physical resources to the CPU (e.g., each CPU in the network system on the cloud) and the SOC, respectively.
The embodiment of the present application does not limit the specific implementation manner of S204.
In one implementation, the SOC allocates the enumerated virtual and physical resources to each CPU and SOC according to a rule (e.g., an average allocation rule). Therefore, automatic operation and maintenance of the network system on the cloud can be realized.
In another implementation, the SOC may include an application for managing virtual and physical resources, which may receive user instructions for allocating the virtual and physical resources for each CPU and the SOC; the SOC may then allocate the enumerated virtual and physical resources to each CPU and the SOC based on the instructions received by the application. For example, the enumerated virtual resources and physical resources are allocated to each CPU and the SOC directly according to the allocation scheme indicated by the instruction, or according to a scheme obtained by adjusting the allocation scheme indicated by the instruction.
The total amount of resources (including virtual resources and/or physical resources) allocated by the SOC to each CPU may be the same or different. The total amount of resources (including virtual resources and/or physical resources) that the SOC allocates to itself and to any CPU may be the same or different. The same resource (including virtual or physical) can typically only be allocated to one of the SOC and each CPU. Hereinafter, the case where the total amount of the SOC allocated to each CPU resource is the same will be described as an example.
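As an illustration only, an "average allocation" rule of the kind mentioned in S204 could be sketched as follows. The round-robin policy, function name, and resource labels are assumptions; the only property taken from the text is that each resource is allocated to exactly one of the SOC and the CPUs.

```python
def allocate_evenly(resources, owners):
    """Assign each resource to exactly one owner, round-robin."""
    allocation = {owner: [] for owner in owners}
    for i, res in enumerate(resources):
        allocation[owners[i % len(owners)]].append(res)
    return allocation

# Hypothetical: nine enumerated physical resources split across SOC, CPU0, CPU1
pfs = [f"PF{i}" for i in range(9)]
alloc = allocate_evenly(pfs, ["SOC", "CPU0", "CPU1"])
print(alloc["CPU0"])  # ['PF1', 'PF4', 'PF7']
```

With three owners and nine resources, each owner receives the same total, matching the equal-total case the text goes on to describe.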
In the following, taking the CPUs in the network on cloud system as CPU0 and CPU1 as examples, SOC allocation resources are described:
based on Table 1, the virtual and physical resources allocated by the SOC to the CPU0 may be as shown in Table 2 below:
TABLE 2
[Table 2 in the original publication is an image (Figure BDA0002222782940000081); its contents are not reproduced in this text.]
Based on table 1, the virtual and physical resources allocated by the SOC to the CPU1 may be as shown in table 3 below:
TABLE 3
[Table 3 in the original publication is an image (Figure BDA0002222782940000082); its contents are not reproduced in this text.]
Based on table 1, the virtual and physical resources allocated by the SOC to the SOC itself may be as shown in table 4 below:
TABLE 4
[Table 4 in the original publication is an image (Figure BDA0002222782940000083); its contents are not reproduced in this text.]
S205: the SOC transmits attribute information of a virtual resource and attribute information of a physical resource allocated to the CPU (e.g., each CPU) to the bandwidth allocation apparatus.
S206: the bandwidth allocation device generates an image file (i.e., an image virtual resource) of the virtual resource of the CPU (e.g., each CPU) according to the received attribute information of the virtual resource allocated to the CPU, where the image file of the virtual resource includes a physical resource on which the virtual resource is mounted, identification information of the virtual resource, and the like. According to the received attribute information of the physical resource allocated to the CPU, an image file of the physical resource of the CPU (i.e., an image physical resource) is generated, where the image file of the physical resource includes which virtual resources, identification information of the physical resource, and the like are mounted on the physical resource.
It should be noted that, for each CPU, the mirror virtual resource and the mirror physical resource of the CPU generated by the bandwidth allocation apparatus generally need to be renumbered. Also, the numbers of the mirrored virtual resources and the mirrored physical resources for different CPUs may be the same or different. The following are exemplified:
based on table 2, the mirrored virtual resources and mirrored physical resources of CPU0 generated by the bandwidth allocation apparatus may be as shown in table 5:
TABLE 5
[Table 5 in the original publication is an image (Figure BDA0002222782940000084); its contents are not reproduced in this text.]
Based on table 3, the mirrored virtual resources and mirrored physical resources of CPU1 generated by the bandwidth allocation apparatus may be as shown in table 6:
TABLE 6
[Table 6 in the original publication is an image (Figures BDA0002222782940000085 and BDA0002222782940000091); its contents are not reproduced in this text.]
S207: The bandwidth allocation device sends the SOC a notification message for notifying it that generation of the image files is finished.
S208: The SOC sends a message to the baseboard management controller, where the message is used for notifying the baseboard management controller that the SOC has completed starting and enumeration.
At this point, the startup configuration process of the SOC is completed.
S209: After receiving the message sent by the SOC, the baseboard management controller sends an instruction message to the CPU (e.g., each CPU), where the instruction message is used to instruct the CPU to start and enumerate.
S210: the CPU (e.g., each CPU) starts and enumerates as instructed by the instruction message. Specifically, the CPU is started according to the indication information, and after the CPU is started, the virtual resources (e.g., all virtual resources) allocated to the CPU by the SOC are enumerated, and the physical resources (e.g., all physical resources) allocated to the CPU by the SOC are enumerated.
Specifically, the CPU enumeration process may include a series of operations, such as the CPU reading the capacity of the base address register of the CPU and addressing uniformly, reading the mirror image virtual resource and the mirror image physical resource of the CPU generated by the bandwidth allocation device, and configuring the read mirror image virtual resource and the read mirror image physical resource. This enumeration process is not limited by this application.
For any one CPU, after configuration is completed, an address in the CPU domain and a BDF in the CPU domain are allocated to a mirror virtual resource and a mirror physical resource of the CPU, which are hereinafter referred to as a CPU domain address and a CPU domain BDF.
After the configuration is completed, the attribute information of the mirror image virtual resource of the CPU may include the CPU domain address and the CPU domain BDF of the mirror image virtual resource; the attribute information of the mirrored physical resource of the CPU may include the CPU domain address and the CPU domain BDF of the mirrored physical resource.
S211: Each CPU sends the attribute information of its mirror virtual resources and the attribute information of its mirror physical resources, obtained after the CPU completes enumeration, to the bandwidth allocation device.
S212: for each CPU, the bandwidth allocation device establishes an attribute information mapping relation according to the attribute information of the virtual resource of the CPU, the attribute information of the mirror image virtual resource generated by the virtual resource, the attribute information of the physical resource of the CPU and the attribute information of the mirror image physical resource generated by the physical resource.
Taking the example that the network on cloud system includes the CPU0 and the CPU1, the bandwidth allocation apparatus may establish a first attribute information mapping relationship according to the attribute information of the virtual resource of the CPU0 and the attribute information of the mirror image virtual resource of the CPU0, and the attribute information of the physical resource of the CPU0 and the attribute information of the mirror image physical resource of the CPU 0; and a second attribute information mapping relationship is established according to the attribute information of the virtual resource of the CPU1 and the attribute information of the mirror virtual resource of the CPU1, and the attribute information of the physical resource of the CPU1 and the attribute information of the mirror physical resource of the CPU 1.
The embodiment of the present application does not limit the specific representation form of the attribute information mapping relationship (e.g., the first attribute information mapping relationship and the second attribute information mapping relationship), and may be, for example, a form such as a table, a formula, or an if else statement. The following description will take the mapping relationship as a mapping table. In this case, the first attribute information mapping relationship may be labeled as a first attribute information mapping table, and the second attribute information mapping relationship may be labeled as a second attribute information mapping table.
The first attribute information mapping table is a correspondence table between attribute information of the CPU0 domain and attribute information of the SOC domain. For example: suppose the identifier of a virtual resource in the CPU0 domain is VF01 and its attribute information in the CPU0 domain is (BDF3, ADDRESS3), while the identifier of the same virtual resource in the SOC domain is VF21 and its attribute information in the SOC domain is (BDF23, ADDRESS23); then in the first attribute information mapping table, BDF3 corresponds one-to-one to BDF23, and ADDRESS3 corresponds one-to-one to ADDRESS23.
The second attribute information mapping table is a correspondence table between attribute information of the CPU1 domain and attribute information of the SOC domain. For example: suppose the identifier of a virtual resource in the CPU1 domain is VF01 and its attribute information in the CPU1 domain is (BDF3, ADDRESS3), while the identifier of the same virtual resource in the SOC domain is VF51 and its attribute information in the SOC domain is (BDF53, ADDRESS53); then in the second attribute information mapping table, BDF3 corresponds one-to-one to BDF53, and ADDRESS3 corresponds one-to-one to ADDRESS53.
Based on the example in S203, the attribute correspondence of some of the physical resources and virtual resources in the first attribute information mapping table may be as shown in Table 7 below:
TABLE 7
CPU0 domain BDF SOC domain BDF CPU0 domain address SOC domain address
BDF0(PF0) BDF02(PF2) address0(PF0) address02(PF2)
BDF1(PF1) BDF13(PF3) address1(PF1) address13(PF3)
BDF2(PF2) BDF24(PF4) address2(PF2) address24(PF4)
BDF3(VF01) BDF23(VF21) address3(VF01) address23(VF21)
In Table 7, "BDF0(PF0)", "BDF02(PF2)", "address0(PF0)", and "address02(PF2)" in one row respectively represent: the BDF of the physical resource PF0 in the CPU0 domain is BDF0, the address of this physical resource in the CPU0 domain is address0, the identifier of the physical resource PF0 in the SOC domain is PF2, the BDF of this physical resource in the SOC domain is BDF02, and the address of this physical resource in the SOC domain is address02. The other rows are read similarly and are not listed one by one.
Based on the example in S203, the attribute correspondence of some of the physical resources and virtual resources in the second attribute information mapping table may be as shown in Table 8 below:
TABLE 8
CPU1 domain BDF SOC domain BDF CPU1 domain address SOC domain address
BDF0(PF0) BDF05(PF5) address0(PF0) address05(PF5)
BDF1(PF1) BDF16(PF6) address1(PF1) address16(PF6)
BDF2(PF2) BDF27(PF7) address2(PF2) address27(PF7)
BDF3(VF01) BDF33(VF31) address3(VF01) address33(VF31)
It can be understood that, in the technical solution provided in the embodiments of the present application, the CPU0, the CPU1, and the SOC share one network card. When a CPU (e.g., the CPU0 or the CPU1) enumerates, it reads and configures mirror virtual resources and mirror physical resources; therefore, the attribute information of a virtual resource obtained after the CPU enumerates differs from the attribute information of the real virtual resource, and the attribute information of a physical resource obtained after the CPU enumerates differs from the attribute information of the real physical resource. When the SOC enumerates, it directly enumerates all virtual resources and physical resources of the network card, so the attribute information the SOC obtains for the virtual resources and the physical resources is their real attribute information. In general, a message carries attribute information indicating the destination to which the message is to be transmitted. When the CPU0 or the CPU1 sends a message, the attribute information carried in the message is the attribute information of a mirror virtual resource or of a mirror physical resource. Because the mirror virtual resources and mirror physical resources both reside in the bandwidth allocation device, the message cannot be sent to its real destination but arrives at the bandwidth allocation device instead. For example, the attribute information carried in a message sent by the CPU0 is a CPU0 domain address or a CPU0 domain BDF.
Therefore, the first attribute information mapping table and the second attribute information mapping table need to be established respectively to perform the conversion of the attribute information carried in the CPU messages.
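As an illustrative sketch only, the first attribute information mapping table and the replacement performed in S304 below could be modeled with ordinary dictionaries. The entries mirror the rows of Table 7; the helper name and header-field layout are assumptions.

```python
# First attribute information mapping table (CPU0 domain -> SOC domain),
# transcribed from Table 7
CPU0_TO_SOC_BDF = {"BDF0": "BDF02", "BDF1": "BDF13",
                   "BDF2": "BDF24", "BDF3": "BDF23"}
CPU0_TO_SOC_ADDR = {"address0": "address02", "address1": "address13",
                    "address2": "address24", "address3": "address23"}

def translate_header(header):
    """Replace CPU0-domain attribute info with the SOC-domain equivalent."""
    return {
        "bdf": CPU0_TO_SOC_BDF.get(header.get("bdf"), header.get("bdf")),
        "addr": CPU0_TO_SOC_ADDR.get(header.get("addr"), header.get("addr")),
    }

# A message carrying the CPU0-domain attributes of VF01 is rewritten so the
# network card sees the real (SOC-domain) attributes
new_header = translate_header({"bdf": "BDF3", "addr": "address3"})
print(new_header)  # {'bdf': 'BDF23', 'addr': 'address23'}
```

A second pair of dictionaries built from Table 8 would play the same role for messages from the CPU1 cache.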
It should be noted that the above description takes a network on cloud system including two CPUs and one SOC as an example. In practical application, the network on cloud system may include only one CPU, or more than two CPUs, and the SOC may allocate the enumerated virtual resources and physical resources according to the CPUs actually included in the system. The bandwidth allocation device establishes a mirror virtual resource file and a mirror physical resource file for each CPU according to the attribute information of the allocated virtual resources and physical resources, and establishes, for each CPU, a mapping relationship between the attribute information of the virtual resources allocated to that CPU and their attribute information in the SOC domain, as well as a mapping relationship between the attribute information of the physical resources allocated to that CPU and their attribute information in the SOC domain.
The following describes the steps of the message sending method. After the network on cloud system completes the startup configuration process of S201 to S212, it can support deployment of virtual machines. It should be noted that each CPU (e.g., the CPU0 or the CPU1) in the network on cloud system may be deployed with one or more virtual machines; virtual machines may be deployed on one CPU or on multiple CPUs, as needed in actual application, and the present application does not limit this. Fig. 5 takes the case where both the CPU0 and the CPU1 are deployed with a plurality of virtual machines that send messages as an example.
The method for sending the message comprises the following steps:
S301: Each CPU (e.g., the CPU0 or the CPU1) and the SOC respectively receive user instructions.
Specifically, when any one of the virtual machines deployed on a CPU (e.g., the CPU0 or the CPU1) receives a user instruction, the CPU on which that virtual machine is deployed receives the instruction.
S302: each CPU (e.g., CPU0 or CPU1) and SOC sends a message to the bandwidth allocation device based on the user instruction.
It should be noted that, each CPU (e.g., CPU0 or CPU1) and SOC may send messages to the bandwidth allocation apparatus in parallel. Of course, the embodiments of the present application are not limited thereto. The transmitted message may be a PCIE message.
In addition, for the SOC and each CPU in the network on cloud system, a message is sent to the bandwidth allocation apparatus whenever there is a message transmission demand. S301 to S302 above are only an example and do not limit the specific manner in which the bandwidth allocation apparatus receives messages in this embodiment. Moreover, in actual implementation, during a given time period, one or more of the SOC and the CPUs in the network on cloud system may have a message sending requirement, or none of them may.
S303: the bandwidth allocation device receives messages sent by each CPU (such as CPU0 or CPU1) and SOC, and stores the received messages to corresponding caches.
Illustratively, the bandwidth allocation device receives a message sent by the CPU0 and stores the message into the CPU0 cache; receives a message sent by the CPU1 and stores it into the CPU1 cache; and receives a message sent by the SOC and stores it into the SOC cache.
S304: For a message in a CPU cache (e.g., the CPU0 cache or the CPU1 cache), the bandwidth allocation device searches the attribute information mapping relationship corresponding to the cache in which the message is stored, and replaces the CPU domain attribute information (of the CPU0 domain or the CPU1 domain) carried in the message with the SOC domain attribute information that corresponds to it in the mapping relationship, thereby obtaining a new message.
When the attribute information carried in the message is a CPU domain address, replacing the CPU domain address with an SOC domain address; and when the attribute information carried in the message is the CPU domain BDF, replacing the CPU domain BDF with the SOC domain BDF.
For example, for a packet stored in the cache of the CPU0, the bandwidth allocation apparatus searches the first attribute information mapping relationship for attribute information under the SOC domain corresponding to the attribute information carried in the packet header domain, and replaces the attribute information in the packet header domain with the found attribute information under the SOC domain to obtain a new packet.
For the message stored in the cache of the CPU1, the bandwidth allocation apparatus searches the second attribute information mapping relationship for the attribute information in the SOC domain corresponding to the attribute information carried in the message header domain, and replaces the attribute information in the message header domain with the found attribute information in the SOC domain to obtain a new message.
Specifically, the messages stored in the CPU caches (including the CPU0 cache and the CPU1 cache) may be one or more of mem messages, IO messages, cfg messages, and msg messages.
It should be noted that S304 may be executed before the message is stored in the cache in S303, or may be executed after the message is stored in the cache, which is not limited in this application.
S305: the bandwidth allocation device obtains the message storage condition in each of the CPU cache (such as the CPU0 cache and the CPU1 cache) and the SOC cache.
Optionally, the bandwidth allocation apparatus periodically determines whether each CPU cache (e.g., the CPU0 cache and the CPU1 cache) and the SOC cache store unsent messages, and obtains the message storage condition of each CPU cache and the SOC cache according to the determination result. An unsent message refers to a message whose attribute information has already been replaced in step S304 but that has not yet been sent to the network card; the term has the same meaning below.
Taking the CPUs included in the network on cloud system as the CPU0 and the CPU1 as examples, the message storage condition may include:
Case 1: the CPU0 cache stores unsent messages, the CPU1 cache stores unsent messages, and the SOC cache stores unsent messages;
Case 2: the CPU0 cache stores unsent messages, the CPU1 cache stores unsent messages, and the SOC cache stores no unsent messages;
Case 3: the CPU0 cache stores unsent messages, the CPU1 cache stores no unsent messages, and the SOC cache stores no unsent messages;
Case 4: the CPU0 cache stores no unsent messages, the CPU1 cache stores no unsent messages, and the SOC cache stores no unsent messages;
Case 5: the CPU0 cache stores unsent messages, the CPU1 cache stores no unsent messages, and the SOC cache stores unsent messages;
Case 6: the CPU0 cache stores no unsent messages, the CPU1 cache stores unsent messages, and the SOC cache stores unsent messages;
Case 7: the CPU0 cache stores no unsent messages, the CPU1 cache stores unsent messages, and the SOC cache stores no unsent messages;
Case 8: the CPU0 cache stores no unsent messages, the CPU1 cache stores no unsent messages, and the SOC cache stores unsent messages.
The method is extensible; for example, when the network on cloud system includes N CPUs, the message storage conditions may include the following cases one to four:
Case one: messages are stored in M of the N CPU caches in the network on cloud system, and messages are stored in the SOC cache, where M is greater than or equal to 1 and less than or equal to N, and M and N are integers. For example, cases 1, 5, and 6 above.
Case two: messages are stored in M of the N CPU caches in the network on cloud system, but no message is stored in the SOC cache. For example, cases 2, 3, and 7 above.
Case three: no message is stored in any of the N CPU caches in the network on cloud system, and a message is stored in the SOC cache. For example, case 8 above.
Case four: no message is stored in the SOC cache or in any of the N CPU caches in the network on cloud system. For example, case 4 above.
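The reduction of the eight concrete cases to cases one to four can be sketched as follows: only "does any CPU cache hold unsent messages" and "does the SOC cache hold unsent messages" matter. The function name and the queue-length representation are illustrative assumptions.

```python
def classify(cpu_caches, soc_cache):
    """cpu_caches: list of unsent-message counts per CPU cache;
    soc_cache: unsent-message count of the SOC cache."""
    any_cpu = any(n > 0 for n in cpu_caches)
    soc = soc_cache > 0
    if any_cpu and soc:
        return "case one"    # e.g. cases 1, 5, 6 above
    if any_cpu:
        return "case two"    # e.g. cases 2, 3, 7 above
    if soc:
        return "case three"  # case 8 above
    return "case four"       # case 4 above

print(classify([3, 0], 2))  # case one   (corresponds to case 5)
print(classify([0, 0], 0))  # case four  (corresponds to case 4)
```

The sketch generalizes directly to N CPU caches, since only the count of non-empty CPU caches (M ≥ 1 or M = 0) is inspected.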
Taking the CPUs in the network on cloud system as the CPU0 and the CPU1 as an example: if the probability of the CPU0 cache is configured as a fraction P, where P is greater than or equal to 0 and less than or equal to 1, then the probability of the CPU1 cache is 1-P. The probability of the CPU0 cache and the probability of the CPU1 cache may or may not be equal; for example, when they are not equal, the probability of the CPU0 cache and the probability of the CPU1 cache may be 60% and 40%, respectively.
S306: The bandwidth allocation device sends messages (specifically, unsent messages) to the network card based on the acquired message storage condition of each CPU cache (e.g., the CPU0 cache or the CPU1 cache) and the SOC cache. Specifically:
First, when the message storage condition is case one, the bandwidth allocation device preferentially sends the messages stored in the M CPU caches, and sends the messages stored in the SOC cache only after all the messages stored in the M CPU caches have been sent.
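A minimal sketch of this case-one ordering, under the simplifying assumption that the CPU caches are drained in plain FIFO order (the per-message choice among CPU caches is actually probabilistic, as described below); the function name is illustrative.

```python
def send_order(cpu_caches, soc_cache):
    """Return the order in which messages leave for the network card:
    all CPU-cache messages first, SOC-cache messages last."""
    sent = []
    for cache in cpu_caches:   # CPU messages have priority
        sent.extend(cache)
    sent.extend(soc_cache)     # SOC messages only once CPU caches are empty
    return sent

print(send_order([["c0-1"], ["c1-1", "c1-2"]], ["soc-1"]))
# ['c0-1', 'c1-1', 'c1-2', 'soc-1']
```

Whatever arbitration is used among the CPU caches, the invariant is that no SOC-cache message precedes a CPU-cache message.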
Optionally, when M is greater than or equal to 2, the sending by the bandwidth allocation device of the messages stored in the M CPU caches may include: obtaining the probability of each of the M CPU caches; and sending the messages stored in the mth CPU cache of the M CPU caches with the mth probability, where m is greater than or equal to 1 and less than or equal to M, and m is an integer. The mth probability is the probability of the mth CPU cache, and the sum of the first probability to the Mth probability is 1.
The probability of a CPU cache refers to the probability that the bandwidth allocation apparatus sends a message stored in that CPU cache in one message-sending process. The probabilities of any two CPU caches may be the same or different. The probability of any one CPU cache may be configured by the user or indicated by the SOC. In one implementation, the probability of any one CPU cache is pre-configured by the user in the bandwidth allocation device. In another implementation, the SOC sends the probability of any CPU cache pre-configured by the user to the bandwidth allocation device in advance. Optionally, for each CPU cache, the probability may be modified after being configured; for example, the SOC may modify the probability of a CPU cache according to an instruction of the user and issue the modified probability to the bandwidth allocation device.
The specific implementation manner of how the bandwidth allocation device sends the messages in the M CPU caches according to the probability of each CPU cache in the M CPU caches is not limited in the embodiments of the present application. For example:
In one implementation, the bandwidth allocation device selects, according to a selection algorithm, one CPU cache as a target CPU cache from among the CPU caches in the M CPU caches that store unsent messages, and sends one message from the target CPU cache; this is repeated until the messages in the M CPU caches have all been sent. The selection algorithm is one that makes the probability of sending a message stored in the mth CPU cache equal to the mth probability.
For example, the selection algorithm may be an arbitration algorithm based on weight or priority, or the like. For convenience of description, taking the CPUs in the network on cloud system as the CPU0 and the CPU1, and taking the case where the acquired message storage condition is case 1 above and the probabilities of the CPU0 cache and the CPU1 cache are both 50% as an example, a random-number selection algorithm is explained as follows. The probability of the CPU0 cache is 50%, and the corresponding value range is [1, 50]; the probability of the CPU1 cache is 50%, and the corresponding value range is [51, 100]. The bandwidth allocation device generates a random number, which is any integer from 1 to 100. When the random number belongs to [1, 50], the CPU0 cache is taken as the target CPU cache; when the random number belongs to [51, 100], the CPU1 cache is taken as the target CPU cache. Then, a messages stored in the target CPU cache are sent, where a is an integer greater than or equal to 1. The above is repeated until all the messages stored in the M CPU caches have been sent.
It can be understood that when the probability of the CPU0 cache is not equal to the probability of the CPU1 cache, the value ranges corresponding to the two CPU caches are not evenly divided. For example, when the probability of the CPU0 cache and the probability of the CPU1 cache are 60% and 40%, respectively, and the generated random number is still an integer in the range 1 to 100, the value range corresponding to the CPU0 cache may be the integers in [1, 60], and the value range corresponding to the CPU1 cache may be the integers in [61, 100]. Other examples are not listed. The method can also be extended to scenarios with three or more CPU caches.
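The random-number selection just described can be sketched as follows, here using the 60%/40% split as the example. The seeded generator, function name, and range table are illustrative assumptions.

```python
import random

# Each CPU cache gets a sub-range of 1..100 proportional to its configured
# probability: CPU0 -> 60%, CPU1 -> 40% (the unequal-split example above)
RANGES = {"CPU0": range(1, 61), "CPU1": range(61, 101)}

def pick_target(rng):
    """Draw a uniform integer in 1..100 and map it to a target CPU cache."""
    n = rng.randint(1, 100)
    for cache, span in RANGES.items():
        if n in span:
            return cache

rng = random.Random(0)  # seeded only to make the sketch reproducible
picks = [pick_target(rng) for _ in range(10_000)]
print(round(picks.count("CPU0") / len(picks), 1))  # ≈ 0.6
```

Over many draws, each cache is selected with a frequency close to its configured probability, which is exactly the property the selection algorithm is required to have.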
In addition, still taking as an example the case where the acquired message storage condition is condition 1 above and the probabilities of the CPU0 cache and the CPU1 cache are both 50%:
Optionally, in one period, the bandwidth allocation device sends messages stored in the CPU0 cache to the network card n times and sends messages stored in the CPU1 cache to the network card n times, where n is a positive integer. This is repeated until all messages stored in the M CPU caches are sent.
In one example, after messages stored in the CPU0 cache are sent to the network card m consecutive times, messages stored in the CPU1 cache are sent to the network card m consecutive times, and so on, until all messages stored in the M CPU caches are sent, where m is an integer greater than or equal to 2 and less than a threshold. It can be understood that the longer one period is, the closer the proportion of the period in which the bandwidth allocation device sends messages stored in the CPU0 cache to the network card approaches 50%, and likewise the closer the proportion for messages stored in the CPU1 cache approaches 50%.
In another example, messages stored in the CPU0 cache and the CPU1 cache are sent to the network card alternately in a loop, so that consecutive messages come from different CPU caches. For example, if a message stored in the CPU0 cache is sent this time, a message stored in the CPU1 cache is sent next time, then a message stored in the CPU0 cache again, then one stored in the CPU1 cache, and so on, until all messages stored in the CPU0 cache and the CPU1 cache are sent.
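A minimal sketch of the alternating (loop) sending order described above, assuming for illustration that each cache is a FIFO queue of messages; the function and cache names are invented for the example.

```python
from collections import deque

def round_robin_send(caches):
    """Alternately send one message from each non-empty CPU cache, so
    that consecutive sends come from different caches whenever possible.

    caches: dict mapping cache id -> deque of queued messages.
    Returns the list of (cache_id, message) pairs in send order."""
    order = []
    ids = list(caches)
    while any(caches[c] for c in ids):
        for c in ids:
            if caches[c]:
                # Send the oldest queued message of this cache.
                order.append((c, caches[c].popleft()))
    return order
```

Once one cache empties, the remaining cache simply keeps sending, which matches the behavior where an idle CPU cache cedes its share of the period.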
It should be noted that if the probability of one CPU cache is a, then the proportion of the whole period during which the bandwidth allocation apparatus sends messages stored in that CPU cache is a, where 0 ≤ a ≤ 1.
Optionally, when M is equal to 1, the sending, by the bandwidth allocation apparatus, of the messages stored in the M CPU caches may include: directly sending the messages stored in that CPU cache. In this case, the probability of the bandwidth allocation apparatus sending a message from that CPU cache is 100%.
For example, when the acquired message storage condition is condition 5 above, the bandwidth allocation device sends the messages stored in the CPU0 cache to the network card with a probability of 100%; after all messages stored in the CPU0 cache are sent, the bandwidth allocation device sends the messages stored in the SOC cache to the network card with a probability of 100%.
For another example, when the acquired message storage condition is condition 6 above, the bandwidth allocation device sends the messages stored in the CPU1 cache to the network card with a probability of 100%; after all messages stored in the CPU1 cache are sent, the bandwidth allocation device sends the messages stored in the SOC cache to the network card with a probability of 100%.
It should be noted that if no message is stored in the CPU0 cache during half of a period, the messages stored in the SOC cache are sent to the network card during that half, so the time spent sending messages stored in the CPU0 cache accounts for 50% of the whole period. If messages are stored in the CPU0 cache throughout a period, the probability that the messages stored in the CPU0 cache are sent to the network card is 100%.
This application covers the preferential transmission of messages stored in a CPU cache (including the CPU0 cache and the CPU1 cache). From the perspective of the bandwidth allocation device, the CPUs are treated fairly. The above examples are only some of the cases that may be encountered in this application; other variations in the proportion of the period that the bandwidth allocation device spends sending messages stored in a CPU cache, caused by the amount of data in each cache (including the CPU caches and the SOC cache), also fall within the protection scope of this application.
Case two: when the message storage condition is the second condition, the bandwidth allocation device sends the messages stored in the M CPU caches.
Optionally, when M is greater than or equal to 2, the bandwidth allocation apparatus may send the messages stored in the M CPU caches in the same manner as in case one when M is greater than or equal to 2.
Optionally, when M is equal to 1, the bandwidth allocation apparatus sends the messages stored in the one CPU cache that stores messages with a probability of 100%. For example, when the acquired message storage condition is condition 3, the bandwidth allocation apparatus sends the messages stored in the CPU0 cache with a probability of 100%. For another example, when the acquired message storage condition is condition 7, the bandwidth allocation apparatus sends the messages stored in the CPU1 cache with a probability of 100%.
Case three: when the message storage condition is the third condition, the bandwidth allocation device sends the messages stored in the SOC cache with a probability of 100%.
Case four: when the message storage condition is the fourth condition, the bandwidth allocation device does not send any message. For example, condition 4 above.
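The four cases above reduce to a single dispatch decision: CPU caches are always drained before the SOC cache. The sketch below is illustrative only (the function name, cache names, and count-based representation are assumptions, not from the patent):

```python
def dispatch_order(cpu_caches, soc_count):
    """Decide the send order from the storage condition in the CPU
    caches and the SOC cache, mirroring cases one through four:
    - messages in >= 1 CPU cache and in the SOC cache: CPU caches first,
      then the SOC cache (case one)
    - messages only in CPU caches: send those CPU caches (case two)
    - messages only in the SOC cache: send the SOC cache (case three)
    - no messages anywhere: send nothing (case four)

    cpu_caches: dict mapping cache id -> number of stored messages.
    soc_count:  number of messages stored in the SOC cache.
    Returns an ordered list of cache groups to drain in turn."""
    cpus_with_msgs = [c for c, n in cpu_caches.items() if n > 0]
    if cpus_with_msgs and soc_count > 0:
        return [cpus_with_msgs, ["soc"]]   # case one
    if cpus_with_msgs:
        return [cpus_with_msgs]            # case two
    if soc_count > 0:
        return [["soc"]]                   # case three
    return []                              # case four
```

Within the first group, the per-cache probabilities (random selection, round robin, and so on) from the earlier discussion would then decide the interleaving.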
Fig. 6 shows the specific process in which the bandwidth allocation device manages message transmission in steps S305 to S306 according to the acquired message storage conditions in the CPU0 cache, the CPU1 cache, and the SOC cache, together with the peak bandwidths that CPU0 and CPU1 can reach when messages are sent in this manner, where the peak bandwidth represents the maximum number of messages that a CPU can send per unit time. In fig. 6, the total network card bandwidth is taken as ×16 for the example.
When the acquired message storage condition is condition 1 or 2, the peak bandwidth that each of CPU0 and CPU1 can use is ×8.
When the acquired message storage condition is any one of conditions 3, 7, and 8, the peak bandwidth that each of CPU0 and CPU1 can use is ×16.
When the acquired message storage condition is condition 5 or 6, the peak bandwidth that each of CPU0 and CPU1 can use is ×16.
The above takes preferentially sending the messages stored in the CPU caches as an example. In actual use, probability values may instead be allocated as a whole to the messages stored in the CPU caches and the messages stored in the SOC cache, as required. In this overall allocation, if the sum of the probabilities allocated to the messages stored in the CPU caches is Q, the probability allocated to the messages stored in the SOC cache is 1 - Q, where 0 ≤ Q ≤ 1. If the acquired message storage condition is condition one, the bandwidth allocation device sends the messages stored in the M CPU caches and the messages stored in the SOC cache according to these probabilities.
Optionally, when M is greater than or equal to 2, the sending, by the bandwidth allocation device, of the messages stored in the M CPU caches and the messages stored in the SOC cache may include: acquiring the probability of each of the M CPU caches and the probability of the SOC cache; and sending the messages stored in the mth CPU cache of the M CPU caches with the mth probability, where 1 ≤ m ≤ M and m is an integer. The mth probability is the probability of the mth CPU cache, and the sum of the first probability through the Mth probability is Q. For example, the sum of the probabilities allocated to the messages stored in the CPU caches is 80%, and the probability allocated to the SOC cache is 20%.
Likewise, the probability of any CPU cache and of the SOC cache may be configured by the user or indicated by the SOC. In one implementation, the probability of any CPU cache and of the SOC cache is pre-configured in the bandwidth allocation device by the user. In another implementation, the SOC sends the user's pre-configured probability of any CPU cache and of the SOC cache to the bandwidth allocation device. Optionally, for each CPU cache, the probability may be modified after being configured; for example, the SOC may modify the probability of a CPU cache according to a user instruction and issue the modified probability to the bandwidth allocation device.
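The overall probability split (Q shared among the CPU caches, 1 - Q to the SOC cache) can be sketched as follows; the function name and the relative-weight scheme are illustrative assumptions, not the patented configuration mechanism.

```python
def normalize_probs(cpu_weights, q):
    """Split the total probability between the CPU caches and the SOC
    cache: the CPU caches together receive q, the SOC cache 1 - q.

    cpu_weights: dict mapping CPU cache id -> relative weight among
                 the CPU caches (need not sum to anything particular).
    q: total probability allotted to all CPU caches, 0 <= q <= 1.
    Returns a dict of absolute probabilities summing to 1."""
    total = sum(cpu_weights.values())
    probs = {c: q * w / total for c, w in cpu_weights.items()}
    probs["soc"] = 1.0 - q
    return probs
```

For instance, two equally weighted CPU caches with q = 0.8 each receive 40%, and the SOC cache receives the remaining 20%, matching the 80%/20% example in the text.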
In the above embodiment, because the bandwidth allocation device is added, the messages sent by each CPU and by the SOC are all uniformly delivered to the corresponding cache of the bandwidth allocation device. The bandwidth allocation device then sends the messages stored in each cache according to the message storage condition in each cache, so as to dynamically and intelligently divide the physical bandwidth of the network card, which helps improve I/O performance and thus user experience. For example, the SOC may be used to create, refresh, and look up network flow tables. Therefore, the technical solution provided in the embodiments of this application helps solve the problem of heavy CPU resource consumption in creating, refreshing, and looking up network flow tables, while improving I/O performance and thereby user experience.
The solutions provided in the embodiments of this application have been described above mainly from a method perspective. To implement the above functions, the apparatus includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the example method steps described in connection with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of this application.
In the embodiments of this application, functional modules of the bandwidth allocation apparatus may be divided according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module. It should be noted that the division of modules in the embodiments of this application is illustrative and is merely a logical function division; there may be other division manners in actual implementation.
Fig. 7 is a schematic structural diagram of a bandwidth allocation apparatus according to an embodiment of this application. The bandwidth allocation apparatus 103 may be configured to perform the functions performed by the bandwidth allocation apparatus in any of the above embodiments (e.g., any of the embodiments shown in fig. 4-6). The bandwidth allocation apparatus 103 may include: a storage module 1030, a processing module 1031, and a transceiver module 1032. The storage module 1030 is connected to the processing module 1031 and the transceiver module 1032, respectively. The transceiver module 1032 is configured to implement routing of messages; this module supports two routing modes, address routing and BDF routing. A config_read message or a config_write message sent by the SOC to the network card needs to pass through the transceiver module, and a completion message of the network card's response to the SOC also needs to pass through the transceiver module.
The storage module 1030 comprises N central processing unit (CPU) caches and a system-on-chip (SOC) cache; a CPU cache is used for caching messages sent by a CPU to the bandwidth allocation apparatus, and the SOC cache is used for caching messages sent by the SOC to the bandwidth allocation apparatus; N is a positive integer. The processing module 1031 is configured to acquire the message storage conditions in the N CPU caches and the SOC cache; in connection with fig. 5, the processing module 1031 may be configured to perform S304-S305. The transceiver module 1032 is configured to: if the message storage condition includes that messages are stored in all of M CPU caches of the N CPU caches and messages are stored in the SOC cache, send the messages stored in the M CPU caches to the network card, and after all the messages stored in the M CPU caches are sent, send the messages stored in the SOC cache to the network card, where 1 ≤ M ≤ N and M is an integer; or, if the message storage condition includes that messages are stored in all of M CPU caches of the N CPU caches but no message is stored in the SOC cache, send the messages stored in the M CPU caches to the network card. For example, in conjunction with fig. 5, the transceiver module 1032 may be configured to perform the sending step of S306.
Optionally, when M is greater than or equal to 2, the transceiving module 1032 is specifically configured to: sending the message stored in the mth CPU cache of the M CPU caches to the network card by adopting the mth probability, wherein M is more than or equal to 1 and less than or equal to M; m is an integer; the sum of the first probability to the mth probability is 1. For example, in conjunction with fig. 6, the transceiver module 1032 may be configured to perform the transmitting step of S306.
Optionally, the processing module 1031 is specifically configured to select one CPU cache from among the CPU caches in which the unsent messages are stored among the M CPU caches as a target CPU cache according to a selection algorithm, and the transceiver module 1032 is configured to send the messages in the target CPU cache until the messages in the M CPU caches are completely sent; the selection algorithm is a selection algorithm which enables the probability of sending the message stored in the mth CPU cache to be the mth probability. For example, in conjunction with fig. 6, the transceiver module 1032 may be configured to perform the transmitting step of S306.
Optionally, the storage module 1030 is configured to store a corresponding relationship between the first CPU domain attribute information of each resource of the multiple resources of the first CPU in the M CPUs and the SOC domain attribute information; the resource is a virtual resource or a physical resource allocated to the first CPU by the SOC, and the attribute information includes an address or a bus device function identifier BDF. The processing module 1031 is specifically configured to determine, according to a correspondence between the first CPU domain attribute information of each resource in the multiple resources and the SOC domain attribute information, attribute information in the SOC domain corresponding to the attribute information in the first CPU domain carried in the first message; the first message is any one message in the first CPU cache; and replacing the attribute information of the first CPU domain carried in the first message with the attribute information in the SOC domain corresponding to the attribute information of the first CPU domain carried in the first message to obtain a second message. For example, in conjunction with fig. 5, the processing module 1031 may be used to execute S304. The transceiver module 1032 is configured to send the messages stored in the M CPU caches to the network card, and includes: the transceiver module 1032 is configured to send the second message to the network card. For example, in conjunction with fig. 6, the transceiver module 1032 may be configured to perform the transmitting step of S306.
Optionally, the transceiver module 1032 is configured to receive attribute information of an SOC domain of a first resource sent by the SOC, where the first resource includes a virtual resource or a physical resource allocated to the first CPU. The transceiving module 1032 is configured to receive attribute information of the first CPU domain of the first resource sent by the first CPU. For example, in conjunction with fig. 4, the transceiver module 1032 may be configured to perform the receiving step in S211. The processing module 1031 is configured to establish a corresponding relationship between attribute information of the SOC domain of the first resource and attribute information of the first CPU domain of the first resource. For example, in conjunction with fig. 4, processing module 1031 may be used to perform S206 and S212.
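The correspondence establishment and attribute replacement described above can be sketched as a small translation table; the class name, method names, and the string encoding of addresses/BDFs below are illustrative assumptions, not from the patent.

```python
class AttrTranslator:
    """Per-resource correspondence between CPU-domain attribute
    information (an address or a BDF) and SOC-domain attribute
    information, used to rewrite a first message into a second
    message before it is forwarded to the network card."""

    def __init__(self):
        self.cpu_to_soc = {}

    def register(self, cpu_attr, soc_attr):
        # Establish the correspondence for one virtual or physical resource.
        self.cpu_to_soc[cpu_attr] = soc_attr

    def translate(self, message):
        # Replace the CPU-domain attribute carried in the message with
        # the corresponding SOC-domain attribute; the payload is kept.
        soc_attr = self.cpu_to_soc[message["attr"]]
        return {**message, "attr": soc_attr}
```

In this sketch, `register` corresponds to the processing module building the correspondence from the attribute information received from the SOC and the first CPU, and `translate` corresponds to producing the second message from the first.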
Optionally, if the message storage condition includes that no message is stored in each of the N CPU caches and a message is stored in the SOC cache, the transceiver module 1032 is configured to send the message stored in the SOC cache to the network card. For example, in connection with fig. 6, the transceiver module 1032 may be configured to perform case 8 in S306.
In one example, referring to fig. 3, the transceiver module 1032 and the processing module 1031 described above may each be implemented by the processor 201 in fig. 3 calling a computer program stored in the memory 203. The transceiving module 1032 may comprise the router 205 and the communication interface 204 in fig. 3, where the router 205 may be a virtual switch or a virtual network switch (vSwitch) or a mini switch, and the structure of the router 205 may be a 1up-2dp structure with one upstream port and two downstream ports.
For the detailed description of the above alternative modes, reference is made to the foregoing method embodiments, which are not described herein again. In addition, for any explanation and beneficial effect description of the bandwidth allocation apparatus 103 provided above, reference may be made to the corresponding method embodiment described above, and details are not repeated.
Fig. 8 is a schematic structural diagram of a system on chip SOC according to an embodiment of the present application. The system-on-chip SOC80 may be used to perform the functions performed by the SOC in any of the embodiments described above (e.g., any of the embodiments shown in fig. 4-5). The system-on-chip SOC80 may include: a processing module 801, a storage module 802 and a transceiver module 803. The storage module 802 is connected to the processing module 801 and the transceiver module 803, respectively.
A processing module 801, configured to enumerate virtual resources and physical resources on a network card; acquiring attribute information of the virtual resource and attribute information of the physical resource; the attribute information includes an address of the SOC domain or a bus device function identification BDF of the SOC domain.
A transceiver module 803, configured to send the attribute information of the virtual resources and the physical resources to the bandwidth allocation apparatus, and to send and receive messages. The attribute information of a physical resource is used by the bandwidth allocation apparatus to establish the correspondence between the attribute information of the physical resource in the SOC domain and its attribute information in the CPU domain; the attribute information of a virtual resource is used by the bandwidth allocation apparatus to establish the correspondence between the attribute information of the virtual resource in the SOC domain and its attribute information in the CPU domain. For example, in conjunction with fig. 4, the processing module 801 may be configured to perform S203-S204. The transceiver module 803 may be configured to perform the receiving steps in S202 and S207 and the sending step in S205. In conjunction with fig. 5, the transceiver module 803 may be configured to perform the receiving step in S301 and the sending step in S302.
Optionally, the processing module 801 is also configured to create, refresh, and look up the flow table.
For the detailed description of the above alternative modes, reference is made to the foregoing method embodiments, which are not described herein again. In addition, for any explanation and beneficial effect description of the SOC80 provided above, reference may be made to the corresponding method embodiments described above, and details are not repeated.
It should be noted that the processor described above may be implemented by hardware or may be implemented by software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory. The memory may be integrated within the processor or may be external to the processor and stand alone.
The embodiments of this application further provide a network bandwidth processing system, which includes the CPU, the SOC, the bandwidth allocation apparatus, and the network card provided above. For the steps performed by the bandwidth allocation apparatus, reference may be made to the above, and details are not described herein again.
The embodiments of this application also provide a chip. The chip integrates circuits and one or more interfaces for implementing the functions of the above bandwidth allocation apparatus or SOC. Optionally, the functions supported by the chip may include the processing actions in the embodiments described in fig. 4 and fig. 5, which are not described herein again. Those skilled in the art will appreciate that all or part of the steps for implementing the above embodiments may be completed by a program instructing the associated hardware. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a random access memory, or the like. The processing unit or processor may be a central processing unit, a general-purpose processor, an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
The embodiments of this application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform any one of the methods in the above embodiments. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It should be noted that the above devices for storing computer instructions or computer programs provided in the embodiments of the present application, such as, but not limited to, the above memories, computer readable storage media, communication chips, and the like, are all nonvolatile (non-volatile).
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Although the present application has been described in conjunction with specific features and embodiments thereof, various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application.

Claims (16)

1. A message sending method is characterized by being applied to a cloud network system, wherein the cloud network system comprises N Central Processing Units (CPU), a System On Chip (SOC), a network card and a bandwidth allocation device, and the bandwidth allocation device is respectively connected with the N CPUs, the SOC and the network card; the bandwidth allocation apparatus includes: n CPU caches and SOC caches; a CPU buffer is used for buffering a message sent by the CPU to the bandwidth allocation device, and the SOC buffer is used for buffering a message sent by the SOC to the bandwidth allocation device; the N is a positive integer; the method comprises the following steps:
the bandwidth allocation device acquires message storage conditions in the N CPU caches and the SOC cache;
if the message storage condition includes that messages are stored in M CPU caches in the N CPU caches and messages are stored in the SOC cache, the bandwidth allocation device sends the messages stored in the M CPU caches to the network card; after all the messages stored in the M CPU caches are sent, sending the messages stored in the SOC cache to the network card; wherein M is not less than 1 and not more than N, and M is an integer;
or, if the message storage condition includes that messages are stored in all M of the N CPU caches but no message is stored in the SOC cache, the bandwidth allocation device sends the messages stored in the M CPU caches to the network card.
2. The method of claim 1, wherein M is greater than or equal to 2; the bandwidth allocation device sending the messages stored in the M CPU caches to the network card comprises:
the bandwidth allocation device sends the message stored in the mth CPU cache of the M CPU caches to the network card by adopting the mth probability, wherein M is more than or equal to 1 and less than or equal to M; m is an integer; the sum of the first probability to the mth probability is 1.
3. The method according to claim 2, wherein the bandwidth allocating device sends the message stored in the mth CPU cache of the M CPU caches to the network card with the mth probability, including:
the bandwidth allocation device selects one CPU cache from the CPU caches storing unsent messages in the M CPU caches as a target CPU cache according to a selection algorithm, and sends the messages in the target CPU cache until the messages in the M CPU caches are sent completely; wherein the selection algorithm is a selection algorithm that makes the probability of sending the packet stored in the mth CPU cache the mth probability.
4. The method according to any one of claims 1 to 3, wherein the bandwidth allocation device prestores a correspondence between first CPU domain attribute information and SOC domain attribute information for each of a plurality of resources of a first CPU of the M CPUs; the resource is a virtual resource or a physical resource which is allocated to the first CPU by the SOC, the first CPU domain attribute information comprises an address or bus device function identification BDF, and the SOC domain attribute information comprises an address or bus device function identification BDF; the method further comprises the following steps:
the bandwidth allocation device determines attribute information in the SOC domain corresponding to the attribute information in the first CPU domain carried in the first message according to the corresponding relationship between the first CPU domain attribute information of each resource in the plurality of resources and the SOC domain attribute information; the first message is any one message in a first CPU cache;
the bandwidth allocation device replaces the attribute information of the first CPU domain carried in the first message with the attribute information in the SOC domain corresponding to the attribute information of the first CPU domain carried in the first message to obtain a second message;
the bandwidth allocation device sending the messages stored in the M CPU caches to the network card comprises:
and the bandwidth allocation device sends the second message to the network card.
5. The method of claim 4, further comprising:
the bandwidth allocation device receives attribute information of an SOC domain of a first resource sent by the SOC, wherein the first resource comprises a virtual resource or a physical resource allocated to the first CPU;
the bandwidth allocation device receives attribute information of a first CPU domain of the first resource sent by the first CPU;
the bandwidth allocation device establishes a correspondence between attribute information of the SOC domain of the first resource and attribute information of the first CPU domain of the first resource.
6. The method according to any one of claims 1-3, 5, further comprising:
and if the message storage condition comprises that no message is stored in each CPU cache of the N CPU caches and the message is stored in the SOC cache, sending the message stored in the SOC cache to the network card.
7. The method of claim 4, further comprising:
and if the message storage condition comprises that no message is stored in each CPU cache of the N CPU caches and the message is stored in the SOC cache, sending the message stored in the SOC cache to the network card.
8. A bandwidth allocation apparatus, comprising:
the storage module comprises N Central Processing Unit (CPU) caches and a System On Chip (SOC) cache; a CPU buffer is used for buffering a message sent by the CPU to the bandwidth allocation device, and the SOC buffer is used for buffering a message sent by the SOC to the bandwidth allocation device; the N is a positive integer;
the processing module is used for acquiring message storage conditions in the N CPU caches and the SOC cache;
the receiving and sending module is used for sending the messages stored in the M CPU caches to the network card if the message storage condition comprises that the messages are stored in the M CPU caches in the N CPU caches and the messages are stored in the SOC cache; after all the messages stored in the M CPU caches are sent, sending the messages stored in the SOC cache to the network card; wherein M is not less than 1 and not more than N, and M is an integer; or if the message storage condition includes that messages are stored in all the M CPU caches in the N CPU caches but no message is stored in the SOC cache, sending the messages stored in the M CPU caches to the network card.
9. The apparatus according to claim 8, wherein M is greater than or equal to 2;
the transceiver module is specifically configured to: send the messages stored in the mth CPU cache of the M CPU caches to the network card with the mth probability, wherein m is not less than 1 and not more than M, and m is an integer; and the sum of the first probability to the Mth probability is 1.
10. The bandwidth allocation apparatus according to claim 9,
the processing module is specifically configured to: according to a selection algorithm, select, from among the M CPU caches, one CPU cache that still stores unsent messages as a target CPU cache, the transceiver module sending the messages in the target CPU cache, until all the messages in the M CPU caches have been sent; wherein the selection algorithm is such that the probability of sending the messages stored in the mth CPU cache is the mth probability.
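A minimal sketch of the weighted selection in claims 9 and 10: each draw picks one of the CPU caches that still holds unsent messages, with probability proportional to its configured weight, until all M caches are drained. The names (`weights`, `seed`) are illustrative assumptions, and a hardware arbiter would replace the software random draw.

```python
import random

def drain_weighted(cpu_caches, weights, send_to_nic, seed=0):
    """Repeatedly select a non-empty CPU cache according to its weight
    and forward one message from it, until all M caches are empty."""
    rng = random.Random(seed)
    while any(cpu_caches):
        # Only caches that still store unsent messages are candidates.
        candidates = [i for i, c in enumerate(cpu_caches) if c]
        chosen = rng.choices(candidates,
                             weights=[weights[i] for i in candidates])[0]
        send_to_nic(cpu_caches[chosen].pop(0))
```

Restricting each draw to non-empty caches is what keeps the effective send probabilities summing to 1, as claim 9 requires.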
11. The apparatus according to any one of claims 8 to 10, wherein the storage module is further configured to store a correspondence between first CPU domain attribute information and SOC domain attribute information of each of a plurality of resources of a first CPU of the M CPUs; each resource is a virtual resource or a physical resource allocated to the first CPU by the SOC; the first CPU domain attribute information comprises an address or a bus-device-function (BDF) identifier, and the SOC domain attribute information comprises an address or a BDF identifier;
the processing module is further configured to: determine, according to the correspondence between the first CPU domain attribute information and the SOC domain attribute information of each of the plurality of resources, the SOC domain attribute information corresponding to the first CPU domain attribute information carried in a first message, the first message being any message in a first CPU cache; and replace the first CPU domain attribute information carried in the first message with the corresponding SOC domain attribute information to obtain a second message;
the transceiver module is specifically configured to send the second message to the network card.
12. The bandwidth allocation apparatus according to claim 11,
the transceiver module is further configured to: receive attribute information of the SOC domain of a first resource sent by the SOC, wherein the first resource is a virtual resource or a physical resource allocated to the first CPU; and receive attribute information of the first CPU domain of the first resource sent by the first CPU;
the processing module is further configured to establish a correspondence between the attribute information of the SOC domain of the first resource and the attribute information of the first CPU domain of the first resource.
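The address translation of claims 11 and 12 can be sketched as a lookup table keyed by CPU-domain attribute information, built from the attribute pairs reported by the SOC and the CPU, and applied to each outgoing message. The BDF strings in the example are hypothetical, not values from the patent.

```python
def build_correspondence(pairs):
    """Claim 12 sketch: record, per resource, the correspondence between
    CPU-domain and SOC-domain attribute information (address or BDF)."""
    return {cpu_attr: soc_attr for cpu_attr, soc_attr in pairs}

def translate_message(message, correspondence):
    """Claim 11 sketch: produce the second message by replacing the
    CPU-domain attribute with the matching SOC-domain attribute."""
    second = dict(message)
    second["attr"] = correspondence[message["attr"]]
    return second
```

The translated copy is what gets handed to the network card; the first message in the CPU cache is left unmodified.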
13. The apparatus according to any one of claims 8-10 and 12, wherein the transceiver module is further configured to:
if the message storage condition comprises that no message is stored in any of the N CPU caches and a message is stored in the SOC cache, send the message stored in the SOC cache to the network card.
14. The bandwidth allocation apparatus according to claim 11, wherein the transceiver module is further configured to:
if the message storage condition comprises that no message is stored in any of the N CPU caches and a message is stored in the SOC cache, send the message stored in the SOC cache to the network card.
15. A bandwidth allocation apparatus, comprising: a memory and a processor; the memory is for storing a computer program, and the processor is for invoking the computer program to perform the method of any one of claims 1-7.
16. A computer-readable storage medium, having stored thereon a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 7.
CN201910940648.XA 2019-09-30 2019-09-30 Message sending method and device Active CN110855468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910940648.XA CN110855468B (en) 2019-09-30 2019-09-30 Message sending method and device


Publications (2)

Publication Number Publication Date
CN110855468A CN110855468A (en) 2020-02-28
CN110855468B true CN110855468B (en) 2021-02-23

Family

ID=69597275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910940648.XA Active CN110855468B (en) 2019-09-30 2019-09-30 Message sending method and device

Country Status (1)

Country Link
CN (1) CN110855468B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809236A (en) * 2016-03-08 2016-07-27 武汉梦芯科技有限公司 System on chip supporting SIM card function and method for realizing SIM card function
CN206559386U (en) * 2017-03-22 2017-10-13 山东万腾电子科技有限公司 Embedded industry intelligent gateway and internet of things data acquisition monitoring system based on SoC

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011259062A (en) * 2010-06-07 2011-12-22 Elpida Memory Inc Semiconductor device
CN104038415A (en) * 2014-05-23 2014-09-10 汉柏科技有限公司 Method for batch processing of messages
CN106385379A (en) * 2016-09-14 2017-02-08 杭州迪普科技有限公司 Message caching method and device
US10599442B2 (en) * 2017-03-02 2020-03-24 Qualcomm Incorporated Selectable boot CPU
CN109729059B (en) * 2017-10-31 2020-08-14 华为技术有限公司 Data processing method and device and computer
CN108023829B (en) * 2017-11-14 2021-04-23 东软集团股份有限公司 Message processing method and device, storage medium and electronic equipment


Also Published As

Publication number Publication date
CN110855468A (en) 2020-02-28

Similar Documents

Publication Publication Date Title
US9678918B2 (en) Data processing system and data processing method
JP2021190125A (en) System and method for managing memory resource
US11829309B2 (en) Data forwarding chip and server
US11568092B2 (en) Method of dynamically configuring FPGA and network security device
US20170286149A1 (en) Method for Managing Memory of Virtual Machine, Physical Host, PCIE Device and Configuration Method Thereof, and Migration Management Device
CN113590364B (en) Data processing method and device based on distributed shared memory system
CN105518631B (en) EMS memory management process, device and system and network-on-chip
EP3217616B1 (en) Memory access method and multi-processor system
AU2015402888B2 (en) Computer device and method for reading/writing data by computer device
US11604742B2 (en) Independent central processing unit (CPU) networking using an intermediate device
CN115374046B (en) Multiprocessor data interaction method, device, equipment and storage medium
CN115964319A (en) Data processing method for remote direct memory access and related product
CN112052100A (en) Virtual machine communication method and equipment based on shared memory
EP2913759A1 (en) Memory access processing method based on memory chip interconnection, memory chip, and system
US20240022501A1 (en) Data Packet Sending Method and Device
CN110855468B (en) Message sending method and device
CN114253733B (en) Memory management method, device, computer equipment and storage medium
US20220269411A1 (en) Systems and methods for scalable shared memory among networked devices comprising ip addressable memory blocks
EP3379423A1 (en) Technologies for fine-grained completion tracking of memory buffer accesses
CN111865794A (en) Correlation method, system and equipment of logical port and data transmission system
CN117851289B (en) Page table acquisition method, system, electronic component and electronic device
CN116107937A (en) Data processing method, device, electronic equipment and storage medium
CN116149539A (en) Data reading method and related equipment
CN115576622A (en) Setting method and device of BIOS configuration mode and storage medium
CN115905036A (en) Data access system, method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant