KR101639797B1 - Network interface apparatus and method for processing virtual machine packets - Google Patents

Network interface apparatus and method for processing virtual machine packets

Info

Publication number
KR101639797B1
Authority
KR
South Korea
Prior art keywords
virtual machine
packet
queue
queues
flow
Prior art date
Application number
KR1020150144474A
Other languages
Korean (ko)
Inventor
정기웅
Original Assignee
주식회사 구버넷
정기웅
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 구버넷, 정기웅 filed Critical 주식회사 구버넷
Priority to KR1020150144474A priority Critical patent/KR101639797B1/en
Application granted granted Critical
Publication of KR101639797B1 publication Critical patent/KR101639797B1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6255: Queue scheduling characterised by scheduling criteria for service slots or service orders queue load conditions, e.g. longest queue first
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2483: Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/29: Flow control; Congestion control using a combination of thresholds
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/627: Queue scheduling characterised by scheduling criteria for service slots or service orders policing

Abstract

Disclosed are a network interface apparatus for processing virtual machine packets and a method thereof. The network interface apparatus, which is connected to a server in which a plurality of virtual machines are implemented, comprises a plurality of queues. When it receives virtual machine packets destined for the virtual machines through a physical network, it identifies the virtual machine flow of each packet, stores the packets in the queues in virtual machine flow units, and then processes them in parallel through multiple processors. The present invention thereby provides a network interface apparatus and a packet processing method that increase the efficiency of parallel processing. The network interface apparatus includes at least one processor, the plurality of queues, a packet reception unit, a packet analysis unit, a monitoring unit, a queue management unit, and a scheduler.

Description

[0001] The present invention relates to a network interface apparatus and a method for processing virtual machine packets.

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a network interface device and a packet processing method, and more particularly, to a network interface card capable of processing a virtual machine packet and a virtual machine packet processing method using the same.

Recently, the amount of communication over the Internet has increased rapidly, and the capacity and speed of servers have increased accordingly. At the same time, server virtualization is accelerating in order to contain the growth in physical equipment caused by that capacity increase and to reduce cost. As servers grow in capacity and speed and become virtualized, it is essential to increase the efficiency of parallel processing of the large volume of data, including data packets generated in the virtualization environment, received from the physical network. When the virtual switch function is performed in a virtualization server, a technology is required that transfers the server load caused by the virtual switch function to the physical network interface device.

In the case of a NIC supporting a conventional virtualization environment, there have been attempts to reduce the bottleneck between the network interface device and the virtual switch of the server by creating and managing queues on a per-virtual-machine basis in the physical network interface device. In such conventional approaches, however, processor allocation and queue redistribution for parallel processing of received data packets are performed only in virtual machine units. In other words, processor allocation considers only the physical layer of the virtualization environment. Processor affinity, one of the most important factors for improving efficiency in parallel processing, therefore cannot be taken into account, and processor allocation and queue redistribution take place considering only the usage load of the processors. This can reduce the efficiency of parallel processing.

U.S. Patent Publication No. 2013-0239119

It is an object of the present invention to provide a network interface device and a packet processing method that increase the efficiency of parallel processing by processing packets in units of virtual machine flows, guarantee QoS on a per-virtual-machine-flow basis, and distribute the load of a server in a virtual network environment.

According to an aspect of the present invention, there is provided a network interface device connected to a server in which a plurality of virtual machines are implemented, the network interface device comprising: at least one processor; a plurality of queues, each of the at least one processor being connected to at least one of the queues; a packet receiving unit for receiving a virtual machine packet to be transmitted to a virtual machine through a physical network; a packet analyzer for identifying a virtual machine flow of the virtual machine packet received by the packet receiver; a monitoring unit for monitoring status information including the load on the at least one processor and the plurality of queues; a queue manager for dividing the plurality of queues into a plurality of partitions according to the monitored result, or for dynamically setting the size and number of the plurality of queues based on virtual environment information received from the plurality of virtual machines; and a scheduler for classifying the virtual machine packets into the identified virtual machine flow units and assigning them to corresponding queues.

According to another aspect of the present invention, there is provided a method of processing virtual machine packets for a plurality of virtual machines, the method comprising: receiving a virtual machine packet to be transmitted to one of the plurality of virtual machines via a physical network; identifying a virtual machine flow of the received virtual machine packet; monitoring status information including load distribution for one or more processors and a plurality of queues; dividing the plurality of queues into a plurality of partitions according to the monitored result, or dynamically setting the size and number of the plurality of queues based on virtual environment information received from the plurality of virtual machines; and classifying the virtual machine packet into the identified virtual machine flow unit and assigning it to a corresponding queue.

According to the present invention, the load of a server having a virtualized environment including a plurality of virtual machines is reduced. By processing packets in units of virtual machine flows, the affinity between virtual machine packets and processors is increased, improving the efficiency of parallel processing. The load of the virtual switch can also be distributed to the network interface card to increase the efficiency of virtual network processing. Furthermore, scalable communication processing in which QoS is guaranteed per virtual machine flow between virtual machine endpoints can be implemented by performing queuing and processing in units of virtual machine flows.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of an entire system including a network interface device according to the present invention.
FIG. 2 is a diagram illustrating an example of a method of dynamically setting resources of a network interface apparatus according to the present invention.
FIG. 3 is a diagram illustrating a configuration of an embodiment of a network interface apparatus according to the present invention.
FIG. 4 is a diagram illustrating an example of virtual machine flow-based queue allocation of a network interface apparatus according to the present invention.
FIG. 5 is a diagram illustrating another example of virtual machine flow-based queue allocation of a network interface apparatus according to the present invention.
FIG. 6 is a diagram illustrating an example of a virtual machine packet used in the present invention.
FIG. 7 is a flowchart illustrating an example of a packet processing method for a virtual network environment according to the present invention.

The conventional flow identification method analyzes the traffic attributes of every received packet and classifies packets according to a predetermined network communication policy. For example, a flow can be classified according to a communication policy defined over attributes of a received packet, such as the transmitting node address, the destination address, the session, and the application layer. A typical NIC identifies a flow by analyzing the traffic characteristics of the upper layers of a packet received from the network and processes the identified flows in parallel through multiple processors. The NIC of the present invention, by contrast, identifies flows according to virtualization environment network layer information so that packets generated in a virtual machine network environment, which are encapsulated in normal network packet frames using conventional techniques such as various forms of tunneling, can be delivered efficiently to the destination virtual machine, and it processes the identified flows in parallel through the multiple processors.

The virtualization environment network layer means the network layer formed by virtual machines, and the virtualization environment network layer information means the network layer information of that virtual machine network layer, encapsulated in a physical network frame for transmission. Hereinafter, a packet identified based on the virtualization environment network layer information used in the present embodiment is referred to as a virtual machine packet. The virtual machine packet is recognized in the physical network by the general communication protocol and is encapsulated in the physical network frame so that it can be transmitted smoothly. Also, a flow classified by using the virtualization environment information of the virtual machine packet, that is, the network layer information between the virtual machines, is called a virtual machine flow. The virtual machine flow can be described as a flow of a service endpoint created in a virtual machine within a communication service structure.

Hereinafter, a network interface apparatus and a packet processing method according to the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a diagram showing a schematic structure of an entire system including a network interface device according to the present invention.

Referring to FIG. 1, a network interface device is implemented as a network interface card (NIC) 100. However, the network interface device is not necessarily limited to the network interface card 100, and may be implemented in various forms, such as hardware or software, inside or outside the server. Hereinafter, the network interface device will be referred to as an NIC for convenience of explanation.

The server 120 includes a plurality of virtual machines 150, 152, and 154, a virtual switch 140, and a connection slot 130. The virtual switch 140 transfers the virtual machine packet received via the NIC 100 to the destination virtual machine. The connection slot 130 is an interface for connecting the NIC 100 and the server 120, and may be implemented as a Peripheral Component Interconnect Express (PCIe), for example. In this case, the NIC 100 can be detached and attached to the PCIe slot.

The NIC 100 analyzes the traffic characteristics of the upper layers in the virtualization environment for the virtual machine packets received from the network 110, identifies the virtual machine flows, and processes the identified virtual machine flows in parallel through the multiple processors. A virtual machine flow can be defined as specific traffic in the virtual environment, classified according to traffic attributes of the virtual network frame that remains after the physical network frame of the encapsulating network packet is removed. The virtual machine flow can be classified and identified according to various preset policies. For example, the packet analyzer 310 can identify a TCP flow of a virtual machine as a virtual machine flow. The structure of the virtual machine packet will be described with reference to FIG. 6.

The NIC 100 includes a plurality of queues and a plurality of processors for parallel processing of the received virtual machine packets, and the size and number of the queues are fixed or dynamically changed according to the server virtualization environment.

FIG. 2 is a diagram illustrating an example of a method for dynamically setting resources of a NIC according to the present invention.

Referring to FIGS. 1 and 2, when the NIC 100 is attached to the connection slot 130 of the server 120 and connected to the server 120 (S200), the NIC 100 receives virtual environment information, including the number of virtual machines, from the server 120 (S210). The NIC 100 then dynamically sets resources such as the size and number of queues and the creation of queue groups according to the received virtual environment information (S220).

For example, when the NIC 100 receives the virtual environment information of four virtual machines from the server 120, the NIC may allocate three queues for each virtual machine. The number of queues allocated to each virtual machine and the size of each queue can be variously set according to predetermined rules.
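
As an illustration of this rule, the following sketch (not part of the patent text; all names such as QueueManager and queues_per_vm are hypothetical) shows how a NIC-side manager could size and create per-VM queue groups from the reported virtual environment information (S210 to S220), assuming a fixed rule of three queues per virtual machine.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class QueueManager:
    queues_per_vm: int = 3       # assumed rule: three queues per virtual machine
    queue_size: int = 1024       # assumed default queue depth in packets
    vm_queues: Dict[str, List[list]] = field(default_factory=dict)

    def apply_virtual_env_info(self, vm_ids: List[str]) -> None:
        """Dynamically (re)create a queue group for each reported VM (S220)."""
        for vm_id in vm_ids:
            self.vm_queues[vm_id] = [[] for _ in range(self.queues_per_vm)]


manager = QueueManager()
manager.apply_virtual_env_info(["vm1", "vm2", "vm3", "vm4"])  # 4 VMs -> 12 queues
```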

FIG. 3 is a diagram showing a configuration of an embodiment of the NIC according to the present invention.

Referring to FIG. 3, the NIC 100 includes a packet receiving unit 300, a packet analyzing unit 310, a memory 320, a plurality of queues 330, a plurality of processors 340, a scheduler 350, a monitoring unit 360, and a queue management unit 370. The connection lines between the components, including the packet receiving unit 300, are only one example intended to help understanding of the present invention; other connections, such as a connection between the queue management unit 370 and the monitoring unit 360 or a connection between the scheduler 350 and the plurality of queues 330, may also be established.

The packet receiving unit 300 decapsulates a packet that was encapsulated through various conventional methods, such as tunneling, so that it could be recognized as a general Ethernet frame on the external network: it removes the header part corresponding to the physical network and restores the data packet frame of the virtualized environment.
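
A minimal sketch of this decapsulation step is given below, assuming a VXLAN-like layout (outer Ethernet/IPv4/UDP headers plus an 8-byte tunnel header in front of the inner virtual machine frame); the offsets are assumptions made for illustration and are not fixed by the patent.

```python
OUTER_ETH = 14   # outer Ethernet header (assumed untagged)
OUTER_IP = 20    # outer IPv4 header (assumed no options)
OUTER_UDP = 8    # outer UDP header
TUNNEL_HDR = 8   # tunnel header, e.g. VXLAN (assumed)


def decapsulate(physical_frame: bytes) -> bytes:
    """Remove the header part corresponding to the physical network and
    return the restored inner virtual machine frame."""
    inner_offset = OUTER_ETH + OUTER_IP + OUTER_UDP + TUNNEL_HDR
    return physical_frame[inner_offset:]
```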

The packet analyzer 310 identifies the virtual machine flow of the decapsulated virtual machine packet. To identify the virtual machine flow, not only the data link layer (vL2 layer) of the virtualization environment but also the network layer (vL3 layer) and above must be interpreted. For this, the packet analyzing unit 310 can analyze the decapsulated virtual machine packet from the virtual machine data link layer (vL2 layer) up to the virtual application layer (vL7 layer) through a DPI (Deep Packet Inspection) process. The analysis of the virtual machine packet for identifying the virtual machine flow is not limited to analyzing everything from the virtual data link layer to the virtual application layer; the scope of the analysis may vary according to the virtual machine flow identification policy.
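
The following sketch illustrates one possible flow-identification step. It inspects only the inner (vL2/vL3/vL4) headers and builds a 5-tuple key; a fuller DPI stage could continue up to vL7. The frame layout (untagged inner Ethernet carrying IPv4 with TCP or UDP) and all names are assumptions for this example only.

```python
import struct
from typing import Optional, Tuple

FlowKey = Tuple[str, str, int, int, int]  # src IP, dst IP, src port, dst port, protocol


def identify_vm_flow(vm_frame: bytes) -> Optional[FlowKey]:
    """Extract a 5-tuple flow key from the inner virtual machine frame."""
    if len(vm_frame) < 14 + 20 + 4:
        return None
    ethertype = struct.unpack("!H", vm_frame[12:14])[0]
    if ethertype != 0x0800:                     # IPv4 only in this sketch
        return None
    ip = vm_frame[14:]
    ihl = (ip[0] & 0x0F) * 4
    proto = ip[9]
    src_ip = ".".join(str(b) for b in ip[12:16])
    dst_ip = ".".join(str(b) for b in ip[16:20])
    l4 = ip[ihl:]
    if len(l4) < 4:
        return None
    src_port, dst_port = struct.unpack("!HH", l4[:4])
    return (src_ip, dst_ip, src_port, dst_port, proto)
```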

The memory 320 stores a virtual machine packet, virtual machine flow information identified by the packet analyzer 310, and the like, and also stores and manages a flow table indicating a mapping relationship between the virtual machine flow and the queue.

In one embodiment, the packet receiving unit 300 stores the decapsulated virtual machine packet in the memory 320 and informs the packet analyzing unit 310 that the virtual machine packet has been stored. The packet analyzer 310 then performs virtual machine flow identification for the virtual machine packet stored in the memory. That is, the packet analyzer 310 recognizes the reception of the new virtual machine packet, identifies the virtual machine flow characteristics of that packet according to the preset policy, stores the information, and informs the scheduler 350.

The scheduler 350 allocates the identified virtual machine flows to the corresponding virtual machine flow queues and allocates the virtual machine flow queues to the multiple processors 340 for parallel processing. More specifically, the scheduler 350 refers to the flow table stored in the memory 320 to retrieve the queue mapped to the virtual machine flow, and delivers the virtual machine packet stored in the memory 320 to the retrieved queue. If the virtual machine flow is not present in the table, the scheduler 350 allocates the virtual machine flow to a specific queue through various conventional methods and stores the mapping relationship between the virtual machine flow and the queue in the flow table.
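
A simplified sketch of this scheduling step is shown below: look up the flow in a flow table, fall back to a default choice when the flow is new, and record the new mapping. The round-robin fallback is an assumption; the description further below lists other policies the scheduler may use instead.

```python
from collections import deque
from itertools import cycle


class Scheduler:
    def __init__(self, num_queues: int):
        self.queues = [deque() for _ in range(num_queues)]
        self.flow_table = {}                       # flow key -> queue index
        self._fallback = cycle(range(num_queues))  # assumed default policy

    def enqueue(self, flow_key, packet) -> None:
        """Deliver the packet to the queue mapped to its virtual machine flow."""
        qidx = self.flow_table.get(flow_key)
        if qidx is None:                           # unknown flow: pick a queue
            qidx = next(self._fallback)
            self.flow_table[flow_key] = qidx       # store the new mapping
        self.queues[qidx].append(packet)
```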

The scheduler 350 may queue virtual machine packets in virtual machine flow units for each virtual machine. For example, when establishing the mapping relationship between virtual machine flows and queues, a first flow directed to the first virtual machine and a second flow directed to the second virtual machine that have the same nature (e.g., the same QoS priority) could be mapped to the same queue. Although the present invention does not exclude this case, it is desirable to allocate virtual machine flows to a different group of queues for each virtual machine in order to increase the efficiency of parallel processing. Referring to FIG. 4, the first flow directed to the first virtual machine is allocated, as a virtual machine flow unit, to a queue of the first group 400, and the second flow directed to the second virtual machine is allocated, as a virtual machine flow unit, to a queue of the second group 410.

For example, when a new virtual machine packet is loaded into the memory 320 and the virtual machine flow information of that packet is received, the scheduler 350 refers to the flow table to find the virtual machine flow queue to which the flow is mapped, and loads the virtual machine packet from the memory 320 into the retrieved queue. If information on the identified virtual machine flow cannot be found in the flow table, the scheduler 350 may allocate the virtual machine packet to one of the queues belonging to the destination virtual machine according to a predetermined policy. The predetermined policy may vary according to the embodiment: for example, a policy that selects a virtual machine flow queue considering flow affinity, a policy that selects the queue with the smallest load among the virtual machine queues corresponding to the packet, or a policy that selects a queue allocated to the processor with the lowest utilization rate.
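
The three example policies can be sketched as follows; the load and utilization inputs are assumed to come from the monitoring unit, and the affinity test is a placeholder rather than a definition taken from the patent.

```python
def similar(flow_a, flow_b) -> bool:
    """Placeholder affinity test: flows toward the same destination address."""
    return flow_a[1] == flow_b[1]


def pick_by_flow_affinity(vm_queues, flow_key, flows_in_queue):
    """Prefer a queue that already holds flows similar to this one."""
    for q in vm_queues:
        if any(similar(flow_key, f) for f in flows_in_queue[q]):
            return q
    return vm_queues[0]


def pick_least_loaded_queue(vm_queues, queue_load):
    """Pick the queue with the smallest current load among the VM's queues."""
    return min(vm_queues, key=lambda q: queue_load[q])


def pick_least_utilized_processor_queue(vm_queues, queue_to_cpu, cpu_util):
    """Pick a queue served by the processor with the lowest utilization."""
    return min(vm_queues, key=lambda q: cpu_util[queue_to_cpu[q]])
```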

The plurality of queues 330 are each mapped to at least one virtual machine flow. When queuing is performed in units of virtual machine flows, processor affinity increases, thereby increasing the efficiency of parallel processing. The plurality of queues 330 may be divided into groups each including at least one queue for each virtual machine. The plurality of queues 330 may also be divided into at least two partitions as shown in FIG. 5.

The scheduler 350 may be a processor selected from among the plurality of processors. For example, a specific processor among the processors 340 may be designated as the scheduler 350: the load of each processor may be monitored through the monitoring unit 360, and the processor with the least load may be selected as the scheduler 350. Various other methods for selecting a scheduler can also be applied. When a scheduler is designated, a control unit (not shown) generates an interrupt signal whenever scheduling is required and transmits it to the processor designated as the scheduler. Upon receiving the interrupt signal, that processor suspends its current operation, completes its operation as the scheduler, and then resumes the previous operation.
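
A minimal sketch of this designation rule, assuming the monitoring unit reports one load figure per processor (the names are hypothetical):

```python
def pick_scheduler(processor_loads: dict) -> int:
    """Return the id of the least-loaded processor to act as the scheduler."""
    return min(processor_loads, key=processor_loads.get)


assert pick_scheduler({0: 0.7, 1: 0.2, 2: 0.5}) == 1
```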

The plurality of processors 340 process the virtual machine packets stored in each queue in parallel and transmit them to the virtual machines of the server. Each of the plurality of processors 340 is connected to at least one queue.

For example, the plurality of processors 340 are connected to the queues in consideration of flow affinity. In other words, queues that store virtual machine packets with the same or similar virtual machine flow attributes are grouped together and connected to the same processor.

As another example, a plurality of processors may be connected to the queues on a per-virtual-machine basis. Referring to FIG. 4, the first processor is connected to the first to third queues allocated to the first virtual machine, the second processor is connected to the fourth to sixth queues allocated to the second virtual machine, and the third processor may be coupled to the seventh and eighth queues allocated to the third virtual machine.

As yet another example, the first processor may be connected to the first to third queues allocated to the first virtual machine together with the fourth queue allocated to the second virtual machine, while the second processor is connected to the remaining queues allocated to the second virtual machine. That is, a processor may be coupled to all or some of the queues allocated to at least two virtual machines.

The monitoring unit 360 monitors various states including the load of the processors 340 and the queues 330.

The queue management unit 370 divides the queues into a plurality of partitions, as shown in FIG. 5, according to the monitoring result, so that each partition is scheduled separately; it may also combine a plurality of queues into one, divide a queue, or change the number of queues allocated to a virtual machine, thereby adjusting the size and number of queues. The queue manager can also dynamically set the number and size of queues for each virtual machine according to the virtualization environment of the server.

FIG. 4 is a diagram illustrating an example of virtual machine flow-based queue allocation of a NIC according to the present invention.

Referring to FIG. 4, the queues 330 are divided by virtual machine. For example, the first to third queues 400 are allocated to the first virtual machine, the fourth to sixth queues are allocated to the second virtual machine, and the seventh and eighth queues are allocated to the third virtual machine. The scheduler performs queuing by referring to the virtual machine flow for each virtual machine.

For example, when the virtual machine flows directed to the first virtual machine are identified according to priority, the scheduler classifies and stores the virtual machine packets in the first to third queues 400 allocated to the first virtual machine based on that priority.

FIG. 5 is a diagram illustrating another example of virtual machine flow-based queue allocation of a NIC according to the present invention.

Referring to FIG. 5, the queues 330 are divided into at least two partitions 520 and 530. Schedulers 500 and 510 are allocated to the partitions 520 and 530, respectively. For example, a first scheduler 500 is assigned to the first partition 520, and a second scheduler 510 is assigned to the second partition 530. Each of the schedulers 500 and 510 independently performs scheduling for its allocated partition in parallel. As described above, each scheduler may be a processor selected by a predetermined method from among the plurality of processors 340.

For example, while scheduling is performed by one scheduler as shown in FIG. 3, redistribution of the queues or reallocation of processors may be determined when the load distribution of the queues measured by the monitoring unit falls below a predetermined threshold value. Alternatively, redistribution of the queues or processor reallocation may be determined when the load of the processors, calculated from the statistical amount of virtual machine packets received from the network and the processing capability of all the processors in the NIC, falls below a certain threshold. When the queues are redistributed or the processors are reassigned, if the queues are divided into a plurality of partitions as shown in FIG. 5 and an additional scheduler needs to be designated, the processor with the least load can be designated as the additional scheduler.
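
One possible, simplified reading of this trigger is sketched below: redistribute queues or reassign processors when the measured queue-load distribution, or the processor load estimated from the received-packet rate and the total processing capability, falls to or below a threshold. The metrics and threshold values are assumptions made only for this sketch.

```python
def needs_redistribution(queue_loads, rx_packet_rate, total_cpu_capacity,
                         queue_threshold=0.3, cpu_threshold=0.3) -> bool:
    """Decide whether queue redistribution / processor reallocation is needed."""
    avg_queue_load = sum(queue_loads) / len(queue_loads)
    cpu_load = rx_packet_rate / total_cpu_capacity
    return avg_queue_load <= queue_threshold or cpu_load <= cpu_threshold
```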

The queues belonging to each partition can be grouped (540) by virtual machine, and the queues within a group 540 can be classified by virtual machine flow. In this case, a hierarchical structure of partition, virtual machine group, and per-flow queues within each group is created. The plurality of queues are thus divided into a plurality of queue groups each including at least one queue per virtual machine, and for each virtual machine packet the scheduler selects one of the plurality of queue groups based on the destination virtual machine of the packet and then allocates a queue within the selected queue group based on the virtual machine flow.
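
The hierarchy described above (partition, then per-VM queue group, then per-flow queue) could be represented as in the following sketch; the container shapes are assumptions for illustration only.

```python
from collections import defaultdict, deque

# partition id -> destination VM id -> virtual machine flow key -> queue
hierarchy = defaultdict(lambda: defaultdict(lambda: defaultdict(deque)))


def enqueue_hierarchical(partition_id, dst_vm, flow_key, packet) -> None:
    """Select the queue group by destination VM, then the queue by flow."""
    hierarchy[partition_id][dst_vm][flow_key].append(packet)
```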

FIG. 6 is a diagram showing an example of a virtual machine packet used in the present invention.

Referring to FIG. 6, a virtual machine packet includes a physical network frame 610, a tunneling field 620, a virtual network frame 630, and a data field 600.

The physical network frame 610 includes information indicating a layer of a conventional physical network such as L2, IP, and TCP. The tunneling field 620 indicates tunneling information and the like. The virtual network frame 630 includes information about each layer (vL2 to vL7, etc.) in the virtual network environment. The data field 600 includes data.
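
A hypothetical in-memory representation of this layout is shown below; the field names and types are illustrative only and are not fixed by the patent.

```python
from dataclasses import dataclass


@dataclass
class VirtualMachinePacket:
    physical_frame: bytes  # 610: outer physical network headers (L2, IP, TCP, etc.)
    tunneling: bytes       # 620: tunneling information
    virtual_frame: bytes   # 630: inner virtual network layers (vL2 to vL7)
    data: bytes            # 600: payload data
```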

The structure of the virtual machine packet shown in FIG. 6 is only one example for facilitating understanding of the present invention, and the present invention is not limited thereto, and various structures of virtual machine packets for a virtual network environment can be defined and used.

Also, the structure of the virtual machine packet stored in the memory and the structure of the virtual machine packet stored in the queue may be the same or different depending on the embodiment. For example, various design changes can be made, such as converting the virtual machine packet of FIG. 6 received from the network into an optimal structure that can be processed in the virtualization environment, or deleting some or all of the fields of the virtual machine packet that are unnecessary in the virtualization environment before storing it in a queue.

FIG. 7 is a flowchart illustrating an example of a packet processing method for a virtual machine network environment according to the present invention.

Referring to FIG. 7, when a virtual machine packet is received (S700), the network device of the present invention analyzes the virtual machine packet through a DPI process or the like to identify the destination virtual machine and the virtual machine flow of the packet (S710). The network device stores the virtual machine packet, on a virtual machine flow basis, in the corresponding queue among the at least one queue assigned to each virtual machine (S720). Then, the network device processes the virtual machine packets stored in each queue through the plurality of processors and transmits them to the virtual machines (S730).
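
The overall flow of FIG. 7 can be summarized in the short skeleton below. The decapsulate, identify_flow, and enqueue callables stand for the hypothetical helpers sketched earlier (or any equivalent implementations); they are passed as parameters so the skeleton stays self-contained.

```python
def process_received_frame(frame, decapsulate, identify_flow, enqueue) -> None:
    vm_frame = decapsulate(frame)           # S700: receive, strip the physical header
    flow_key = identify_flow(vm_frame)      # S710: identify the virtual machine flow
    if flow_key is not None:
        enqueue(flow_key, vm_frame)         # S720: queue per virtual machine flow
    # S730: each processor then drains its assigned queues in parallel and
    # delivers the packets to the destination virtual machine (not shown).
```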

The present invention can also be embodied as computer-readable codes on a computer-readable recording medium. A computer-readable recording medium includes all kinds of recording apparatuses in which data that can be read by a computer system is stored. Examples of the computer-readable recording medium include various types of ROM, RAM, CDROM, magnetic tape, floppy disk, optical data storage, and the like. The computer-readable recording medium may also be distributed over a networked computer system so that computer readable code can be stored and executed in a distributed manner.

The present invention has been described with reference to the preferred embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

100: network interface card 110: network
120: server 130: connection slot
140: Virtual Switch 150: Virtual Machine
300: packet receiver 310: packet analyzer
320: memory 330: queue
340: Processor 350: Scheduler
360: Monitoring section 370: Queue management section
600: Data 610: Physical network frame
620: Tunneling 630: Virtual network frame

Claims (16)

A network interface device connected to a server in which a plurality of virtual machines are implemented,
One or more processors;
A plurality of queues, each of the one or more processors being connected to at least one of the queues;
A packet receiving unit for receiving a virtual machine packet to be transmitted to a virtual machine through a physical network; And
And a packet analyzer for identifying a virtual machine flow of the virtual machine packet received from the packet receiver,
Wherein the virtual machine packet is encapsulated in a physical network frame to include traffic information in a virtual machine network environment,
Wherein the packet receiver decapsulates the physical network packet in which the virtual machine packet is encapsulated to remove the header part corresponding to the physical network and restore the virtual machine network frame, and the virtual machine flow is identified based on the virtual machine network layer information.
The method according to claim 1,
And a scheduler for dividing the virtual machine packets into the identified virtual machine flow units and assigning the virtual machine packets to corresponding queues.
The method according to claim 1,
Further comprising a queue manager for dividing the plurality of queues into a plurality of partitions according to state information including the load on the at least one processor and the plurality of queues, or for dynamically setting the size and number of the plurality of queues based on virtual network environment information received from the plurality of virtual machines.
delete
3. The method of claim 2,
Further comprising a flow table storing a mapping relationship between the virtual machine flow identified by the packet analysis unit and the plurality of queues,
Wherein the scheduler refers to the flow table for the received packet, and if there is no information about the virtual machine flow of the received packet identified by the packet analyzing unit and its mapped queue, the scheduler allocates the packet to one of the queues allocated to the destination virtual machine and updates the flow table.
3. The method of claim 2,
Wherein the network interface device further comprises a monitoring unit for monitoring status information including the load on the one or more processors and the plurality of queues,
Wherein, when the load distribution of the queues measured by the monitoring unit is less than or equal to a predetermined threshold value, or when the load of the processors, calculated from the quantity of virtual machine packets received from the network and the processing capability of all the processors in the network interface apparatus, is less than or equal to a predetermined threshold value, the scheduler redistributes the queues or reallocates the processors.
The method according to claim 6,
Wherein the scheduler selects a processor with the smallest load among the plurality of processors identified by the monitoring unit as an additional scheduler, when a redistribution of a queue or an additional scheduler designation is required at the time of processor reallocation,
Wherein the selected processor halts an ongoing task and performs the ongoing task again after completing the operation as a scheduler.
3. The method of claim 2,
Wherein the plurality of queues are divided into a plurality of queue groups including at least one queue for each virtual machine, and wherein, for a virtual machine packet, the scheduler selects one of the plurality of queue groups based on the destination virtual machine of the virtual machine packet and allocates a queue in the selected queue group based on the virtual machine flow.
A method for processing virtual machine packets for a plurality of virtual machines using a network interface device including one or more processors and a plurality of queues, each of the one or more processors being connected to at least one of the queues, the method comprising:
Receiving a virtual machine packet to be transmitted to a plurality of virtual machines via a physical network; And
Identifying a virtual machine flow of the received virtual machine packet,
Wherein the virtual machine packet is encapsulated in a physical network frame to include traffic information in a virtual machine network environment,
Wherein the step of receiving the virtual machine packet comprises, upon receiving the encapsulated physical network packet, decapsulating it to remove the header part corresponding to the physical network and restore the virtual machine network frame, and wherein the virtual machine flow is identified based on the virtual machine network layer information.
10. The method of claim 9,
Further comprising classifying the virtual machine packet into the identified virtual machine flow unit and assigning it to the corresponding queue.
10. The method of claim 9,
Further comprising dividing the plurality of queues into a plurality of partitions according to state information including the load on the one or more processors and the plurality of queues, or dynamically setting the size and number of the plurality of queues based on virtual network environment information received from the plurality of virtual machines.
delete
11. The method of claim 10,
Wherein the step of identifying the virtual machine flow generates a flow table storing a mapping relationship between the identified virtual machine flow and the plurality of queues,
Wherein the step of allocating the virtual machine packet to a queue refers to the flow table for the received packet, and if there is no information about the virtual machine flow of the received packet and its mapped queue, allocates the packet to one of the queues allocated to the destination virtual machine and updates the flow table.
11. The method of claim 10,
Further comprising the step of monitoring status information including the load on the one or more processors and the plurality of queues between identifying the virtual machine flow and allocating the virtual machine packet to a queue,
Wherein, when the load distribution of the monitored queues is less than or equal to a predetermined threshold value, or when the load of the processors, calculated from the amount of virtual machine packets received from the network and the processing capability of all the processors in the network interface device, is less than or equal to a predetermined threshold value, the queues are redistributed or the processors are reallocated.
15. The method of claim 14,
Wherein the scheduler selects a processor with the smallest load among the plurality of processors identified in the monitoring step as an additional scheduler when reallocation of a queue or designation of an additional scheduler at the time of processor reallocation is required in the monitoring step,
Wherein the selected processor halts an ongoing task and performs the ongoing task again after completing the operation as a scheduler.
11. The method of claim 10,
Wherein the plurality of queues are divided into a plurality of queue groups including at least one queue for each virtual machine, one of the plurality of queue groups is selected based on the destination virtual machine of the virtual machine packet, and a queue in the selected queue group is allocated based on the virtual machine flow.
KR1020150144474A 2015-10-16 2015-10-16 Network interface apparatus and method for processing virtual machine packets KR101639797B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150144474A KR101639797B1 (en) 2015-10-16 2015-10-16 Network interface apparatus and method for processing virtual machine packets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150144474A KR101639797B1 (en) 2015-10-16 2015-10-16 Network interface apparatus and method for processing virtual machine packets

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020160086043A Division KR101773528B1 (en) 2016-07-07 2016-07-07 Network interface apparatus and method for processing virtual machine packets

Publications (1)

Publication Number Publication Date
KR101639797B1 true KR101639797B1 (en) 2016-07-14

Family

ID=56499369

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150144474A KR101639797B1 (en) 2015-10-16 2015-10-16 Network interface apparatus and method for processing virtual machine packets

Country Status (1)

Country Link
KR (1) KR101639797B1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180107706A (en) * 2017-03-22 2018-10-02 정기웅 Method and apparatus for processing packet using multi-core in hierarchical networks
KR101998625B1 (en) * 2018-01-24 2019-07-11 주식회사 오픈시스넷 Load balancing method of session cluster
KR20200044642A (en) * 2018-10-19 2020-04-29 주식회사 구버넷 Packet processing method and apparatus in multi-layered network environment
KR20200082133A (en) * 2018-12-28 2020-07-08 주식회사 에프아이시스 Device and Method for Data Transmission and QoS Guarantee of Virtual Machines in Multicore-based Network Interface Card
US10992601B2 (en) 2018-10-19 2021-04-27 Gubernet Inc. Packet processing method and apparatus in multi-layered network environment
CN113141312A (en) * 2020-01-20 2021-07-20 浙江宇视科技有限公司 Data processing method, device, system, electronic equipment and storage medium
WO2022181903A1 (en) * 2021-02-26 2022-09-01 엘지전자 주식회사 Vehicular display device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130239119A1 (en) 2012-03-09 2013-09-12 Microsoft Corporation Dynamic Processor Mapping for Virtual Machine Network Traffic Queues
KR20140081871A (en) * 2011-12-23 2014-07-01 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 Optimization of resource utilization in a collection of devices
KR20140106912A (en) * 2013-02-27 2014-09-04 주식회사 시큐아이 Apparatus and method for processing packet

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140081871A (en) * 2011-12-23 2014-07-01 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 Optimization of resource utilization in a collection of devices
US20130239119A1 (en) 2012-03-09 2013-09-12 Microsoft Corporation Dynamic Processor Mapping for Virtual Machine Network Traffic Queues
KR20140106912A (en) * 2013-02-27 2014-09-04 주식회사 시큐아이 Apparatus and method for processing packet

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180107706A (en) * 2017-03-22 2018-10-02 정기웅 Method and apparatus for processing packet using multi-core in hierarchical networks
KR102091152B1 (en) * 2017-03-22 2020-03-19 정기웅 Method and apparatus for processing packet using multi-core in hierarchical networks
KR101998625B1 (en) * 2018-01-24 2019-07-11 주식회사 오픈시스넷 Load balancing method of session cluster
KR20200044642A (en) * 2018-10-19 2020-04-29 주식회사 구버넷 Packet processing method and apparatus in multi-layered network environment
KR102112270B1 (en) * 2018-10-19 2020-05-19 주식회사 구버넷 Packet processing method and apparatus in multi-layered network environment
US10992601B2 (en) 2018-10-19 2021-04-27 Gubernet Inc. Packet processing method and apparatus in multi-layered network environment
KR20200082133A (en) * 2018-12-28 2020-07-08 주식회사 에프아이시스 Device and Method for Data Transmission and QoS Guarantee of Virtual Machines in Multicore-based Network Interface Card
KR102145183B1 (en) * 2018-12-28 2020-08-18 주식회사 에프아이시스 Device and Method for Data Transmission and QoS Guarantee of Virtual Machines in Multicore-based Network Interface Card
CN113141312A (en) * 2020-01-20 2021-07-20 浙江宇视科技有限公司 Data processing method, device, system, electronic equipment and storage medium
WO2022181903A1 (en) * 2021-02-26 2022-09-01 엘지전자 주식회사 Vehicular display device
WO2022181899A1 (en) * 2021-02-26 2022-09-01 엘지전자 주식회사 Signal processing device and vehicle display device having same

Similar Documents

Publication Publication Date Title
KR101583325B1 (en) Network interface apparatus and method for processing virtual packets
KR101639797B1 (en) Network interface apparatus and method for processing virtual machine packets
CN108337188B (en) Traffic and load aware dynamic queue management
US7460558B2 (en) System and method for connection capacity reassignment in a multi-tier data processing system network
US7512706B2 (en) Method, computer program product, and data processing system for data queuing prioritization in a multi-tiered network
JP2022532730A (en) Quality of service in virtual service networks
CN105119993B (en) Virtual machine deployment method and device
EP3286966A1 (en) Resource reallocation
US20140122743A1 (en) Shared interface among multiple compute units
US10560385B2 (en) Method and system for controlling network data traffic in a hierarchical system
US10382344B2 (en) Generating and/or receiving at least one packet to facilitate, at least in part, network path establishment
US20220318071A1 (en) Load balancing method and related device
KR101953546B1 (en) Apparatus and method for virtual switching
US9584446B2 (en) Memory buffer management method and system having multiple receive ring buffers
KR20180134219A (en) The method for processing virtual packets and apparatus therefore
CN109076027B (en) Network service request
KR101773528B1 (en) Network interface apparatus and method for processing virtual machine packets
US10568112B1 (en) Packet processing in a software defined datacenter based on priorities of virtual end points
KR102091152B1 (en) Method and apparatus for processing packet using multi-core in hierarchical networks
WO2015199366A1 (en) Method for scheduling in multiprocessing environment and device therefor
KR20190069032A (en) The method for identifying virtual flow and apparatus therefore
KR102112270B1 (en) Packet processing method and apparatus in multi-layered network environment
KR20150114911A (en) Scheduling method and apparatus in multi-processing environment
US10992601B2 (en) Packet processing method and apparatus in multi-layered network environment
JP5803655B2 (en) Program for data transfer, information processing method and transfer device, program for managing assignment of transfer processing, information processing method and management device, and information processing system

Legal Events

Date Code Title Description
A107 Divisional application of patent
GRNT Written decision to grant