CN110166373A - Method, apparatus, medium and system for sending data from a source physical machine to a destination physical machine - Google Patents
Method, apparatus, medium and system for sending data from a source physical machine to a destination physical machine
- Publication number
- CN110166373A CN201910425941.2A CN201910425941A
- Authority
- CN
- China
- Prior art keywords
- physical machine
- central processing
- network interface
- processing unit
- source
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4633—Interconnection of networks using encapsulation techniques, e.g. tunneling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/125—Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Computer And Data Communications (AREA)
Abstract
The present invention provides a method, an apparatus, a storage medium and a system for sending data from a source physical machine to a destination physical machine, where the source physical machine has multiple central processing units. The method comprises: an encapsulation step, in which data processed by a corresponding one of the multiple central processing units is subjected to Generic Routing Encapsulation, forming a tunnel for transmitting the data; a judgment step, in which it is judged whether a network interface of the source physical machine for the tunnel is cached in the corresponding central processing unit, and if so the method proceeds to a sending step, otherwise to a determination step; the determination step, in which the network interface is determined from the source Internet Protocol address of the tunnel, the destination Internet Protocol address of the tunnel, and the identifier of the corresponding central processing unit; a caching step, in which the network interface is cached in the corresponding central processing unit before proceeding to the sending step; and the sending step, in which the encapsulated data is sent to the destination physical machine via the network interface. With the present invention, load balancing of data transmission can be achieved while the performance of data transmission is improved.
Description
Technical field
The present invention relates to methods of communication between multiple physical machines, and more particularly to a method, apparatus, medium and system for sending data from a source physical machine to a destination physical machine.
Background art
In a cloud computing environment, users of physical machines transmit data to one another through tunnels, and the physical machines are isolated by those tunnels, because each tunnel is established between exactly one pair of physical machines. All data transmitted by a user travels inside a tunnel.
Because the user network is critical, dual-uplink disaster recovery backup based on Equal-Cost Multipath Routing (ECMP) has become an indispensable safeguard. A user's data (packets) can be sent to the destination physical machine through either of the two network interfaces of the source physical machine; when one interface fails, the data can be sent through the other.
When both network interfaces are healthy, a network interface must be selected for the tunnel established between a given source physical machine and a given destination physical machine. Specifically, a hash is computed over the tunnel's source IP (that is, the source physical machine's IP) and destination IP (that is, the destination physical machine's IP). However, for a specified source physical machine and destination physical machine, the tunnel between them is fixed — that is, the tunnel's source IP and destination IP are fixed — so the hash always yields the same network interface. Therefore, even though the source physical machine has two network interfaces, data transmission between the specified source and destination physical machines cannot be load-balanced.
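The shortcoming described above can be illustrated with a minimal sketch, in which a classic ECMP selector hashes only the tunnel's fixed endpoints; the function name and the choice of hash are illustrative assumptions, not from the patent:

```python
import hashlib

def select_nic_classic(src_ip: str, dst_ip: str, num_nics: int) -> int:
    """Classic ECMP selection: hash only the tunnel's fixed source and
    destination IPs, then pick an interface by modulo."""
    digest = hashlib.sha256(f"{src_ip}-{dst_ip}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_nics

# The tunnel endpoints never change, so every packet on the tunnel
# hashes to the same interface and one NIC carries all the traffic.
choices = {select_nic_classic("10.0.0.1", "10.0.0.2", 2) for _ in range(100)}
print(len(choices))  # 1 — only one interface is ever chosen
```

Because the inputs to the hash are constant for a given tunnel, the second interface sits idle, which is exactly the load-balancing failure the invention addresses.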
Summary of the invention
To solve the existing problems, the present invention provides a method for sending data from a source physical machine to a destination physical machine, where the source physical machine has multiple central processing units. The method comprises:
an encapsulation step, in which data processed by a corresponding one of the multiple central processing units is subjected to generic routing encapsulation, forming a tunnel for transmitting the data;
a judgment step, in which it is judged whether a network interface of the source physical machine for the tunnel is cached in the corresponding central processing unit; if so, the method proceeds to the sending step, otherwise to the determination step;
the determination step, in which the network interface is determined from the source Internet Protocol address of the tunnel, the destination Internet Protocol address of the tunnel and the identifier of the corresponding central processing unit;
a caching step, in which the network interface is cached in the corresponding central processing unit before proceeding to the sending step; and
the sending step, in which the encapsulated data is sent to the destination physical machine via the network interface.
Wherein the determination step further comprises:
a finding step, in which the multiple network interfaces of the source physical machine are found using equal-cost multipath routing; and
a calculation step, in which a hash is computed over the source Internet Protocol address, the destination Internet Protocol address and the identifier of the corresponding central processing unit, to determine the network interface among the multiple network interfaces.
Wherein the source Internet Protocol address is the Internet Protocol address of the source physical machine, and the destination Internet Protocol address is the Internet Protocol address of the destination physical machine.
Wherein different tunnels can be formed between the source physical machine and different destination physical machines, and each of the multiple central processing units can cache different network interfaces for the different tunnels.
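The determination step above can be sketched as follows; the hash function and the way the CPU identifier is mixed into the key are assumptions for illustration, since the patent does not fix a particular hash:

```python
import hashlib

def select_nic(src_ip: str, dst_ip: str, cpu_id: int, num_nics: int) -> int:
    """Hash the tunnel endpoints *and* the identifier of the CPU that
    processed the data, so that different CPUs handling the same tunnel
    can map to different network interfaces."""
    key = f"{src_ip}-{dst_ip}-{cpu_id}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_nics

# Same tunnel (fixed IPs), different CPUs: the selections can now spread
# across the interfaces instead of collapsing onto a single one.
nics = [select_nic("10.0.0.1", "10.0.0.2", cpu, 2) for cpu in range(8)]
print(nics)
```

The design choice is that the CPU identifier is the only varying hash input, so a given (tunnel, CPU) pair always maps to the same interface — which is what makes the per-CPU caching in the later steps valid.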
The present invention also provides an apparatus for sending data from a source physical machine to a destination physical machine, where the source physical machine has multiple central processing units. The apparatus comprises:
an encapsulation unit, which subjects data processed by a corresponding one of the multiple central processing units to generic routing encapsulation, forming a tunnel for transmitting the data;
a judging unit, which judges whether a network interface of the source physical machine for the tunnel is cached in the corresponding central processing unit;
a determination unit, which, when the judging unit judges that no network interface is cached in the corresponding central processing unit, determines the network interface from the source Internet Protocol address of the tunnel, the destination Internet Protocol address of the tunnel and the identifier of the corresponding central processing unit;
a cache unit, which caches the network interface determined by the determination unit in the corresponding central processing unit; and
a sending unit, which sends the encapsulated data to the destination physical machine via the network interface when the judging unit judges that the network interface is cached in the corresponding central processing unit, or after the cache unit has cached the network interface.
Wherein the determination unit further comprises:
a searching unit, which finds the multiple network interfaces of the source physical machine using equal-cost multipath routing; and
a computing unit, which computes a hash over the source Internet Protocol address, the destination Internet Protocol address and the identifier of the corresponding central processing unit, to determine the network interface among the multiple network interfaces.
Wherein the source Internet Protocol address is the Internet Protocol address of the source physical machine, and the destination Internet Protocol address is the Internet Protocol address of the destination physical machine.
Wherein different tunnels can be formed between the source physical machine and different destination physical machines, and each of the multiple central processing units can cache different network interfaces for the different tunnels.
The present invention also provides a computer-readable storage medium storing instructions which, when executed, cause a computer to perform the method of sending data from a source physical machine to a destination physical machine. The instructions comprise:
an encapsulation instruction, which subjects data processed by a corresponding one of multiple central processing units to generic routing encapsulation, forming a tunnel for transmitting the data;
a decision instruction, which judges whether a network interface of the source physical machine for the tunnel is cached in the corresponding central processing unit;
a determination instruction, which, when no network interface is cached in the corresponding central processing unit, determines the network interface from the source Internet Protocol address of the tunnel, the destination Internet Protocol address of the tunnel and the identifier of the corresponding central processing unit;
a cache instruction, which caches the determined network interface in the corresponding central processing unit; and
a sending instruction, which sends the encapsulated data to the destination physical machine via the network interface when the network interface is cached in the corresponding central processing unit, or after the network interface has been cached.
The present invention also provides a system comprising:
a memory for storing instructions to be executed by one or more processors of the system, and
a processor, being one of the processors of the system, for executing the above method of sending data from a source physical machine to a destination physical machine.
With the present invention, load balancing of data transmission can be achieved while the performance of data transmission is improved.
Brief description of the drawings
Fig. 1 shows a block diagram of a system for sending data from a source physical machine to a destination physical machine according to an embodiment of the present invention;
Fig. 2 shows a flow chart of a method of sending data from a source physical machine to a destination physical machine according to an embodiment of the present invention;
Fig. 3 shows a flow chart of the determination step in Fig. 2;
Fig. 4 shows a structural diagram of an apparatus by which a source physical machine sends data to a destination physical machine according to an embodiment of the present invention;
Fig. 5 shows a structural diagram of the determination unit in Fig. 4;
Fig. 6 shows a structural diagram of a communication system comprising the apparatus shown in Fig. 4.
Detailed description of the embodiments
The embodiments of the present invention are illustrated below by specific examples, from which those skilled in the art can readily understand other advantages and effects of the present invention as disclosed in this specification. Although the description of the present invention is presented in conjunction with preferred embodiments, this does not mean that the features of the invention are limited to those embodiments. On the contrary, the purpose of presenting the invention together with embodiments is to cover other alternatives or modifications that may be derived from the claims of the invention. The following description contains many specific details in order to provide a thorough understanding of the present invention; the present invention may, however, also be practiced without these details. In addition, some details are omitted from the description so as not to confuse or obscure the focus of the invention. It should be noted that, where no conflict arises, the embodiments of the present invention and the features in the embodiments may be combined with one another.
It should be noted that, in this specification, similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings.
To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the drawings.
The embodiments provided by the present invention can be executed in a mobile terminal, a computer terminal, a similar computing device (such as an ECU (Electronic Control Unit)), or a system. Taking execution in a system as an example, Fig. 1 is a hardware block diagram of a system for sending data from a source physical machine to a destination physical machine according to an embodiment of the present invention. As shown in Fig. 1, the system 100 may include one or more processors 101 (only one is shown in the figure; the processor 101 may include, but is not limited to, a processing unit such as a central processing unit (CPU), a graphics processor (GPU), a digital signal processor (DSP), a microcontroller (MCU) or a programmable logic device (FPGA)), an input/output interface 102 for interacting with a user, a memory 103 for storing data, and a transmission device 104 for communication functions. Those skilled in the art will appreciate that the structure shown in Fig. 1 is merely illustrative and does not limit the structure of the above electronic device. For example, the system 100 may also include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1.
The input/output interface 102 can be connected to one or more displays, touch screens, etc., for displaying data transmitted from the system 100, and can also be connected to a keyboard, a stylus, a touch pad and/or a mouse, etc., for inputting user instructions such as select, create and edit.
The memory 103 can be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the method of sending data from a source physical machine to a destination physical machine in an embodiment of the present invention. By running the software programs and modules stored in the memory 103, the processor 101 executes various functional applications and data processing, that is, realizes the above method of sending data from a source physical machine to a destination physical machine. The memory 103 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memories or other non-volatile solid-state memories. In some examples, the memory 103 may further include memories remotely located relative to the processor 101, and these remote memories may be connected to the system 100 through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The transmission device 104 is used to receive or send data via a network. A specific example of the above network may include the Internet provided by the communication provider of the system 100. Under the above operating environment, the present invention provides a method for sending data from a source physical machine to a destination physical machine.
Fig. 2 shows a flow chart of a method of sending data from a source physical machine to a destination physical machine according to an embodiment of the present invention, and Fig. 4 shows a structural diagram of an apparatus 40 for sending data from a source physical machine to a destination physical machine according to an embodiment of the present invention. The apparatus 40 executes the method flow shown in Fig. 2 and includes an encapsulation unit 41, a judging unit 42, a determination unit 43, a cache unit 44 and a sending unit 45.
The embodiments of the present invention are described in detail below with reference to the drawings.
Fig. 6 shows a structural diagram of a communication system comprising the apparatus 40 shown in Fig. 4 according to an embodiment of the present invention. For example, the apparatus 40 is installed in a source physical machine 61, and the source physical machine 61 has multiple CPUs (central processing units, not shown).
As shown in Fig. 2, in an encapsulation step S21, the encapsulation unit 41 performs Generic Routing Encapsulation (GRE) on data processed by a corresponding one of the multiple CPUs, forming a tunnel for transmitting the data.
In this example, the corresponding CPU is, for example, CPU1; the encapsulation unit 41 performs GRE encapsulation on the data processed by CPU1, forming a tunnel A between the source physical machine 61 and a destination physical machine 62.
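To illustrate what the encapsulation step produces, the following sketch prepends a minimal GRE header — per RFC 2784, a 2-byte flags/version field followed by a 2-byte protocol type — to an inner payload. The field values are the standard ones, but the helper function itself is an illustration, not the patent's implementation:

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType identifying the encapsulated payload

def gre_encapsulate(payload: bytes, proto: int = GRE_PROTO_IPV4) -> bytes:
    """Prepend a minimal RFC 2784 GRE header: no checksum, key or
    sequence number; version 0; then the 16-bit payload protocol type."""
    flags_and_version = 0x0000
    header = struct.pack("!HH", flags_and_version, proto)
    return header + payload

packet = gre_encapsulate(b"inner-ip-packet")
print(len(packet))  # 4-byte GRE header + 15-byte payload = 19
```

In the invention, this GRE-encapsulated packet is what travels through the tunnel between the source and destination physical machines.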
In a judgment step S22, the judging unit 42 judges whether a network interface of the source physical machine 61 for the tunnel A is cached in CPU1. If so, the flow proceeds to a sending step S25; otherwise it proceeds to a determination step S23.
The source physical machine 61 has multiple network interfaces, such as the network interface 611 and the network interface 612 shown in Fig. 6; the source physical machine 61 sends or receives data through the network interface 611 or the network interface 612. It should be appreciated that, although Fig. 6 shows the two network interfaces 611 and 612, the number of network interfaces can be arbitrary and is not limited.
For example, if neither the network interface 611 nor the network interface 612 is cached in CPU1 for the tunnel A, then in the determination step S23 the determination unit 43 determines the network interface from the source Internet Protocol address of the tunnel A, the destination Internet Protocol address of the tunnel A and the identifier of CPU1.
Fig. 3 shows a detailed flow chart of the determination step S23, and Fig. 5 shows a structural diagram of the determination unit 43. As shown in Fig. 5, the determination unit 43 includes a searching unit 431 and a computing unit 432.
As shown in Fig. 3 and Fig. 5, in a finding step S231, the searching unit 431 uses equal-cost multipath routing (ECMP) to find the multiple network interfaces of the source physical machine 61, such as the network interfaces 611 and 612 in this example.
Then, in a calculation step S232, the computing unit 432 computes a hash over the source Internet Protocol address of the tunnel A, the destination Internet Protocol address of the tunnel A and the identifier of CPU1, to determine one of the network interfaces 611 and 612.
Here, the source Internet Protocol address of the tunnel A is the Internet Protocol address IP1 of the source physical machine 61, and the destination Internet Protocol address of the tunnel A is the Internet Protocol address IP2 of the destination physical machine 62. The identifier of CPU1 is, for example, the label "1" of CPU1.
In this example, the computing unit 432 computes a hash over IP1, IP2 and the label "1" of CPU1, thereby determining the network interface 612.
Since the CPU that processes the data is not always the same, for the tunnel A, although IP1 and IP2 are fixed, the CPU label varies; thus, after the hash over IP1, IP2 and the CPU label, the determined network interface also varies. Therefore, successive data transmissions are not all carried over the same network interface, and in this way load balancing of data transmission can be achieved.
Returning to Fig. 2, in a caching step S24, the cache unit 44 caches the network interface 612 in CPU1, and the flow proceeds to the sending step S25.
In the sending step S25, the sending unit 45 sends the encapsulated data to the destination physical machine 62 via the network interface 612.
In addition, if it is judged in the judgment step S22 that the network interface 612 for the tunnel A is already cached in CPU1, the flow proceeds directly to the sending step S25, in which the sending unit 45 sends the encapsulated data to the destination physical machine 62 via the network interface 612.
Different tunnels can be formed between the source physical machine 61 and different destination physical machines. In this example, the tunnel A is formed between the source physical machine 61 and the destination physical machine 62; in addition, a tunnel B (not shown) can be formed between the source physical machine 61 and another destination physical machine (not shown).
Each CPU can cache different network interfaces for different tunnels. For example, CPU1 caches the network interface 612 for the tunnel A, the network interface 611 for the tunnel B, and network interfaces for other tunnels. Likewise, the other CPUs can cache their respective network interfaces for the tunnel A, the tunnel B and other tunnels.
It can be seen that, after the network interface 612 for the tunnel A has been cached in CPU1, when the source physical machine 61 again transmits data to the destination physical machine 62 — that is, when the tunnel A is used again to transmit data — if the data is again processed by CPU1, the data can be sent directly using the cached network interface 612 for the tunnel A, without repeating the hash calculation to select a network interface; therefore, the performance of data transmission can be improved.
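The cached fast path described above can be sketched with a per-CPU dictionary keyed by tunnel; the data structure and names are illustrative assumptions rather than the patent's implementation:

```python
import hashlib

NUM_NICS = 2
# One cache per CPU: maps a (src_ip, dst_ip) tunnel key to a NIC index.
per_cpu_nic_cache = {cpu: {} for cpu in range(4)}
hash_calls = 0  # counts how often the slow path (hashing) actually runs

def nic_for(cpu_id: int, src_ip: str, dst_ip: str) -> int:
    global hash_calls
    cache = per_cpu_nic_cache[cpu_id]
    tunnel = (src_ip, dst_ip)
    if tunnel in cache:            # judgment step: cache hit, skip hashing
        return cache[tunnel]
    hash_calls += 1                # determination step: hash only on a miss
    digest = hashlib.sha256(f"{src_ip}-{dst_ip}-{cpu_id}".encode()).digest()
    nic = int.from_bytes(digest[:4], "big") % NUM_NICS
    cache[tunnel] = nic            # caching step
    return nic

# 100 sends on the same tunnel, all handled by CPU 1:
# only the first send pays for the hash calculation.
results = [nic_for(1, "IP1", "IP2") for _ in range(100)]
print(hash_calls)  # 1
```

This is why the cache improves performance: the hash runs once per (tunnel, CPU) pair, and every later send on that pair is a dictionary lookup.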
As above, with the present invention, load balancing of data transmission can be achieved while the performance of data transmission is improved.
The present invention also provides a computer-readable storage medium storing instructions which, when executed, cause a computer to perform the method of sending data from a source physical machine to a destination physical machine. The instructions comprise:
an encapsulation instruction, which subjects data processed by a corresponding one of multiple central processing units to generic routing encapsulation, forming a tunnel for transmitting the data;
a decision instruction, which judges whether a network interface of the source physical machine for the tunnel is cached in the corresponding central processing unit;
a determination instruction, which, when no network interface is cached in the corresponding central processing unit, determines the network interface from the source Internet Protocol address of the tunnel, the destination Internet Protocol address of the tunnel and the identifier of the corresponding central processing unit;
a cache instruction, which caches the determined network interface in the corresponding central processing unit; and
a sending instruction, which sends the encapsulated data to the destination physical machine via the network interface when the network interface is cached in the corresponding central processing unit, or after the network interface has been cached.
Numerous specific details are set forth in the description provided here. It is understood, however, that the embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the present disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, the various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, the disclosed method is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will understand that the modules in the device of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components of an embodiment can be combined into one module, unit or component, and furthermore they can be divided into a plurality of sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, while some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments can be used in any combination.
The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed terminal device. In a unit claim enumerating several terminal devices, several of these terminal devices can be embodied by one and the same item of hardware.
Claims (10)
1. A method for a source physical machine to send data to a destination physical machine, the source physical machine having multiple central processing units, characterized in that the method comprises:
an encapsulation step of performing generic routing encapsulation on data processed by a corresponding one of the multiple central processing units, forming a tunnel for transmitting the data;
a judgment step of judging whether a network interface of the source physical machine for the tunnel is cached in the corresponding central processing unit, and if so proceeding to a sending step, otherwise proceeding to a determination step;
the determination step of determining the network interface from the source Internet Protocol address of the tunnel, the destination Internet Protocol address of the tunnel and the identifier of the corresponding central processing unit;
a caching step of caching the network interface in the corresponding central processing unit and proceeding to the sending step; and
the sending step of sending the encapsulated data to the destination physical machine via the network interface.
2. The method of claim 1 for a source physical machine to send data to a destination physical machine, characterized in that the determination step further comprises:
a finding step of finding the multiple network interfaces of the source physical machine using equal-cost multipath routing; and
a calculation step of computing a hash over the source Internet Protocol address, the destination Internet Protocol address and the identifier of the corresponding central processing unit, to determine the network interface among the multiple network interfaces.
3. The method of claim 1 or 2 for a source physical machine to send data to a destination physical machine, characterized in that the source Internet Protocol address is the Internet Protocol address of the source physical machine, and the destination Internet Protocol address is the Internet Protocol address of the destination physical machine.
4. The method of claim 1 or 2 for a source physical machine to send data to a destination physical machine, characterized in that different tunnels can be formed between the source physical machine and different destination physical machines,
wherein each of the multiple central processing units can cache different network interfaces for the different tunnels.
5. An apparatus for a source physical machine to send data to a destination physical machine, the source physical machine having multiple central processing units, characterized in that the apparatus comprises:
an encapsulation unit which performs generic routing encapsulation on data processed by a corresponding one of the multiple central processing units, forming a tunnel for transmitting the data;
a judging unit which judges whether a network interface of the source physical machine for the tunnel is cached in the corresponding central processing unit;
a determination unit which, when the judging unit judges that no network interface is cached in the corresponding central processing unit, determines the network interface from the source Internet Protocol address of the tunnel, the destination Internet Protocol address of the tunnel and the identifier of the corresponding central processing unit;
a cache unit which caches the network interface determined by the determination unit in the corresponding central processing unit; and
a sending unit which sends the encapsulated data to the destination physical machine via the network interface when the judging unit judges that the network interface is cached in the corresponding central processing unit, or after the cache unit has cached the network interface.
6. The apparatus of claim 5 for a source physical machine to send data to a destination physical machine, characterized in that the determination unit further comprises:
a searching unit which finds the multiple network interfaces of the source physical machine using equal-cost multipath routing; and
a computing unit which computes a hash over the source Internet Protocol address, the destination Internet Protocol address and the identifier of the corresponding central processing unit, to determine the network interface among the multiple network interfaces.
7. The apparatus of claim 5 or 6 for a source physical machine to send data to a destination physical machine, characterized in that the source Internet Protocol address is the Internet Protocol address of the source physical machine, and the destination Internet Protocol address is the Internet Protocol address of the destination physical machine.
8. The apparatus of claim 5 or 6 for a source physical machine to send data to a destination physical machine, characterized in that different tunnels can be formed between the source physical machine and different destination physical machines,
wherein each of the multiple central processing units can cache different network interfaces for the different tunnels.
9. A computer-readable storage medium having instructions stored therein which, when executed, cause a computer to perform a method for sending data from a source physical machine to a destination physical machine, wherein the instructions comprise:
an encapsulation instruction for performing generic routing encapsulation on data processed by a corresponding central processing unit among a plurality of central processing units, and forming a tunnel for transmitting the data;
a judgment instruction for judging whether a network interface for the tunnel is cached in the corresponding central processing unit in the source physical machine;
a determination instruction for, when it is judged that the network interface is not cached in the corresponding central processing unit, determining the network interface according to the source Internet Protocol address of the tunnel, the destination Internet Protocol address of the tunnel, and an identifier of the corresponding central processing unit;
a cache instruction for caching the determined network interface in the corresponding central processing unit; and
a transmission instruction for sending the encapsulated data to the destination physical machine via the network interface when it is judged that the network interface is cached in the corresponding central processing unit, or after the network interface has been cached.
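Under the same caveats (hypothetical names, CRC32 as a stand-in hash, a plain dictionary standing in for a per-CPU cache), the instruction sequence of claim 9 amounts to the following send path:

```python
import zlib
from collections import defaultdict

# Hypothetical per-CPU cache: cpu_id -> {(src_ip, dst_ip): interface}
percpu_cache = defaultdict(dict)

def send_encapsulated(data, src_ip, dst_ip, cpu_id, interfaces, transmit):
    """Sketch of claim 9: look up the tunnel's interface in the cache of the
    corresponding CPU; on a miss, determine it by hashing the source IP,
    destination IP, and CPU identifier and cache it; then transmit the
    already encapsulated data via that interface."""
    key = (src_ip, dst_ip)
    nic = percpu_cache[cpu_id].get(key)          # judgment instruction
    if nic is None:                              # determination instruction
        digest = zlib.crc32(f"{src_ip}|{dst_ip}|{cpu_id}".encode())
        nic = interfaces[digest % len(interfaces)]
        percpu_cache[cpu_id][key] = nic          # cache instruction
    transmit(nic, data)                          # transmission instruction
    return nic

sent = []
nic = send_encapsulated(b"payload", "10.0.0.1", "10.0.1.2", 0,
                        ["eth0", "eth1"], lambda n, d: sent.append((n, d)))
```

The second and later sends for the same tunnel on the same CPU hit the cache, which is where the claimed performance improvement over recomputing the route on every packet would come from.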
10. A system, comprising:
a memory for storing instructions to be executed by one or more processors of the system; and
a processor, being one of the processors of the system, for executing the method for sending data from a source physical machine to a destination physical machine according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910425941.2A CN110166373B (en) | 2019-05-21 | 2019-05-21 | Method, device, medium and system for sending data from source physical machine to destination physical machine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110166373A true CN110166373A (en) | 2019-08-23 |
CN110166373B CN110166373B (en) | 2022-12-27 |
Family
ID=67631630
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910425941.2A Active CN110166373B (en) | 2019-05-21 | 2019-05-21 | Method, device, medium and system for sending data from source physical machine to destination physical machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110166373B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101106450A (en) * | 2007-08-16 | 2008-01-16 | Hangzhou H3C Technologies Co., Ltd. | Secure protection device and method for distributed packet transfer |
US20110149971A1 (en) * | 2008-08-18 | 2011-06-23 | Zhiqiang Zhu | Method, apparatus and system for processing packets |
CN102970244A (en) * | 2012-11-23 | 2013-03-13 | Shanghai Huanchuang Communication Technology Co., Ltd. | Network message processing method of multi-CPU (Central Processing Unit) inter-core load balance |
CN103049336A (en) * | 2013-01-06 | 2013-04-17 | Inspur Electronic Information Industry Co., Ltd. | Hash-based network card soft interrupt and load balancing method |
US20130198266A1 (en) * | 2012-01-30 | 2013-08-01 | 5O9, Inc. | Facilitating communication between web-enabled devices |
CN106034084A (en) * | 2015-03-16 | 2016-10-19 | Huawei Technologies Co., Ltd. | Data transmission method and apparatus thereof |
WO2018104769A1 (en) * | 2016-12-09 | 2018-06-14 | Nokia Technologies Oy | Method and apparatus for load balancing ip address selection in a network environment |
Also Published As
Publication number | Publication date |
---|---|
CN110166373B (en) | 2022-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11171969B2 (en) | Systems and methods for real-time configurable load determination | |
CN109408257B (en) | Data transmission method and device for Network On Chip (NOC) and electronic equipment | |
US20160352578A1 (en) | System and method for adaptive paths locator for virtual network function links | |
Shang et al. | The design and implementation of the NDN protocol stack for RIOT-OS | |
US10341264B2 (en) | Technologies for scalable packet reception and transmission | |
US20200076715A1 (en) | Technologies for capturing processing resource metrics as a function of time | |
WO2021114768A1 (en) | Data processing device and method, chip, processor, apparatus, and storage medium | |
KR101679573B1 (en) | Method and apparatus for service traffic security using dimm channel distribution multicore processing system | |
CN115529677A (en) | Information-centric network unstructured data carriers | |
US10469368B2 (en) | Distributed routing table system with improved support for multiple network topologies | |
CN101599910B (en) | Method and device for sending messages | |
CN104601645A (en) | Data packet processing method and device | |
CN110166373A (en) | Method, apparatus, medium and system for sending data from a source physical machine to a destination physical machine | |
US11824752B2 (en) | Port-to-port network routing using a storage device | |
US9367329B2 (en) | Initialization of multi-core processing system | |
CN106302259B (en) | Method and router for processing message in network on chip | |
CN105095147B (en) | The Flit transmission methods and device of network-on-chip | |
CN115866705A (en) | Geographic routing | |
CN111800340B (en) | Data packet forwarding method and device | |
Jung et al. | Gpu-ether: Gpu-native packet i/o for gpu applications on commodity ethernet | |
Luo et al. | A hotspot-pattern-aware routing algorithm for networks-on-chip | |
US20160205042A1 (en) | Method and system for transceiving data over on-chip network | |
Sapio et al. | Cross-platform estimation of network function performance | |
CN114567679B (en) | Data transmission method and device | |
CN110147344A (en) | Method, apparatus, storage medium and system for communication between multiple physical machines |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||