CN102541803A - Data sending method and computer - Google Patents
- Publication number
- CN102541803A (application numbers CN201110455789A / CN2011104557896A)
- Authority
- CN
- China
- Prior art keywords
- cpu
- data
- memory block
- network interface
- interface card
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Computer And Data Communications (AREA)
Abstract
The invention discloses a data sending method and a computer. The method comprises the following steps: each of a plurality of CPUs (central processing units) stores the data to be sent in a data storage area corresponding to that CPU; the plurality of CPUs send the data stored in their respective storage areas to a network interface card (NIC); and the NIC sends the data. Because each CPU core accesses and writes to its own data storage area, the processing capacity of multi-core CPUs is fully utilized and data packets are transmitted in parallel. This effectively increases the packet sending rate, improves packet sending efficiency, and can cope with high-speed packet transmission scenarios such as 10-Gigabit NICs.
Description
Technical field
The present invention relates to the field of computing, and in particular to a data sending method and a computer.
Background technology
At present, zero-copy technology over PCI-Express is mainly implemented on top of the conventional Intel PCI-Express architecture.

Usually, during the transmission of network packets, an application must interact with the NIC's message buffer in the kernel by way of copying, and this copying inevitably reduces performance and speed, so it cannot be used in high-traffic scenarios. Moreover, current Gigabit zero-copy techniques are all based on a single queue. In the case of 10-Gigabit access, however, the packet rate per unit time increases by roughly a factor of ten, and the interaction between a single transmit queue and the buffer under PCI-Express incurs a very high performance loss. Conventional techniques therefore cannot cope with such high data rates and become the bottleneck of 10-Gigabit zero-copy development.

No effective solution has yet been proposed for the problem in the related art that data cannot be sent efficiently under high-traffic conditions.
Summary of the invention
In view of the problem in the related art that data cannot be sent efficiently under high-traffic conditions, the present invention proposes a data sending method and a computer, which can effectively increase the packet sending rate, improve packet sending efficiency, and cope with high-speed packet sending scenarios such as 10-Gigabit NICs.

The technical solution of the present invention is achieved as follows:

According to one aspect of the present invention, a data sending method is provided for sending data in a multi-CPU environment.

The method comprises: for each CPU, storing the data to be sent by that CPU in the storage area corresponding to that CPU; and the plurality of CPUs sending the data stored in their respective storage areas to a NIC, which then sends the data.
The method may further comprise: mapping the data in each CPU's corresponding storage area into the application to which the data belongs.

In addition, the storage area corresponding to each CPU is mapped as a queue; the CPU determines whether there is data to send by polling this queue and, upon determining that data needs to be sent, sends the data to the NIC.

Furthermore, the queue corresponding to each CPU adopts a circular (ring) data structure.

In addition, the step of the plurality of CPUs sending the data stored in their respective storage areas to the NIC comprises: the plurality of CPUs sending that data to hardware queues of the NIC.

In addition, for each CPU, the storage area corresponding to that CPU is a region of main memory allocated for that CPU, and the storage areas of the plurality of CPUs are logically independent of one another.
According to another aspect of the present invention, a computer is provided for sending data in a multi-CPU environment.

The computer according to the present invention comprises: a plurality of CPUs, each of which stores the data it needs to send in its corresponding storage area and sends the data stored there to a NIC; and the NIC, which sends the data received from the plurality of CPUs.

The computer may further comprise: a mapping module for mapping the data in each CPU's corresponding storage area into the application to which the data belongs.

In addition, the plurality of CPUs send the data stored in their respective storage areas to hardware queues of the NIC.

In addition, for each CPU, the storage area corresponding to that CPU is a region of main memory allocated for that CPU, and the storage areas of the plurality of CPUs are logically independent of one another.

By having each CPU core access and write to its own storage area, the present invention makes full use of the processing power of multi-core CPUs and achieves parallel packet transmission; it can effectively increase the packet sending rate, improve packet sending efficiency, and cope with high-speed packet sending scenarios such as 10-Gigabit NICs.
Description of drawings
Fig. 1 is a flowchart of the data sending method according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of the principle of the data sending method according to an embodiment of the present invention.
Embodiment
According to an embodiment of the present invention, a data sending method is provided for sending data in a multi-CPU environment.

As shown in Fig. 1, the data sending method according to the embodiment of the present invention comprises:

Step S101: for each CPU, the data to be sent by that CPU is stored in the storage area corresponding to that CPU;

Step S103: the plurality of CPUs send the data stored in their respective storage areas to a NIC, and the NIC sends the data.
The method may further comprise: mapping the data in each CPU's corresponding storage area into the application to which the data belongs. In this way, when the data in a storage area changes (for example, is modified), the change is reflected directly, so that the operator can observe it clearly and intuitively at the application layer.
In addition, the storage area corresponding to each CPU is mapped as a queue; the CPU determines whether there is data to send by polling this queue and, upon determining that data needs to be sent, sends the data to the NIC.

Each CPU's queue adopts a circular (ring) data structure managed by a lock-free algorithm, thereby avoiding the performance impact of locking.
In addition, the plurality of CPUs may send the data stored in their respective storage areas to hardware queues of the NIC, thereby delivering the data to the NIC.

Optionally, the storage area corresponding to each CPU may be a region of main memory allocated for that CPU, and the storage areas of the plurality of CPUs are logically independent of one another.

For example, in a 10-Gigabit NIC application scenario, the present invention can exploit the multiple send queues of a 10-Gigabit NIC: a hardware descriptor queue and a corresponding software descriptor queue are allocated for each CPU core, and the 10-Gigabit NIC is configured to deliver messages to the corresponding queue automatically.

An independent data buffer is allocated for each queue, so each CPU core only needs to access its own queue during processing. These buffers are allocated according to the location of the CPU core. With this configuration, the solution of the present invention makes full use of the processing capacity of the CPUs; for example, it can exploit the NUMA architecture of multi-core x86 and guarantee that each CPU core accesses only local memory, thereby improving memory access efficiency.
Moreover, the data buffer queues are mapped into the application by means of memory mapping, so that modifications to the driver's buffers are reflected directly in the application. Unnecessary memory copies are thus avoided, further improving the efficiency of data sending.

In addition, a send task may run on each CPU core, periodically polling its own transmit queue and appending the messages in the queue to the NIC's hardware transmit queue, that is, sending the data to the NIC.
Through the above solution, the present invention achieves zero-copy packet sending for high-traffic scenarios such as the latest 10-Gigabit NICs. Moreover, the present invention is applicable to multi-core scenarios; for example, it can be implemented with the hardware multi-queue technology of Intel NICs. Compared with the previous single-queue PCI-Express approach, each CPU core in a multi-core environment can better process the data in its own queue. The solution thus makes full use of current multi-core architectures, improves traffic sending performance, and significantly improves packet sending efficiency.
According to an embodiment of the present invention, a computer is also provided for sending data in a multi-CPU environment.

The computer according to the present invention comprises: a plurality of CPUs, each of which stores the data it needs to send in its corresponding storage area and sends the data stored there to a NIC; and the NIC, which sends the data received from the plurality of CPUs.

The computer may further comprise: a mapping module for mapping the data in each CPU's corresponding storage area into the application to which the data belongs.

In addition, the plurality of CPUs send the data stored in their respective storage areas to hardware queues of the NIC.

Optionally, for each CPU, the storage area corresponding to that CPU is a region of main memory allocated for that CPU, and the storage areas of the plurality of CPUs are logically independent of one another.
As shown in Fig. 2, the plurality of CPUs in the computer comprise CPU0, CPU1, CPU2, and CPU3 (the number of CPUs may be larger or smaller; scenarios with other numbers of CPUs are not enumerated here one by one). Each of these four CPUs has its own memory (for example, the shared memory shown in the figure) for storing the data to be sent. Through this shared memory, the data can be sent to the NIC (for example, an Intel 10-Gigabit NIC).
In summary, by means of the above technical solution of the present invention, each CPU core accesses and writes to its own storage area, thereby making full use of the processing power of multi-core CPUs and achieving parallel packet transmission. The packet sending rate and packet sending efficiency are effectively improved, and high-speed packet sending scenarios such as 10-Gigabit NICs can be handled.

The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A data sending method for sending data in a multi-CPU environment, characterized in that the method comprises:

for each CPU, storing the data to be sent by that CPU in the storage area corresponding to that CPU; and

the plurality of CPUs sending the data stored in their respective storage areas to a network interface card (NIC), the NIC then sending the data.
2. The data sending method according to claim 1, characterized by further comprising:

mapping the data in each CPU's corresponding storage area into the application to which the data belongs.
3. The data sending method according to claim 1, characterized in that the storage area corresponding to each CPU is mapped as a queue; the CPU determines whether there is data to send by polling the queue and, upon determining that data needs to be sent, sends the data to the NIC.

4. The data sending method according to claim 3, characterized in that the queue corresponding to each CPU adopts a circular data structure.
5. The data sending method according to claim 1, characterized in that the plurality of CPUs sending the data stored in their respective storage areas to the NIC comprises:

the plurality of CPUs sending the data stored in their respective storage areas to hardware queues of the NIC.

6. The data sending method according to any one of claims 1 to 5, characterized in that, for each CPU, the storage area corresponding to that CPU is a region of main memory allocated for that CPU, and the storage areas of the plurality of CPUs are logically independent of one another.
7. A computer for sending data in a multi-CPU environment, characterized in that the computer comprises:

a plurality of CPUs, wherein each CPU stores the data it needs to send in the storage area corresponding to that CPU;

the plurality of CPUs send the data stored in their respective storage areas to a network interface card (NIC); and

the NIC, which sends the data received from the plurality of CPUs.
8. The computer according to claim 7, characterized by further comprising:

a mapping module for mapping the data in each CPU's corresponding storage area into the application to which the data belongs.

9. The computer according to claim 7, characterized in that the plurality of CPUs send the data stored in their respective storage areas to hardware queues of the NIC.

10. The computer according to any one of claims 7 to 9, characterized in that, for each CPU, the storage area corresponding to that CPU is a region of main memory allocated for that CPU, and the storage areas of the plurality of CPUs are logically independent of one another.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011104557896A CN102541803A (en) | 2011-12-31 | 2011-12-31 | Data sending method and computer |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102541803A true CN102541803A (en) | 2012-07-04 |
Family
ID=46348730
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011104557896A Pending CN102541803A (en) | 2011-12-31 | 2011-12-31 | Data sending method and computer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102541803A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030009634A1 (en) * | 2001-06-21 | 2003-01-09 | International Business Machines Corporation | Non-uniform memory access (NUMA) data processing system that provides notification of remote deallocation of shared data |
US20030009637A1 (en) * | 2001-06-21 | 2003-01-09 | International Business Machines Corporation | Decentralized global coherency management in a multi-node computer system |
CN1848795A (en) * | 2005-04-15 | 2006-10-18 | 上海艾泰科技有限公司 | Method for realizing large data packet quick retransmission in real-time communication system |
CN101115054A (en) * | 2006-07-26 | 2008-01-30 | 惠普开发有限公司 | Memory-mapped buffers for network interface controllers |
CN101149727A (en) * | 2006-09-19 | 2008-03-26 | 索尼株式会社 | Shared memory device |
CN101217573A (en) * | 2007-12-29 | 2008-07-09 | 厦门大学 | A method to speed up message captures of the network card |
CN101650698A (en) * | 2009-08-28 | 2010-02-17 | 曙光信息产业(北京)有限公司 | Method for realizing direct memory access |
US7843926B1 (en) * | 2005-04-05 | 2010-11-30 | Oracle America, Inc. | System for providing virtualization of network interfaces at various layers |
Non-Patent Citations (1)
Title |
---|
Song Youquan et al.: "Design and Optimization of an Embedded PCI Network Card Driver", Computer Engineering * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102811127A (en) * | 2012-08-23 | 2012-12-05 | 深圳乌托邦系统集成有限公司 | Acceleration network card for cloud computing application layer |
CN102984085A (en) * | 2012-11-21 | 2013-03-20 | 网神信息技术(北京)股份有限公司 | Mapping method and device |
CN105518620A (en) * | 2014-10-31 | 2016-04-20 | 华为技术有限公司 | Network card configuration method and resource management center |
WO2016065643A1 (en) * | 2014-10-31 | 2016-05-06 | 华为技术有限公司 | Network card configuration method and resource management center |
US10305823B2 (en) | 2014-10-31 | 2019-05-28 | Huawei Technologies Co., Ltd. | Network interface card configuration method and resource management center |
CN105518620B (en) * | 2014-10-31 | 2019-02-01 | 华为技术有限公司 | A kind of network card configuration method and resource management center |
WO2016082463A1 (en) * | 2014-11-24 | 2016-06-02 | 中兴通讯股份有限公司 | Data processing method and apparatus for multi-core processor, and storage medium |
CN105577567B (en) * | 2016-01-29 | 2018-11-02 | 国家电网公司 | Network packet method for parallel processing based on Intel DPDK |
CN105577567A (en) * | 2016-01-29 | 2016-05-11 | 国家电网公司 | Network data packet parallel processing method based on Intel DPDK |
CN106371925A (en) * | 2016-08-31 | 2017-02-01 | 北京中测安华科技有限公司 | High-speed big data detection method and device |
WO2019000716A1 (en) * | 2017-06-27 | 2019-01-03 | 联想(北京)有限公司 | Calculation control method, network card, and electronic device |
CN109600321A (en) * | 2017-09-30 | 2019-04-09 | 迈普通信技术股份有限公司 | Message forwarding method and device |
CN108536394A (en) * | 2018-03-31 | 2018-09-14 | 北京联想核芯科技有限公司 | Order distribution method, device, equipment and medium |
CN109756389A (en) * | 2018-11-28 | 2019-05-14 | 南京知常容信息技术有限公司 | A kind of 10,000,000,000 network covert communications detection systems |
CN111030844A (en) * | 2019-11-14 | 2020-04-17 | 中盈优创资讯科技有限公司 | Method and device for establishing flow processing framework |
CN111240845A (en) * | 2020-01-13 | 2020-06-05 | 腾讯科技(深圳)有限公司 | Data processing method, device and storage medium |
CN111240845B (en) * | 2020-01-13 | 2023-10-03 | 腾讯科技(深圳)有限公司 | Data processing method, device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102541803A (en) | Data sending method and computer | |
US7788334B2 (en) | Multiple node remote messaging | |
CN104050091B (en) | The network equipment and its method to set up of system are accessed based on Non Uniform Memory Access | |
CN102571580A (en) | Data receiving method and computer | |
US11936571B2 (en) | Reliable transport offloaded to network devices | |
TW201234264A (en) | Remote core operations in a multi-core computer | |
CN114095251B (en) | SSLVPN implementation method based on DPDK and VPP | |
US20210075745A1 (en) | Methods and apparatus for improved polling efficiency in network interface fabrics | |
CN102567226A (en) | Data access implementation method and data access implementation device | |
US20210326221A1 (en) | Network interface device management of service execution failover | |
US20190294570A1 (en) | Technologies for dynamic multi-core network packet processing distribution | |
US20220086226A1 (en) | Virtual device portability | |
US20210329354A1 (en) | Telemetry collection technologies | |
EP4184324A1 (en) | Efficient accelerator offload in multi-accelerator framework | |
CN103455371A (en) | Mechanism for optimized intra-die inter-nodelet messaging communication | |
US20240160488A1 (en) | Dynamic microservices allocation mechanism | |
CN102375789B (en) | Non-buffer zero-copy method of universal network card and zero-copy system | |
CN109857545A (en) | A kind of data transmission method and device | |
CN104503948A (en) | Tightly coupled self-adaptive co-processing system supporting multi-core network processing framework | |
CN109964211A (en) | The technology for virtualizing network equipment queue and memory management for half | |
Shim et al. | Design and implementation of initial OpenSHMEM on PCIe NTB based cloud computing | |
CN102495764A (en) | Method and device for realizing data distribution | |
EP4020208A1 (en) | Memory pool data placement technologies | |
WO2024098232A1 (en) | Adaptive live migration of a virtual machine for a physical storage device controller | |
US20220321434A1 (en) | Method and apparatus to store and process telemetry data in a network device in a data center |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20120704 |