CN101778092A - Data transmission method for multiple-client server - Google Patents

Data transmission method for multiple-client server

Info

Publication number
CN101778092A
Authority
CN
China
Prior art keywords
client
server
packet
priority
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN200910010075A
Other languages
Chinese (zh)
Inventor
蒋一
李德宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN200910010075A priority Critical patent/CN101778092A/en
Publication of CN101778092A publication Critical patent/CN101778092A/en
Pending legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a data transmission method for a multiple-client server. (1) After receiving client requests, the server assigns each client a priority, from high to low, according to the time of the request; (2) the server prepares two data packets to be sent for each client and orders them in a priority queue, placing the packets of higher-priority clients toward the front; (3) according to the available bandwidth, the server first sends the packets at the front of the priority queue; (4) after one of a client's packets has been sent successfully, the server prepares the next packet for that client and inserts it into the priority queue at the position matching that client's priority, so that it is sent ahead of the packets of lower-priority clients. The invention guarantees user bandwidth by policy, stabilizes the running state of the server, and makes the greatest possible use of the network.

Description

Data transmission method for a multiple-client server
Technical field
The present invention relates to a method for transmitting data in a network architecture and, more particularly, to a method by which a single server provides data to a plurality of clients.
Background technology
In current network applications there is a demand for sharing large volumes of data, such as file sharing and video sharing, and the corresponding requirements on server operating efficiency have become very high. The quality of service of the server directly affects the resource-sharing efficiency of the whole service network, as well as its stability. Typical examples are servers in client/server (C/S) and peer-to-peer (P2P) architectures. For example, if the total bandwidth of a server is 100 Mbit/s but the total download demand of all client requests is 120 Mbit/s, and the server does not restrict the extra 20 Mbit/s, the clients will all contend for the 100 Mbit/s of bandwidth. In that state none of the clients' download speeds can be guaranteed, which is fatal for network applications with strict real-time requirements; severe bandwidth contention can even paralyze the whole service network. How the server uses its bandwidth is therefore of crucial importance. Guaranteeing that the server runs normally at full capacity, or guaranteeing each user's bandwidth by policy under normal operation so as to keep the service network stable, is a key problem in the prior art.
Summary of the invention
The invention provides a method by which a single server provides data to a plurality of clients, with the aim of guaranteeing stable transmission of service-network data under a given bandwidth condition.
The data transmission method for a multiple-client server of the present invention comprises a step in which clients request data downloads from the server, and specifically further comprises the following steps:
S1: after the server receives the client requests, it assigns each client a priority, from high to low, according to the order in which the requests arrived;
S2: the server prepares two packets to be sent for each client and orders the packets in a priority queue, with the packets of higher-priority clients arranged toward the front;
S3: according to the available bandwidth, the server preferentially sends the packets ordered at the front of the priority queue;
S4: after one of a client's packets has been sent successfully, the server prepares the next packet for that client and inserts it into the priority queue at the position matching that client's priority, so that this packet is sent ahead of the packets requested by lower-priority clients.
In an improvement of the data transmission method for a multiple-client server of the present invention, in step S2 the packets to be sent are packets obtained by splitting the requested data. In addition, step S1 further comprises: the server sets up one total counter and sets up one data-sending counter for each client. Step S4 further comprises: the data-sending counters and the total counter are incremented or decremented according to the sending state of the packets; the total counter is used to indicate to the server that it may send the next outgoing packet in the priority queue, and the data-sending counter is used to indicate to the server that it should prepare a new packet to be sent for the client and order it in the priority queue.
The data transmission method of the multiple-client server of the present invention is mainly used to guarantee bandwidth. The main technical means of the bandwidth guarantee are as follows:
1. Prioritization: all objects that need to be guaranteed are prioritized, and they are guaranteed level by level in order of priority.
2. Separate data-sending threads: each client is provided with its own thread execution interface and sends its data separately.
3. Priority queue: entries in the priority queue are ordered from high to low priority; every user that needs network bandwidth places the data to be sent in the priority queue, and after ordering the data are delivered to the network bottom layer according to that order.
4. Double slot occupation per sending object: each sending object may produce at most two packets waiting to be sent.
5. Network-layer callback: after all data have been sent, the sending object is notified upward that sending is complete and that further data may be handed down to the bottom layer.
The data transmission method of the multiple-client server of the present invention has the following beneficial effects.
1. It guarantees user bandwidth by policy, stabilizes the running state of the server, and uses the capacity of the whole service network to the greatest extent.
2. The bandwidth guarantee stabilizes users' download speeds and provides assurance for applications with high network-speed requirements.
3. The policy-based bandwidth guarantee avoids bandwidth contention and improves the stability of the whole service network.
Description of drawings
Fig. 1 is a schematic flow chart of the present invention;
Fig. 2 is a schematic diagram of clients arranged by priority in the priority queue;
Fig. 3 is a schematic diagram of client packets arranged by priority in the priority queue;
Fig. 4 is a schematic diagram of how the client packets of Fig. 3 are arranged in the priority queue after one packet has been sent successfully;
Fig. 5 is a schematic diagram of the queue of Fig. 4 after a packet of a high-priority client has been added;
Fig. 6 is a schematic diagram of the priority queue in another embodiment;
Fig. 7 is a schematic diagram of the priority queue of the embodiment of Fig. 6 in another state.
Embodiment
The data transmission method of the multiple-client server of the present invention mainly provides a priority-based bandwidth-guarantee algorithm. It is mainly used in server programs: when the server provides data for downloading, it guarantees bandwidth for the downloading users. To achieve the above purpose, the scheme of the present invention, shown in Fig. 1, is as follows:
Clients request data downloads from the server. After the server receives the client requests, it assigns each client a priority, from high to low, according to the order in which the requests arrived. Prioritization means assigning priority levels of different heights to each object that participates in it. In the present algorithm, assume the priority levels start from the natural number 1 and increase one by one in the order in which requests are submitted: the client that submits the first request has priority 1, the highest; the client that submits the second request has priority 2, lower only than the first client; the client that submits the third request has priority 3, lower than the previous two clients; and so on. When a client quits downloading, the priorities of the subsequent clients are re-ranked in FIFO order.
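By way of illustration only, the prioritization just described can be sketched in Python as follows; the class and method names (PriorityRegistry, register, unregister, priority_of) are assumptions made for the sketch and are not part of the claimed method.

```python
# A minimal sketch of request-order prioritization, assuming priorities are
# simply positions in the order of arrival (1 = highest).
class PriorityRegistry:
    def __init__(self):
        self.order = []  # clients in request order; index 0 = highest priority

    def register(self, client_id):
        """Assign the next priority level in order of request arrival."""
        self.order.append(client_id)
        return self.order.index(client_id) + 1  # the first requester gets 1

    def unregister(self, client_id):
        """When a client quits downloading, later clients move up (FIFO-style)."""
        self.order.remove(client_id)

    def priority_of(self, client_id):
        return self.order.index(client_id) + 1

reg = PriorityRegistry()
assert reg.register("client-1") == 1
assert reg.register("client-2") == 2
reg.unregister("client-1")
assert reg.priority_of("client-2") == 1  # the remaining clients shift up
```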
At the same time, the server sets up a total counter and sets up a data-sending counter for each client. (The functions of the total counter and the data-sending counters are described in detail below.)
After this, server provides two packets that are used to send for each client, and in priority query to the data packet sequencing, from the packet of high priority client in preceding arrangement.
Then, server preferentially satisfies the packet transmission the preceding of sorting in the priority query according to bandwidth situation.
Afterwards, after a packet of a client sends successfully, server provides next packet for this client, and with this packet insertion priority query, insertion position and this client first level are complementary, so this packet preferably sends than other low priority client desired data bags.
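As an illustration of the queue ordering described above, the following minimal Python sketch keeps packets ordered by client priority, with ties broken by insertion order; the class name SendQueue and its methods are assumptions of the sketch, not terminology of the patent.

```python
import heapq
import itertools

class SendQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # ties broken by insertion order

    def push(self, priority, client_id, packet):
        heapq.heappush(self._heap, (priority, next(self._seq), client_id, packet))

    def pop(self):
        """Return the (client, packet) pair of the highest-priority client."""
        priority, _, client_id, packet = heapq.heappop(self._heap)
        return client_id, packet

    def empty(self):
        return not self._heap

q = SendQueue()
q.push(2, "client-2", "packet 1")
q.push(1, "client-1", "packet 1")
q.push(1, "client-1", "packet 2")
assert q.pop() == ("client-1", "packet 1")   # higher-priority packets leave first
assert q.pop() == ("client-1", "packet 2")
assert q.pop() == ("client-2", "packet 1")
```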
To prevent the bandwidth contention that appears after the server reaches the limit of its service capacity, the present invention guarantees user bandwidth by policy. Users are prioritized in the order in which they request data: the first requesting user has the highest priority, and so on. During downloads the server preferentially guarantees the download speed of the higher-priority clients; when the total download demand of later clients exceeds the server's capacity, the later clients are not allowed to affect the downloads of earlier users. The purpose of the bandwidth guarantee is thus achieved: when the users' download demand exceeds the server's total bandwidth, the server simply runs at full capacity, no bandwidth contention occurs, and the stability of the whole network is strongly protected.
In addition, the packets to be sent mentioned above generally refer to the plurality of small packets obtained by splitting one file of a large data volume.
A specific embodiment is described with reference to Fig. 1. The process by which the present invention achieves the bandwidth guarantee, in which the sending of packets is mainly realized by cycling through the fourth to the eighth steps, is analyzed in detail as follows.
In the first step, data requests are received and each request is processed synchronously.
In the second step, the clients that request data downloads are assigned priorities; different priorities are assigned according to the order of arrival.
In the third step, a data-sending counter is allocated for each client to record how many of that client's packets are currently at the network layer, using network resources to be sent to their destinations. The data-sending counter can be used to indicate to the server that it should prepare a new packet to be sent for the client and order it in the priority queue. For example, in one preferred mode, the counter of each client has an initial value of 1 and a minimum allowed value of -1, so each client is allowed to have at most two packets using network resources at the same time; a value of 1 means no packet is being sent, 0 means one packet is being sent, and -1 means two packets are being sent. In addition, the server program is provided with a total counter that records the number of packets of all clients that are currently at the network layer using network resources. The value of the total counter changes with the clients' counters and its minimum is 0; when it is not 0, it indicates that the server may send the next outgoing packet in the priority queue. For example, when 10 clients download at the same time, the total counter limits the number of packets simultaneously being sent at the network layer to at most 10. When this limit of 10 is reached, the counters of the clients cannot be decreased further and no more data are passed to the network layer, until the network-layer callback reports that a packet has reached its destination and the total counter recovers to a value greater than 0. In the above mode the counters count down; a counter that counts up may of course be used instead as required, and the initial values of the counters are set as required.
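The counter bookkeeping just described can be sketched as follows; this is a minimal illustration of the counting-down variant only, and the class name Counters and its method names are assumptions of the sketch.

```python
# A minimal sketch of the counter bookkeeping, assuming the "count down"
# variant: per-client counter 1 = idle, 0 = one packet in flight, -1 = two.
class Counters:
    def __init__(self, client_ids):
        self.client = {cid: 1 for cid in client_ids}  # initial value 1 per client
        self.total = sum(self.client.values())        # minimum value 0

    def can_dispatch(self):
        """The server may hand the next queued packet to the network layer."""
        return self.total > 0

    def on_dispatch(self, cid):
        """A packet of this client was passed to the network layer."""
        assert self.client[cid] > -1, "at most two packets in flight per client"
        self.client[cid] -= 1
        self.total -= 1

    def on_sent(self, cid):
        """Network-layer callback: the packet reached its destination."""
        self.client[cid] += 1
        self.total += 1

c = Counters(["c1", "c2", "c3"])
c.on_dispatch("c1"); c.on_dispatch("c1"); c.on_dispatch("c2")
assert c.total == 0 and not c.can_dispatch()
c.on_sent("c1")
assert c.can_dispatch()  # one slot freed, the next queued packet may go out
```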
As for the use of network resources, higher-priority clients use them first. For example, suppose there are ten clients, with client 1 having the highest priority and client 10 the lowest. It is possible for each of clients 1-10 to have one packet at the network layer, i.e. all client counters are 0 and the total counter is 0. When network resources are insufficient, it is also allowed that the counter of client 1 is -1, the counters of clients 2-9 are 0, the counter of client 10 is 1, and the total counter is 0; that is, the high-priority client 1 has taken over the sending slot of the low-priority client 10. This guarantees that high-priority users do not suffer from contention by lower-priority users when resources are insufficient, which is exactly the purpose of the bandwidth guarantee.
In the fourth step, the data-sending interface of each client periodically and cyclically reads from the data source according to the sending speed required by that client. Each packet-sending object allows at most two unsent packets to exist at the same time; both packets may be in the priority queue, or they may be at the network layer being sent out at the same time.
In the fifth step, all packets that need to be sent are added to the priority queue and arranged there. The main role of the priority queue is to control resource contention inside the program, so that each client uses network resources according to the set policy and higher-priority users use network resources first. As in the example above, if all clients together need 120 Mbit/s of bandwidth but only 100 Mbit/s is available, then according to the policy the users that requested first, with higher priority, share the 100 Mbit/s of resources, and the 20 Mbit/s that the lower-priority users would have used is simply not allocated, thereby guaranteeing bandwidth for the higher-priority users.
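The net effect of this policy can be illustrated with a simple steady-state calculation. Note that in the actual method the allocation emerges from the priority queue and the counters rather than being computed directly; the function below and its figures (rates in Mbit/s) are only an assumed illustration of the 120 Mbit/s-versus-100 Mbit/s example above.

```python
# A minimal sketch: serve clients strictly in priority order until the
# server's capacity is exhausted (an illustration of the policy's outcome,
# not of the queue mechanism itself).
def allocate_bandwidth(requests, capacity):
    """requests: list of (priority, requested_rate); lower number = higher priority."""
    allocation, remaining = {}, capacity
    for priority, rate in sorted(requests):
        granted = min(rate, remaining)
        allocation[priority] = granted
        remaining -= granted
    return allocation

# Four clients ask for 30 Mbit/s each (120 in total) from a 100 Mbit/s server:
# the lowest-priority client absorbs the whole 20 Mbit/s shortfall.
demand = [(1, 30), (2, 30), (3, 30), (4, 30)]
print(allocate_bandwidth(demand, 100))  # {1: 30, 2: 30, 3: 30, 4: 10}
```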
Suppose the packets arranged in the priority queue are as follows: client 1 with priority 1, client 2 with priority 2, and client 3 with priority 3. No matter in what order clients 1, 2 and 3 add their packets to the priority queue, the final result is as shown in Fig. 2.
Each time data are read outward, only the packet with the highest priority, i.e. the packet at the head of the whole queue, is read (as shown in Fig. 2). When reading, "client 1 packet" is therefore certain to be taken first and sent to the network layer. When network resources are relatively tight, each client may read two packets from its data source and place them in the priority queue, arranged as shown in Fig. 3.
In the sixth step, the sending of data is controlled by the server's total counter: when the total counter is greater than 0, the packet with the highest priority is read and delivered to the network layer for sending to its destination. If the counter only allows three packets to be sent, then in the situation of the figure above "client 1 packet 1", "client 1 packet 2" and "client 2 packet 1" are taken out and sent to the network layer, and the remaining packets are ranked as shown in Fig. 4.
If client 1 adds another packet to the queue at this moment, the result is as shown in Fig. 5, which illustrates the advantage of high-priority users in the priority queue.
In the seventh and eighth steps, packets are sent to their destinations according to the addresses specified in the packets; packets for the same address are sent in the order in which the threads passed them to the network layer. Whenever a packet has been sent, the data sender must be notified that "sending is finished", and the recorded values of the counters are restored.
Every packet is sent in the order of the fourth to the eighth steps, cycling according to the requirements of the clients, so that all requested content is completed in an orderly way.
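The cycle of the fourth to the eighth steps can be tied together in one self-contained Python sketch; the network layer is simulated by a list and the callback is invoked directly, and all names are assumptions of the sketch. With the inputs below it reproduces the three-packet situation described for the sixth step (client 1 packet 1, client 1 packet 2 and client 2 packet 1 at the network layer; client counters -1, 0, 1; total counter 0).

```python
import heapq, itertools

queue, seq = [], itertools.count()
counter = {"c1": 1, "c2": 1, "c3": 1}   # 1 = idle, 0 = one in flight, -1 = two
total = sum(counter.values())
in_flight = []                          # stands in for the real network layer

def enqueue(priority, cid, packet):
    heapq.heappush(queue, (priority, next(seq), cid, packet))

def pump():
    """Step 6: while the total counter is above zero, send the top packet."""
    global total
    while total > 0 and queue:
        _, _, cid, packet = heapq.heappop(queue)
        counter[cid] -= 1
        total -= 1
        in_flight.append((cid, packet))

def on_sent(cid):
    """Steps 7-8: the network-layer callback restores the counters."""
    global total
    counter[cid] += 1
    total += 1
    pump()

enqueue(1, "c1", "pkt 1"); enqueue(2, "c2", "pkt 1"); enqueue(3, "c3", "pkt 1")
enqueue(1, "c1", "pkt 2")
pump()                  # the total counter allows three sends: c1/1, c1/2, c2/1
print(in_flight)        # [('c1', 'pkt 1'), ('c1', 'pkt 2'), ('c2', 'pkt 1')]
print(counter, total)   # {'c1': -1, 'c2': 0, 'c3': 1} 0
```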
The whole flow is now described with reference to a specific embodiment:
First, suppose requests from three clients, client 1, client 2 and client 3, are received in that order, and priorities 1, 2 and 3 are assigned accordingly. The download demand of each client is 1 MB of data per second, and the sending capacity of the server is 3 MB of data per second. A counter is allocated for each client; the server's total counter changes with the client counters and is 3 when no data are being sent. According to the sending flow above, client 1, client 2 and client 3 each assemble one packet and add it to the priority queue. When they are sent for the first time, because the value of the counter is 3, every packet only pauses briefly in the queue before being passed to the network layer. Because the sending capacity of the server equals the total amount of packet data requested by the clients, the packets of each client are not held up in the priority queue: as soon as a packet is put into the priority queue it is sent to the network layer, and the sending speed at this moment is assured. Under the initial condition the counter values are as follows:
Total counter: 3
Client 1 counter: 1
Client 2 counters: 1
Client 3 counters: 1
Sending continues, and three packets, client 1 packet 1, client 2 packet 1 and client 3 packet 1, are sent to the network layer.
Second, suppose that during the above sending process a change in network conditions causes the sending capacity of the server to drop from 3 MB per second directly to 2 MB per second. The total download demand of the clients is now 3 MB per second while the sending capacity of the server is 2 MB per second, so the server's sending capacity is clearly insufficient and the resources must be allocated. According to the resource contention handling policy, client 1 and client 2 should each use 1 MB of resources and client 3 should be allocated no resources. Next, this situation is analyzed according to the processing flow.
Data are sent client by client in the order client 1, client 2, client 3. It should be explained here that each client has its own thread interface for timed data reading and adds data to the priority queue according to its download request. For example, for a download of 1 MB per second, the interface is executed by a 1-second timer; every second it reads 1 MB of data from the data source, encapsulates it into one packet and adds it to the priority queue, but at most two unsent packets are allowed to exist. The client also checks its own sending speed: if by this time the first packet has not yet been sent, it concludes that its own sending speed is slow and reads out a second packet to queue in the priority queue and contend for resources with the remaining clients. Suppose client 1 is the first to find that its data are being sent slowly; it encapsulates a second packet and queues it in the priority queue, forming the ranking shown in Fig. 6.
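The per-client timed reading just described can be sketched as follows; in real use the tick would be driven by the client's own sending thread on a 1-second timer and the queue would be the shared priority queue (for example the SendQueue sketched earlier), whereas here a stub queue is used and every name is an assumption of the sketch.

```python
# A minimal sketch of the per-client timed reader: one read per tick, at most
# two unsent packets, so a slow first packet automatically leads to a second
# queued packet that contends for bandwidth. All names are illustrative.
class ClientReader:
    MAX_PENDING = 2          # at most two unsent packets per client

    def __init__(self, cid, priority, data_source, queue):
        self.cid = cid
        self.priority = priority
        self.data_source = data_source   # iterator yielding 1 MB chunks (assumed)
        self.queue = queue               # shared queue with push(priority, cid, chunk)
        self.pending = 0                 # packets queued or in flight, not yet confirmed

    def tick(self):
        """Called once per second by the client's own timer thread."""
        if self.pending < self.MAX_PENDING:
            chunk = next(self.data_source, None)
            if chunk is not None:
                self.queue.push(self.priority, self.cid, chunk)
                self.pending += 1

    def on_sent(self):
        """Network-layer callback: one of this client's packets was delivered."""
        self.pending -= 1

class _StubQueue:                         # stands in for the shared priority queue
    def __init__(self):
        self.items = []
    def push(self, priority, cid, chunk):
        self.items.append((priority, cid, chunk))

q = _StubQueue()
reader = ClientReader("client-1", 1, iter([b"\x00" * 1_000_000] * 3), q)
reader.tick(); reader.tick(); reader.tick()   # the third tick is blocked by the quota
assert len(q.items) == 2
```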
Third, at this moment the total counter shows a recorded value of 3 and the counters of clients 1, 2 and 3 each show 1. The data-reading thread sends three packets, client 1 packet 1, client 1 packet 2 and client 2 packet 1, to the network layer; the total counter then shows 0, the counter of client 1 shows -1, the counter of client 2 shows 0 and the counter of client 3 shows 1, which indicates that client 1 has seized the network resources of client 3. Reckoning by time, after 1 second client 2 and client 3 will both find that their own sending speed has slowed down and will therefore each produce a second packet of their own to contend for resources (the result is shown in Fig. 7). The counter states at this moment are as follows:
Total counter: 0
Client 1 counter: 0
Client 2 counters :-1
Client 3 counters: 1
None of the remaining packets in the queue, client 3 packet 1 and client 3 packet 2, has been sent. By analogy, in the subsequent contention client 3 never affects client 1 or client 2 until the server bandwidth recovers. This realizes the contention policy: a lower-priority user never affects a higher-priority user, thereby achieving the purpose of the bandwidth guarantee.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent replacement or modification made, by anyone familiar with the art and within the technical scope disclosed by the present invention, according to the technical solution and the inventive concept of the present invention shall be covered by the protection scope of the present invention.

Claims (4)

1. A data transmission method for a multiple-client server, comprising a step in which clients request data downloads from the server, characterized in that it further comprises the following steps:
(S1) after the server receives the client requests, assigning each client a priority, from high to low, according to the order in which the requests arrived;
(S2) the server preparing two data packets to be sent for each client and ordering the packets in a priority queue, wherein the packets of higher-priority clients are arranged toward the front;
(S3) the server, according to the available bandwidth, preferentially sending the packets ordered at the front of the priority queue;
(S4) after one of a client's packets has been sent successfully, the server preparing the next packet for that client and inserting it into said priority queue at the position matching that client's priority, so that it is sent ahead of the packets requested by lower-priority clients.
2. The data transmission method for a multiple-client server according to claim 1, characterized in that, in step (S2), the packets to be sent are packets obtained by splitting.
3. The data transmission method for a multiple-client server according to claim 1 or 2, characterized in that
step (S1) further comprises: the server setting up one total counter and setting up one data-sending counter for each client;
step (S4) further comprises: incrementing or decrementing said data-sending counters and said total counter according to the sending state of the packets;
said total counter is used to indicate to the server that it may send the next outgoing packet in said priority queue;
said data-sending counter is used to indicate to the server that it should prepare a new packet to be sent for the client and order it in said priority queue.
4. The data transmission method for a multiple-client server according to claim 3, characterized in that the server allocates a separate sending thread for sending each client's packets: each client is provided with one thread execution interface.
CN200910010075A 2009-01-13 2009-01-13 Data transmission method for multiple-client server Pending CN101778092A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910010075A CN101778092A (en) 2009-01-13 2009-01-13 Data transmission method for multiple-client server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910010075A CN101778092A (en) 2009-01-13 2009-01-13 Data transmission method for multiple-client server

Publications (1)

Publication Number Publication Date
CN101778092A true CN101778092A (en) 2010-07-14

Family

ID=42514422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910010075A Pending CN101778092A (en) 2009-01-13 2009-01-13 Data transmission method for multiple-client server

Country Status (1)

Country Link
CN (1) CN101778092A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102377735A (en) * 2010-08-12 2012-03-14 汉王科技股份有限公司 Multimedia advertisement system and method for controlling playing of multimedia advertisement
CN103023980A (en) * 2012-11-21 2013-04-03 中国电信股份有限公司云计算分公司 Method and system for processing user service request by cloud platform
CN103023980B (en) * 2012-11-21 2016-03-23 中国电信股份有限公司 A kind of method and system of cloud platform processes user service request
CN105471943A (en) * 2014-09-03 2016-04-06 鸿富锦精密工业(深圳)有限公司 Server and method for distributing customer premise equipment to update firmware thereof
CN108735271A (en) * 2017-04-17 2018-11-02 中国科学院微电子研究所 A kind of ECG detecting data management system
CN108124284A (en) * 2017-12-06 2018-06-05 青岛真时科技有限公司 A kind of Bluetooth data transfer method and apparatus
CN109885393A (en) * 2019-01-10 2019-06-14 华为技术有限公司 Read-write requests processing method, device, electronic equipment and storage medium
US11899939B2 (en) 2019-01-10 2024-02-13 Huawei Technologies Co., Ltd. Read/write request processing method and apparatus, electronic device, and storage medium
CN113726682A (en) * 2021-08-30 2021-11-30 北京天空卫士网络安全技术有限公司 Data transmission method and device based on speed limit strategy
CN113726682B (en) * 2021-08-30 2024-05-31 北京天空卫士网络安全技术有限公司 Data transmission method and device based on speed limiting strategy

Similar Documents

Publication Publication Date Title
CN104396200B (en) Ensure predictable and quantifiable networking performance
US9471348B2 (en) Applying policies to schedule network bandwidth among virtual machines
US8462802B2 (en) Hybrid weighted round robin (WRR) traffic scheduling
US9986563B2 (en) Dynamic allocation of network bandwidth
CN103412786B (en) High performance server architecture system and data processing method thereof
CN101778092A (en) Data transmission method for multiple-client server
US8149846B2 (en) Data processing system and method
US20110199899A1 (en) Rate-Adaptive Bundling of Data in a Packetized Communication System
WO2007016708A2 (en) Routing under heavy loading
WO2009073312A2 (en) Network bandwidth detection, distribution and traffic prioritization
MX2015006471A (en) Method and apparatus for controlling utilization in a horizontally scaled software application.
EP4006735B1 (en) Fine grain traffic shaping offload for a network interface card
CN109347757A (en) Message congestion control method, system, equipment and storage medium
Cheng et al. Application-aware SDN routing for big data networking
CN104717189A (en) Network data package sending method and device
JP2012018602A (en) Server, financial information distribution method and program
CN102355422B (en) Multicore, parallel and lock-free quality of service (QOS) flow control method
Griwodz State replication for multiplayer games
US9128771B1 (en) System, method, and computer program product to distribute workload
CN102611924A (en) Flow control method and system of video cloud platform
Banerjee et al. Framework on service based resource selection in cloud computing
Banerjee et al. Experience-based efficient scheduling algorithm (EXES) for serving requests in cloud using SDN controller
Shen et al. Rendering differential performance preference through intelligent network edge in cloud data centers
CN115242727B (en) User request processing method, device, equipment and medium
Chen et al. CQRD: A switch-based approach to flow interference in Data Center Networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20100714