A socket implementation method for an Internet of Things platform
Technical field
The present invention is a design and optimization scheme for the communication module of a Socket server based on an Internet of Things platform, and belongs to the field of Internet of Things communication technology.
Background technology
With the development of Internet of Things technology, sensors and RFID (Radio Frequency Identification) devices are widely used. Data collected by the sensing layer are transferred to the application layer through the communication layer, and the users these data serve also access them through the communication layer. An Internet of Things platform, as the hub of the whole system, must therefore cope both with the mass of data collected by the sensing layer and with a large number of user accesses to the applications on the platform; the communication layer carries the most critical task. A well-designed communication layer handles massive data and access requests with ease, while a poor one may have catastrophic consequences for the whole platform.
As an important inter-process communication mechanism, Socket is widely used in communication scenarios of the C/S (Client/Server) pattern. To cope with an enormous number of connections, the server must introduce multithreading to process connections in parallel. If every new connection is handled by a dynamically created thread, system performance degrades badly, so thread pool technology, which creates a number of threads in advance, arose. Whether the pool size should be fixed has always been a research question: a statically sized pool saves the cost of creating and destroying threads but copes poorly with connection counts far beyond its capacity, and simply enlarging the pool leaves many idle threads occupying considerable system resources; a dynamically sized pool must still keep generating new threads when facing a large number of connections, greatly increasing the system load. The form and capacity of the thread pool should therefore be chosen according to the system's environment.
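In Java, for instance, the two pool forms discussed above correspond roughly to a fixed-size pool and a pool that grows and shrinks on demand. The sketch below is illustrative only (the class and helper method are our own); it shows that a fixed pool never creates more threads than its capacity, however large the burst of tasks:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolKinds {
    // Static-size pool: poolSize threads are created and reused;
    // tasks beyond that wait in the executor's internal queue.
    public static ExecutorService fixedPool(int poolSize) {
        return Executors.newFixedThreadPool(poolSize);
    }

    // Dynamic-size pool: threads are created on demand and destroyed after
    // 60 s idle; a burst of connections spawns a burst of new threads.
    public static ExecutorService dynamicPool() {
        return Executors.newCachedThreadPool();
    }

    // Submit nTasks trivial tasks and report how many threads the pool
    // actually grew to: capped for a fixed pool, unbounded for a cached one.
    public static int largestPoolSizeAfter(ExecutorService pool, int nTasks)
            throws InterruptedException {
        ThreadPoolExecutor tpe = (ThreadPoolExecutor) pool;
        for (int i = 0; i < nTasks; i++) tpe.submit(() -> {});
        tpe.shutdown();
        tpe.awaitTermination(10, TimeUnit.SECONDS);
        return tpe.getLargestPoolSize();
    }
}
```

With 100 tasks a fixed pool of 4 stays at 4 threads, while a cached pool may briefly create far more.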
Current research on dynamic thread pools, at home and abroad, concentrates on three aspects: (1) creating threads in batches rather than one per request when a burst of connections is large, while capping the number of threads in the pool; (2) optimizing the number of worker threads, using statistical principles to predict the number of users at peak periods, a strategy that is relatively simple and fairly reliable; (3) providing several thread pools in one server and assigning work to different pools according to task type and priority.
Summary of the invention
Technical problem: the present invention uses thread pool technology to build a socket server, designs a scheme to support system operation, analyzes the working mechanism of the thread pool in depth, calculates the overhead of a dynamic thread pool when it faces requests exceeding its capacity, proposes storing excess connection requests in a buffer pool instead of dynamically generating additional threads, and then optimizes the overall design to handle common abnormal situations.
Technical scheme: although dynamic thread pools are widely applied, their limitation is as obvious as their advantage: when a large number of connection requests exceeding the pool's capacity arrive at the same time, the overhead of dynamically creating and destroying threads can no longer be ignored. The proposed socket server scheme therefore places a buffer pool in front of the thread pool to receive the excess connections; whenever an idle thread appears in the thread pool, a new connection request is taken out of the buffer pool.
A socket server is used for network communication between two programs: the server must expose its own IP (Internet Protocol) address and a port number, and the client requests a connection to that address and port. The detailed process is:
The server end creates a ServerSocket to listen for client connections; after receiving a request it establishes the connection, takes out the message, processes it, and returns the result. The client sends a connection request to the server and, once the connection is established, sends messages to and receives messages from the server.
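The accept/connect sequence described above can be sketched in Java roughly as follows; an ephemeral port and a line-based echo protocol are assumed purely for illustration:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoPair {
    // Server side: accept one connection on the listening socket, take the
    // message out, "process" it, and return the result to the client.
    public static Thread serveOnce(ServerSocket listener) {
        Thread t = new Thread(() -> {
            try (Socket s = listener.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("echo:" + in.readLine());
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        t.start();
        return t;
    }

    // Client side: request a connection to the server's address and port,
    // then send a message and receive the server's reply.
    public static String roundTrip(int port, String msg) throws IOException {
        try (Socket s = new Socket("127.0.0.1", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        }
    }
}
```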
1. Communication Module Design
This part is the core of the socket server and implements its most important function, communication. The basic design of the server-side communication module is:
To meet the requirement of duplex communication, that is, the server and the client sending and receiving data at the same time, two threads are needed, one for sending and one for receiving. The thread pool holds the receiving threads: after a connection is established, such a thread takes the Socket connection, receives messages, and passes them to the upper layer through the receive message queue; at the same time a sending thread is created, which monitors the send message queue (the interface with the upper layer, holding the results the upper layer has produced), takes out results, and sends them. Meanwhile the receiving thread stays in a blocked state; when the client requests disconnection, the Socket connection is closed. In addition, to prevent the number of connections from growing too large, connections exceeding the number of threads in the pool are put into a buffer pool and taken out again when an idle thread appears.
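A minimal sketch of the hand-off between the buffer pool and the pool of receiving threads is given below, with each accepted connection abstracted as a Runnable handler. The class name and the use of a dedicated dispatcher thread are our own illustrative choices, not prescribed by the scheme: every connection passes through the buffer pool and is handed to a receiving thread as soon as one is idle.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;

public class CommModule {
    private final ExecutorService pool;                       // receiving threads
    private final BlockingQueue<Runnable> bufferPool =
            new LinkedBlockingQueue<>();                      // excess connections wait here
    private final Semaphore freeThreads;                      // counts idle receiving threads

    public CommModule(int poolSize) {
        pool = Executors.newFixedThreadPool(poolSize);
        freeThreads = new Semaphore(poolSize);
        Thread dispatcher = new Thread(() -> {
            try {
                while (true) {
                    Runnable task = bufferPool.take();        // next buffered connection
                    freeThreads.acquire();                    // wait for an idle thread
                    pool.submit(() -> {
                        try { task.run(); }
                        finally { freeThreads.release(); }    // thread becomes idle again
                    });
                }
            } catch (InterruptedException e) { /* shutting down */ }
        });
        dispatcher.setDaemon(true);
        dispatcher.start();
    }

    // Every accepted connection goes through the buffer pool.
    public void dispatch(Runnable connectionHandler) {
        bufferPool.add(connectionHandler);
    }
}
```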
2. thread pool design
Definition 1. The two main overheads in threading are the creation and destruction of threads and the maintenance of threads. Let the first overhead be C1 and the second C2:
C1: the overhead of creating and destroying a thread, mostly the time spent allocating memory for it;
C2: the overhead of maintaining a thread, i.e. the time spent on its context switches;
n: the thread pool size;
r: the number of currently running threads.
In practice C1 >> C2. Table 1 compares system performance with and without a thread pool.
Table 1. Overhead before and after the system introduces a thread pool
| | Overhead with a thread pool | Overhead without a thread pool | Performance gained |
| 0 ≤ r ≤ n | C2·n | C1·r | C1·r − C2·n |
| r > n | C2·n + C1·(r − n) | C1·r | C1·n − C2·n |
Table 1 considers two situations. When the number of live threads is no larger than the pool size (0 ≤ r ≤ n), the overhead of a system with a thread pool is limited to switching between threads, i.e. C2·n, while a system without a pool must create and destroy a thread for each new connection, at a cost of C1·r; the pooled scheme therefore improves performance by C1·r − C2·n.
In the second situation the number of tasks exceeds the pool maximum, and the pool must create new threads for the excess tasks, at a cost of C2·n + C1·(r − n), while the cost without a pool is still C1·r. The pooled scheme improves performance by C1·n − C2·n.
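The two rows of Table 1 can be checked mechanically; the small helper below (names are ours) encodes the overhead formulas and the resulting gain:

```java
public class PoolGain {
    // Overhead with a pool of n threads when r threads are live (Table 1).
    public static double withPool(double c1, double c2, int r, int n) {
        return r <= n ? c2 * n : c2 * n + c1 * (r - n);
    }

    // Overhead without a pool: every connection creates and destroys a thread.
    public static double withoutPool(double c1, int r) {
        return c1 * r;
    }

    // Performance gained by using the pool: the difference of the two.
    public static double gain(double c1, double c2, int r, int n) {
        return withoutPool(c1, r) - withPool(c1, c2, r, n);
    }
}
```

For r ≤ n the gain reduces to C1·r − C2·n, and for r > n to C1·n − C2·n, matching the last column of the table.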
The key issue of the pooled scheme is how to set the pool size. If the pool holds too many threads, the system consumes a great deal of processing and cache resources maintaining idle threads; if it holds too few, it must constantly create new threads and destroy them when tasks finish, and the cost paid may exceed the resources the tasks themselves consume. How to determine the best thread count n is discussed below.
Definition 2. The threads alive in the pool change constantly in a real environment. Let r be the number of live threads and f(r) its distribution law. Combining the gains in Table 1, the expected performance gain of a pool of size n is:

    E(n) = \sum_{r=0}^{n} (C_1 r - C_2 n) f(r) + \sum_{r=n+1}^{\bar{N}} (C_1 n - C_2 n) f(r)    (1)

If the best thread pool size is n*, it is the size that maximizes this expectation:

    n* = \arg\max_{n \in N} E(n)    (2)

where N denotes the set of possible thread counts in the pool and \bar{N} the upper bound of its values. Rewriting (1) in continuous form, with p(r) the probability density function of the number of live threads in the pool:

    E(n) = \int_0^n (C_1 r - C_2 n) p(r) \, dr + \int_n^{\bar{N}} (C_1 n - C_2 n) p(r) \, dr    (3)

To obtain the maximum of E(n), differentiate (3) with respect to n, obtaining (4):

    dE(n)/dn = (C_1 - C_2) \int_n^{\bar{N}} p(r) \, dr - C_2 \int_0^n p(r) \, dr    (4)

Setting (4) to zero and writing \xi = C_2 / C_1 for the ratio of maintaining a live thread to creating a new one gives:

    \int_{n*}^{\bar{N}} p(r) \, dr = \xi    (5)

Because the thread pool size is an integer, the value is determined by:

    n* = \lfloor \hat{n} \rfloor, where \hat{n} is the solution of (5)    (6)

Formula (6) shows that n* is related to \xi: when the thread switching overhead is much smaller than the thread creation and destruction overhead, the pool capacity should be larger. Formulas (5) and (6) also show that n* is related to the current system load p(r).
The Socket design and optimization scheme for an Internet of Things platform of the present invention establishes multithreaded concurrent connections in the communication layer of the platform using a buffer pool and a thread pool, and proposes an efficient Socket server design that minimizes the use of system overhead and cache while coping with large numbers of users and data requests. It uses thread pool technology to build the socket server, designs a scheme to support system operation, analyzes the working mechanism of the thread pool in depth, calculates the overhead of a dynamic thread pool when facing requests exceeding its capacity, proposes storing excess connection requests in a buffer pool instead of dynamically generating additional threads, and then optimizes the overall design to handle common abnormal situations.
Said use of thread pool technology to build the socket server comprises the communication module design and the thread pool design. Although dynamic thread pools are widely applied, their limitation is as obvious as their advantage: when a large number of connection requests exceeding the pool's capacity arrive at the same time, the overhead of dynamically creating and destroying threads can no longer be ignored. The proposed socket server scheme therefore places a buffer pool in front of the thread pool to receive excess connections; whenever an idle thread appears in the thread pool, a new connection request is taken out of the buffer pool.
A socket server is used for network communication between two programs: the server must expose its own IP address and a port number, and the client requests a connection to that address and port. The detailed process is:
The server end creates a ServerSocket to listen for client connections; after receiving a request it establishes the connection, takes out the message, processes it, and returns the result. The client sends a connection request to the server and, once the connection is established, sends messages to and receives messages from the server.
The communication module design is the core of the socket server and implements its most important function, communication. The basic design of the server-side communication module is:
To meet the requirement of duplex communication, that is, the server and the client sending and receiving data at the same time, two threads are needed, one for sending and one for receiving. The thread pool holds the receiving threads: after a connection is established, such a thread takes the Socket connection, receives messages, and passes them to the upper layer through the receive message queue; at the same time a sending thread is created, which monitors the send message queue (the interface with the upper layer, holding the results the upper layer has produced), takes out results, and sends them. Meanwhile the receiving thread stays in a blocked state; when the client requests disconnection, the Socket connection is closed. In addition, to prevent the number of connections from growing too large, connections exceeding the number of threads in the pool are put into a buffer pool and taken out again when an idle thread appears.
The thread pool design is the determination of the thread pool size. The two main overheads in the design are the creation and destruction of threads and the maintenance of threads; let the first be C1 and the second C2. C1 is mostly the time spent allocating memory for a thread, and C2 is the time spent on thread context switches; in practice C1 >> C2. The threads alive in the pool change constantly in a real environment; let r be the number of live threads and f(r) its distribution law. The expected performance gain of a pool of size n is:

    E(n) = \sum_{r=0}^{n} (C_1 r - C_2 n) f(r) + \sum_{r=n+1}^{\bar{N}} (C_1 n - C_2 n) f(r)    (1)

If the best thread pool size is n*, it is the size that maximizes this expectation:

    n* = \arg\max_{n \in N} E(n)    (2)

where N denotes the set of possible thread counts in the pool and \bar{N} the upper bound of its values. Rewriting (1) in continuous form, with p(r) the probability density function of the number of live threads in the pool:

    E(n) = \int_0^n (C_1 r - C_2 n) p(r) \, dr + \int_n^{\bar{N}} (C_1 n - C_2 n) p(r) \, dr    (3)

To obtain the maximum of E(n), differentiate (3) with respect to n, obtaining (4):

    dE(n)/dn = (C_1 - C_2) \int_n^{\bar{N}} p(r) \, dr - C_2 \int_0^n p(r) \, dr    (4)

Setting (4) to zero and writing \xi = C_2 / C_1 for the ratio of maintaining a live thread to creating a new one gives:

    \int_{n*}^{\bar{N}} p(r) \, dr = \xi    (5)

Because the thread pool size is an integer, the value is determined by:

    n* = \lfloor \hat{n} \rfloor, where \hat{n} is the solution of (5)    (6)

Formula (6) shows that n* is related to \xi: when the thread switching overhead is much smaller than the thread creation and destruction overhead, the pool capacity should be larger. Formulas (5) and (6) also show that n* is related to the current system load p(r).
When assessing the system's load capacity, suppose p(r) is uniformly distributed and the number of users is 4000; since creating and destroying a thread takes about 400 ms and a context switch about 20 ms, this gives n* = 3600.
Said scheme supporting system operation is: the system uses a self-defined protocol, the client and server communicate by sending and receiving data packets, and each packet is about 100 bytes. After processing a message, the system returns the result in packet form. To achieve duplex operation, a sending thread and a receiving thread are set up, the sending thread being a child thread of the receiving thread. Messages the server receives are put into the receive message queue and handed to the business layer for processing; when the business layer finishes, each thread goes to the send message queue, takes out its own result, and sends it.
Said common abnormal situations are four: 1) how to mark the tasks of different clients; 2) receiving data may block; 3) read and write operations are still performed after the Socket is closed; 4) unexpected client disconnection.
For how to mark the tasks of different clients, the solution is: since each task belongs to a different client, the client's IP address and port number can be obtained from the Socket object belonging to it; a Mark field whose value is the IP and port of the connection is set. In a while loop, the thread of each task polls the topmost element of the message queue (MessageQueue) and takes out the message as soon as it finds its own Mark; otherwise the thread suspends.
Regarding the possible blocking while receiving data: in this system the client clicks the interface, the program sends a packet to the server, and the server completes the corresponding function; tasks therefore occupy little CPU but often perform blocking I/O (Input/Output) operations. If a user does not click the interface for a long time, a worker thread in the pool is held by that user and cannot run any task; if all threads in the pool are blocked, newly arriving tasks cannot be processed. Moreover, when the sender's data volume is too large, the receiver may not receive it completely.
The solution is: define three variables, nIdx, nTotalLen, and nReadLen (nIdx holds the number of bytes read so far, nTotalLen the total number of bytes to read, and nReadLen the number of bytes read in one loop iteration); the value of nTotalLen is determined by a field in the packet header. A while loop keeps reading the input stream and exits when the end is reached. To avoid busy waiting, an unhealthy use of the CPU, a second while loop suspends any thread that has no data input and wakes it up when input arrives.
Regarding read and write operations still performed after the Socket is closed, the solution is: since closing the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the sending thread has finished. Establishing a notification mechanism, in which the sending thread notifies the receiving thread after it finishes and only then does the receiving thread close the Socket, solves the problem. A protocol field PacSeq (the sequence number of a received request packet) is added: for example, when the receiving thread receives the packet with PacSeq = x, the sending thread sends a notification to the receiving thread after sending back the result for that request, and the receiving thread keeps the Socket connection open until it receives this notification.
For unexpected client disconnection, the solution is: the client sends a heartbeat packet every 5 minutes; at the server end the receiving thread creates a daemon thread responsible for monitoring the heartbeat and counting down. When the receiving thread receives a heartbeat packet it does not process it, but simply notifies the daemon thread to reset the timer and keeps the connection with the client. If the server receives nothing within 5 minutes, the daemon thread sends a notification that wakes the receiving thread, which closes the socket. Since all three threads are suspended during the countdown, no system resources are occupied. A client that gets no feedback within a certain time after sending a request, or that encounters an error, judges that the server has disconnected, and after disconnection it resends the connection request.
Beneficial effects: aiming at the massive demand brought by the Internet of Things, the present invention designs a server communication module with sockets, studies in depth the effect of thread pool settings on system performance, uses a buffer queue to handle threads exceeding the pool capacity, and analyzes and optimizes the problems the system may encounter. Simulation of system performance shows that, up to a point, a thread pool combined with a buffer pool greatly reduces system overhead when facing excessive connections; that short connections should not be used for tasks with small overhead; and that although the buffer pool reduces system overhead while the number of connections exceeding the pool capacity stays small, once that number meets or exceeds a certain threshold the system's response time rises sharply.
Brief description of the drawings
Fig. 1 is the Socket establishment process,
Fig. 2 is the operation flow graph of the communication module,
Fig. 3 is the flow chart for avoiding exceptions from a closed Socket,
Fig. 4 is the heartbeat detection flow chart,
Fig. 5 is the socket connection speed comparison chart of experiment 1,
Fig. 6 is the system BT performance bar chart of experiment 2,
Fig. 7 is the system DT performance bar chart of experiment 2,
Fig. 8 is the system DQL performance bar chart of experiment 2,
Fig. 9 is the system DBT performance bar chart of experiment 2,
Fig. 10 is the comparison chart of system response times in experiment 3.
Embodiment
System operation supporting scheme
The system uses a self-defined protocol, the client and server communicate by sending and receiving data packets, and each packet is about 100 bytes. After processing a message, the system returns the result in packet form. To achieve duplex operation, a sending thread and a receiving thread are set up, the sending thread being a child thread of the receiving thread. Messages the server receives are put into the receive message queue and handed to the business layer for processing; when the business layer finishes, each thread goes to the send message queue, takes out its own result, and sends it. Four situations arise here, with solutions as follows:
Situation one: how to mark the tasks of different clients.
Solution: since each task belongs to a different client, the client's IP address and port number can be obtained from the Socket object belonging to it; a Mark field whose value is the IP and port of the connection is set. In a while loop, the thread of each task polls the topmost element of the message queue (MessageQueue) and takes out the message as soon as it finds its own Mark; otherwise the thread suspends.
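One way to realize the Mark mechanism in Java is sketched below. Instead of all threads polling the head of one shared queue, this variant keeps one queue per Mark, which preserves the behavior (a thread only ever takes messages carrying its own Mark, and suspends otherwise) while avoiding contention; the class and method names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class MarkDispatcher {
    // One queue per client Mark ("ip:port").
    private final Map<String, BlockingQueue<String>> perClient =
            new ConcurrentHashMap<>();

    // Build the Mark field from the connection's remote address and port.
    public static String markOf(java.net.Socket s) {
        return s.getInetAddress().getHostAddress() + ":" + s.getPort();
    }

    // Receiving side: deliver a message under the Mark of its client.
    public void deliver(String mark, String message) {
        perClient.computeIfAbsent(mark, k -> new LinkedBlockingQueue<>())
                 .add(message);
    }

    // A task's thread takes only messages carrying its own Mark,
    // blocking (suspending) when none are available.
    public String takeFor(String mark) throws InterruptedException {
        return perClient.computeIfAbsent(mark, k -> new LinkedBlockingQueue<>())
                        .take();
    }
}
```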
Situation two: receiving data may block.
Solution: in this system the client clicks the interface, the program sends a packet to the server, and the server completes the corresponding function; tasks therefore occupy little CPU but often perform blocking I/O operations. If a user does not click the interface for a long time, a worker thread in the pool is held by that user and cannot run any task; if all threads in the pool are blocked, newly arriving tasks cannot be processed. Moreover, when the sender's data volume is too large, the receiver may not receive it completely.
This problem can be solved by defining three variables, nIdx, nTotalLen, and nReadLen, holding respectively the number of bytes read so far, the total number of bytes to read, and the number of bytes read in one loop iteration; the value of nTotalLen is determined by a field in the packet header. A while loop keeps reading the input stream and exits when the end is reached. To avoid busy waiting, an unhealthy use of the CPU, a second while loop suspends any thread that has no data input and wakes it up when input arrives.
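The nIdx/nTotalLen/nReadLen loop can be sketched as follows, assuming for illustration that the header field holding nTotalLen is a 4-byte integer:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PacketReader {
    // nTotalLen comes from the header field; nIdx counts the bytes read so
    // far; nReadLen is what one read() call returned. The loop runs until
    // the whole packet has arrived, since a single read() may return less
    // than was requested.
    public static byte[] readPacket(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int nTotalLen = din.readInt();   // 4-byte length header (assumed format)
        byte[] buf = new byte[nTotalLen];
        int nIdx = 0;
        while (nIdx < nTotalLen) {
            int nReadLen = din.read(buf, nIdx, nTotalLen - nIdx);
            if (nReadLen < 0) throw new IOException("stream closed mid-packet");
            nIdx += nReadLen;
        }
        return buf;
    }
}
```

A blocking InputStream already suspends the reading thread when no data is available, which matches the "second while loop" role described above.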
Situation three: read and write operations are still performed after the Socket is closed.
Solution: since closing the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the sending thread has finished. Establishing a notification mechanism, in which the sending thread notifies the receiving thread after it finishes and only then does the receiving thread close the Socket, solves the problem. As shown in Fig. 3, a protocol field PacSeq numbering the received request packets is added: for example, when the receiving thread receives the packet with PacSeq = x, the sending thread sends a notification to the receiving thread after sending back the result for that request, and the receiving thread keeps the Socket connection open until it receives this notification.
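A possible realization of the PacSeq notification mechanism is sketched below, with one CountDownLatch per outstanding request; the class name and the bounded wait are our own additions, not part of the original protocol:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class CloseGuard {
    // One latch per outstanding request number (PacSeq).
    private final ConcurrentHashMap<Integer, CountDownLatch> pending =
            new ConcurrentHashMap<>();

    // Receiving thread: register the PacSeq of a request it handed off.
    public void requestReceived(int pacSeq) {
        pending.put(pacSeq, new CountDownLatch(1));
    }

    // Sending thread: after the reply for pacSeq has been written, notify.
    public void replySent(int pacSeq) {
        CountDownLatch l = pending.remove(pacSeq);
        if (l != null) l.countDown();
    }

    // Receiving thread: before closing the Socket, wait until every
    // registered reply has gone out (bounded wait as a safety net).
    public boolean awaitAllReplies(long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        for (CountDownLatch l : pending.values()) {
            long left = deadline - System.currentTimeMillis();
            if (left <= 0 || !l.await(left, TimeUnit.MILLISECONDS)) return false;
        }
        return true;
    }
}
```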
Situation four: unexpected client disconnection.
Solution: this problem can be solved by the algorithm shown in Fig. 4. The client sends a heartbeat packet every 5 minutes; at the server end the receiving thread creates a daemon thread responsible for monitoring the heartbeat and counting down. When the receiving thread receives a heartbeat packet it does not process it, but simply notifies the daemon thread to reset the timer and keeps the connection with the client. If the server receives nothing within 5 minutes, the daemon thread sends a notification that wakes the receiving thread, which closes the socket. Since all three threads are suspended during the countdown, no system resources are occupied. A client that gets no feedback within a certain time after sending a request, or that encounters an error, judges that the server has disconnected, and after disconnection it resends the connection request.
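The countdown-and-reset behavior of the daemon thread can be sketched with a scheduled timer, as below; a ScheduledExecutorService stands in for the daemon thread, and the class name and configurable timeout are illustrative:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class HeartbeatWatchdog {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private final AtomicReference<ScheduledFuture<?>> countdown =
            new AtomicReference<>();
    private final Runnable onTimeout;  // e.g. wake the receiving thread to close the socket
    private final long timeoutMs;      // 5 minutes in the scheme above

    public HeartbeatWatchdog(long timeoutMs, Runnable onTimeout) {
        this.timeoutMs = timeoutMs;
        this.onTimeout = onTimeout;
        reset();
    }

    // Called by the receiving thread whenever a heartbeat packet arrives:
    // cancel the running countdown and start a fresh one.
    public final void reset() {
        ScheduledFuture<?> old = countdown.getAndSet(
                timer.schedule(onTimeout, timeoutMs, TimeUnit.MILLISECONDS));
        if (old != null) old.cancel(false);
    }

    public void shutdown() {
        timer.shutdownNow();
    }
}
```

Between heartbeats the timer thread simply sleeps, matching the observation that the countdown stage occupies no resources.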
We use the LoadRunner software for simulation testing. LoadRunner is a load testing tool that predicts system behavior and performance; it can generate virtual users and simulate the business operations of real users. In the experiments a socket server was built with a basic configuration of a dual-core 2.4 GHz CPU and 4 GB of memory; the database is Oracle 11g.
Experiment 1 investigates the time required to establish a socket connection. Experiment 2 analyzes system performance when, with 4000 connections and a thread pool capacity of 3600, the system dynamically generates new threads for the excess connections. Experiment 3 introduces the buffer pool on the basis of experiment 2 and observes the change in system performance.
Experiment 1: socket connection speed test
A socket takes a relatively long time to establish a connection; if the server adopts short connections, the connection time itself may not be negligible. To verify socket connection speed, 100 virtual users were generated to request connections to the server. The client program performs two actions:
1) attempt to connect to the server;
2) after a successful connection, send a packet and display the server's feedback. The server end also performs two actions: 1) listen for client connection requests; 2) after a successful connection, send back the packet the client sent.
The running situation of the virtual users is shown in Fig. 5, which shows the speed at which users connect; the vertical axis is the number of virtual users and the horizontal axis the running time. The blue line represents the number of running virtual users and the green line the number that have finished. It can be seen that 5 seconds elapsed from the first user starting to connect to the last user finishing, and that most users only began sending data, after connecting successfully, from the 4th second onward; in other words, connecting took four fifths of the time of the whole operation.
This experiment shows that when the overhead of the task itself is small and similar tasks may occur frequently, the server and client should not use short connections; instead, the connection should be kept for a period after each task completes, to avoid the system overhead of repeatedly disconnecting and reconnecting.
Experiment 2: comparison test of server performance parameters
To verify the performance impact of the system dynamically generating new threads once the number of connections exceeds the thread pool capacity, a buffer pool is set up between the thread pool and the accept method that listens for connections; excess connections are deposited in the pool, and when the thread pool has a vacant thread a connection is fetched from the pool and run.
System performance with connections exceeding the thread pool capacity is examined with and without the buffer pool. Four experiments are carried out in all: the thread pool is set to 3600, and the number of connections is set to 3600, 3800, 4000, and 4200 respectively. Without the buffer pool, the system dynamically generates new threads for requests exceeding the pool capacity. To understand the memory and disk situation of the server, the parameters in Table 2 are observed:
Table 2. Server performance parameters
The content of this experiment is the same as experiment 1, but to increase the pressure on the server the whole process is iterated: after the client completes its operation it disconnects, then requests a connection again and performs the same operation, continuing for five minutes. The results are shown in Fig. 6, the system's network throughput (BT) under these conditions. When the number of connections equals the thread pool capacity, the presence or absence of the buffer pool has little effect on throughput. When the number of connections rises to 3800, the system with the buffer pool need not create threads in real time and its resources go mainly to data transfer, so its throughput is higher than that of the system without one. At 4000 connections the two throughputs are roughly level, and at 4200 connections the throughput of the system without the buffer pool exceeds that of the system with it by nearly 25%, showing that the buffer pool cannot improve processing efficiency when facing far too many connections.
Figs. 7 to 9 show the change in the system's disk operation before and after the buffer pool is added. It can be seen that without the buffer pool the system's DT, DQL, and DBT overheads grow with the number of connections; at 4200 connections they have risen by 26.6%, 20.3%, and 45.9% respectively compared with the initial 3600 connections. This is mainly because the system keeps generating and destroying threads in real time and the total number of running threads grows larger and larger, consuming a large amount of memory, so the system has to use the disk as virtual memory and the burden on the disk rises accordingly. After the buffer pool is added, although throughput is not ideal at 4200 connections, the system overhead remains basically steady: the number of running threads falls, reducing thread-switching overhead, and because requests exceeding the pool capacity are deposited in the buffer pool there is no need to dynamically generate and destroy threads, greatly reducing the system load.
It follows that connections that exceed the thread pool capacity, but not by much, should be put into the buffer pool rather than handled by dynamically generated threads; otherwise system performance drops sharply, because of the cost of creating and destroying threads and of the context switching needed to maintain them. Rashly increasing the thread count to admit more connections, although it raises processing capacity, is likely to degrade system performance. Throughout, operations on the disk should be avoided as far as possible, because they greatly drag down the system's running efficiency.
Experiment 3: system response time comparison test
Experiment 2 showed that placing connections that exceed the thread pool capacity in the buffer pool, to be processed when an idle thread appears, reduces system overhead; but the system reacts more slowly to new connections when their number is too large, and a new user may have to wait a long time to connect. To find the inflection point of the system response time, the above experiment was repeated while the number of connection requests was increased continuously and the response time observed; the results are shown in Fig. 10. The horizontal axis of Fig. 10 is the number of concurrent connections borne by the system and the vertical axis the system's response time to a connection request. At 3600 requests, the presence or absence of the buffer pool has little effect on performance, because no new threads are generated. At 3800 requests the response time with the buffer pool is shorter, because a new connection waits in the pool for less time than it takes to create a new thread. But at 4000 requests the response time of the system with the buffer pool rises sharply to 497 ms, a nonlinear growth not proportional to the number of connections, while the response time without the buffer pool hardly changes, remaining at 422 ms; this is because creating new threads only increases system overhead and does not slow the processing of each task. At 4200 connections the response time with the buffer pool is 885 ms, far above the 442 ms without it.
Inventive point 1: using thread pool technology to build the socket server
Communication Module Design
To meet the requirement of duplex communication, that is, the server and the client sending and receiving data at the same time, two threads are needed, one for sending and one for receiving. The thread pool of Fig. 2 holds the receiving threads: after a connection is established, such a thread takes the Socket connection, receives messages, and passes them to the upper layer through the receive message queue; at the same time a sending thread is created, which monitors the send message queue (the interface with the upper layer, holding the results the upper layer has produced), takes out results, and sends them. Meanwhile the receiving thread stays in a blocked state; when the client requests disconnection, the Socket connection is closed. In addition, to prevent the number of connections from growing too large, connections exceeding the number of threads in the pool are put into a buffer pool and taken out again when an idle thread appears.
Thread pool design
There are two main costs in thread pool design: the creation and destruction of threads, and the maintenance of threads. Let the first cost be C1 and the second C2. C1 is mostly the time spent allocating memory for a thread, and C2 is the context-switching time of a thread; in practice C1 >> C2. The number of surviving threads in the pool changes constantly in a real environment. Let r be the number of surviving threads (the thread pool size) and f(r) its distribution law; the expectation of the thread pool size n is given by equation (1). If the best thread pool size is n*, its expectation is given by equation (2), where N denotes the set of possible thread pool sizes and the bound is the upper bound of E(n). Rewriting (1) with p(r), the probability density function of the number of surviving threads in the pool, gives (3). To obtain the extremum of E(n), differentiating (3) yields (4). Rewriting (4) with ξ = C2/C1, the ratio of the cost of maintaining an active thread to that of creating a new one, gives (5). Because the thread pool size is an integer, its value is determined by (6). These formulas show that n* depends on ξ: the smaller the thread-switching cost is relative to the cost of thread creation and destruction, the larger the thread pool capacity. Equations (5) and (6) also show that n* depends on the current system load p(r). When assessing system load capacity, suppose p(r) is uniformly distributed and the number of users is 4000; since creating and destroying a thread takes about 400 ms and a context switch about 20 ms, we obtain n* = 3600.
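The bodies of equations (1)–(6) are not reproduced in this text. As a hedged sketch only, one cost model consistent with the surrounding definitions (the patent's exact functional form is an assumption) is:

```latex
% Expected overhead of a pool of n threads, with p(r) the density of the
% surviving-thread count: creation cost C1 for shortfall, upkeep cost C2 for idle.
E(n) = C_1 \int_{n}^{\infty} (r-n)\,p(r)\,dr \;+\; C_2 \int_{0}^{n} (n-r)\,p(r)\,dr

% Setting dE/dn = 0, with F the cumulative distribution of p:
-\,C_1\bigl(1 - F(n^*)\bigr) + C_2\,F(n^*) = 0
\;\Longrightarrow\;
F(n^*) = \frac{1}{1+\xi}, \qquad \xi = C_2 / C_1
```

Under this form, a small ξ (switching far cheaper than creation) pushes F(n*) toward 1 and hence n* toward its upper bound, matching the qualitative claims above; the integer pool size is then obtained by rounding.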
Inventive point 2, marking the tasks of different clients
Because each task belongs to a different client, the client's IP address and port number can be obtained from the Socket object associated with that client; a Mark field is set whose value is the IP address and port number of the connection. In a while loop, the thread handling each task polls the topmost element of the message queue MessageQueue and, on finding its own Mark, takes the message out; otherwise the thread suspends.
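A condition variable reproduces the described behaviour (threads suspend rather than spin when the head message is not theirs). The class and method names are illustrative, not from the patent.

```python
import threading
from collections import deque

class MarkedQueue:
    """Message queue whose entries carry a client mark, e.g. an (ip, port)
    pair taken from the Socket object. Sketch of the Mark-field scheme."""
    def __init__(self):
        self._items = deque()
        self._cond = threading.Condition()

    def put(self, mark, message):
        with self._cond:
            self._items.append((mark, message))
            self._cond.notify_all()        # wake threads waiting on their mark

    def take_for(self, mark):
        """Block until the topmost message belongs to `mark`, then take it."""
        with self._cond:
            while not (self._items and self._items[0][0] == mark):
                self._cond.wait()          # suspend instead of busy-polling
            return self._items.popleft()[1]
```

Only the thread whose Mark matches the head element proceeds; all others remain suspended until notified, as in the text.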
Inventive point 3, solving possible blocking while receiving data
In the whole system, the client clicks the interface and the program sends a packet to the server, which then carries out the corresponding function; the tasks therefore occupy little CPU but frequently perform blocking I/O operations. If a user does not click the interface for a long time, a worker thread in the pool is occupied by that user the whole time and cannot execute any other task. If all threads in the pool are in a blocked state, newly arriving tasks cannot be processed. Moreover, when the sender's data volume is too large, the receiver may be unable to receive it completely.
To address this, three variables are defined: nIdx, nTotalLen and nReadLen, holding respectively the number of bytes read so far, the total number of bytes to read, and the number of bytes read in one loop iteration; the value of nTotalLen is determined by a field in the packet header. A while loop keeps reading the input stream and exits when the end is reached. To avoid busy waiting, an unhealthy way of using the CPU, a second while loop here suspends any thread that has no data input and wakes it again when input arrives.
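The counting scheme can be sketched as below. The 4-byte big-endian length prefix standing in for the packet-header field that carries nTotalLen is an assumption for illustration; a blocking `recv` already suspends the thread until data arrives, so no busy-wait loop is needed in this sketch.

```python
import socket
import struct

def read_packet(conn):
    """Read one complete packet using the nIdx / nTotalLen / nReadLen scheme.
    Assumes (for illustration) a 4-byte big-endian length header."""
    header = b""
    while len(header) < 4:                        # first read the length field
        chunk = conn.recv(4 - len(header))
        if not chunk:
            raise ConnectionError("peer closed during header")
        header += chunk
    n_total_len = struct.unpack(">I", header)[0]  # total bytes to read
    buf = bytearray(n_total_len)
    n_idx = 0                                     # bytes read so far
    while n_idx < n_total_len:                    # loop until packet complete
        n_read_len = conn.recv_into(memoryview(buf)[n_idx:])
        if n_read_len == 0:                       # bytes read this iteration
            raise ConnectionError("peer closed mid-packet")
        n_idx += n_read_len
    return bytes(buf)
```

Reading the declared length up front also handles the case where a large send arrives split across several TCP segments.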
Inventive point 4, preventing read/write operations on a Socket after it is closed
Because closing the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the send thread has finished. Establishing a notification mechanism, in which the send thread notifies the receiving thread once it has finished and only then does the receiving thread close the Socket, solves this problem. As shown in Figure 3, a protocol field PacSeq is added to number the incoming request packets: for example, when the receiving thread receives the packet with PacSeq = x, the send thread, after sending back the result required by that request packet, notifies the receiving thread, and until it receives this notification the receiving thread keeps the Socket connection open.
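One way to realise the PacSeq notification, sketched with an `Event` per sequence number. The class is an illustration of the mechanism, not the patent's code.

```python
import threading

class CloseCoordinator:
    """The receiving thread defers closing the Socket until the send thread
    confirms the reply for the given PacSeq was sent (illustrative sketch)."""
    def __init__(self):
        self._sent = {}                   # PacSeq -> Event

    def register(self, pac_seq):
        """Receiving thread: record an incoming request packet's PacSeq."""
        self._sent[pac_seq] = threading.Event()

    def notify_sent(self, pac_seq):
        """Send thread: the reply for pac_seq has been sent back."""
        self._sent[pac_seq].set()

    def wait_and_close(self, pac_seq, sock, timeout=5.0):
        """Receiving thread: close only after the notification arrives."""
        if self._sent[pac_seq].wait(timeout):
            sock.close()                  # safe: the reply is already out
            return True
        return False                      # timed out; caller decides what to do
```
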
Inventive point 5, handling unexpected client disconnection
This problem is solved by the algorithm shown in Figure 4. The client sends a heartbeat packet every 5 minutes; on the server, the receiving thread creates a daemon thread responsible for monitoring this heartbeat and running a countdown. When the receiving thread receives a heartbeat packet it does not process it, but simply notifies the daemon thread to reset the timer and then continues to keep the connection with the client. If the server receives nothing within 5 minutes, the daemon thread sends a notification that wakes the receiving thread, which closes the socket. Because all three threads are in a suspended state during the countdown, no system resources are occupied. On the client side, if no feedback is received within a certain time after a request is sent, or an exception occurs, the server is deemed disconnected; after disconnection, the client resends a connection request.
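The server-side watchdog can be sketched with `threading.Timer`, which sleeps (rather than spins) during the countdown. The class name and interval parameter are illustrative; the patent's interval is 5 minutes.

```python
import threading

class HeartbeatWatchdog:
    """Close the connection if no heartbeat arrives within `interval`
    seconds; each heartbeat resets the countdown (illustrative sketch)."""
    def __init__(self, sock, interval=300.0):
        self._sock = sock
        self._interval = interval
        self._lock = threading.Lock()
        self._timer = None

    def start(self):
        self._arm()

    def heartbeat(self):
        """Receiving thread got a heartbeat packet: reset the countdown."""
        self._arm()

    def _arm(self):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()      # cancel the previous countdown
            self._timer = threading.Timer(self._interval, self._expired)
            self._timer.daemon = True     # acts as the daemon thread
            self._timer.start()

    def _expired(self):
        self._sock.close()                # no heartbeat in time: disconnect
```

The timer thread sleeps until it fires or is cancelled, so the countdown consumes no CPU, matching the suspended-state claim above.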