CN103338156A - Thread pool based named pipe server concurrent communication method - Google Patents
- Publication number
- CN103338156A CN103338156A CN201310240673XA CN201310240673A CN103338156A CN 103338156 A CN103338156 A CN 103338156A CN 201310240673X A CN201310240673X A CN 201310240673XA CN 201310240673 A CN201310240673 A CN 201310240673A CN 103338156 A CN103338156 A CN 103338156A
- Authority
- CN
- China
- Prior art keywords
- thread pool
- message
- thread
- client
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a thread-pool-based named pipe server concurrent communication method comprising the following steps: 1, a read thread at the server end receives operation request instructions from clients; 2, the read thread packs each received operation request into an input message, which is saved in an input message buffer; 3, a business logic thread pool takes one input message and performs the corresponding operation according to the operation type defined in the message; after the operation completes, the thread pool packs the corresponding operation result into an output message, which is saved in an output message buffer; and 4, when a write thread at the server end detects output messages in the output message buffer, it takes an output message from the queue and returns the corresponding operation result to the corresponding client according to the communication identifier in the message. By using two threads to handle data reading and sending between clients and the server, the invention ensures the reliability and real-time performance of the system.
Description
Technical field
The present invention relates to a concurrent server communication method based on the client/server model, and belongs to the technical field of computer software.
Background technology
Most current application software systems adopt a two-tier structure based on the client/server (Client/Server) model. This architecture makes full use of the hardware environment at both ends and distributes tasks reasonably between client and server, thereby reducing the overall overhead of the system. For a given hardware configuration, the quality of the server software is critical to the performance of the whole application system.
The performance of a server is reflected in the following aspects: first, the speed with which the server responds to clients, i.e. the real-time performance of communication; second, the concurrency of the server, i.e. how many clients it can support accessing it simultaneously; third, the security of the server.
In terms of communication mode, software systems based on the client/server model can be implemented in the following ways:
1. A communication mechanism based on TCP or UDP, realizing the interaction between client and server.
2. Named pipes, through which the client communicates with the server end.
At present, most server-side application software adopts the socket approach, selecting TCP or UDP as the transport-layer protocol to communicate with clients.
TCP is a connection-oriented network communication mode: a connection with the target machine must be established before any subsequent communication can take place. Establishing a TCP connection requires a three-way handshake, and closing a connection requires a four-way handshake. Each TCP segment adds a 20-byte header before the data, so the overhead is large; TCP transmission is therefore slower and demands more system resources.
UDP is a connectionless network communication mode: the source and destination do not establish a connection before transmitting data. A UDP header is only 8 bytes, so the overhead is small and transmission is faster than TCP, but the order of the data is not guaranteed and packets may be lost.
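The header sizes above drive the overhead comparison in the background discussion. A small illustration of the per-packet overhead fraction, using a hypothetical payload size not taken from the patent:

```python
# Per-packet header overhead for the two transport protocols discussed
# above (20-byte TCP header vs. 8-byte UDP header). The payload size is
# an illustrative example value.
TCP_HEADER = 20  # bytes
UDP_HEADER = 8   # bytes

def overhead_ratio(header: int, payload: int) -> float:
    """Fraction of each packet consumed by the header."""
    return header / (header + payload)

payload = 100  # bytes, example
print(f"TCP overhead: {overhead_ratio(TCP_HEADER, payload):.1%}")
print(f"UDP overhead: {overhead_ratio(UDP_HEADER, payload):.1%}")
```

For small payloads the difference is substantial, which is why the patent argues that avoiding the datagram header entirely (as named pipes do) speeds up transmission.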
Thus the socket approach, whether using TCP or UDP, has advantages in each case but also shortcomings that cannot be ignored. To guarantee reliable data transmission while retaining high transmission speed, the present invention adopts communication based on named pipes: no extra datagram header needs to be added, so transmission is faster, and compared with the TCP/IP protocol it offers better transmission security and reliability.
Summary of the invention
The technical problem to be solved by the invention is to provide a network interaction method between client and server that can improve the real-time performance, concurrency and security of the server.
To solve the above technical problem, the invention provides a thread-pool-based named pipe server concurrent communication method, characterized by the following steps:
1) the client sends a request;
2) a read thread at the server end receives the operation request command from the connecting client;
3) the read thread packs the received operation request into an input message and saves it in the input message buffer;
4) when the input message buffer is not empty, the business logic thread pool takes an input message from it and performs the corresponding operation according to the operation type defined in the message; after the operation completes, the thread pool packs the corresponding operation result into an output message and saves it in the output message buffer;
5) when the write thread at the server end detects an output message in the output message buffer, it takes the output message from the queue and, according to the communication identifier in the message, returns the corresponding operation result to the corresponding client;
6) the client receives the reply.
The aforesaid thread-pool-based named pipe server concurrent communication method is characterized in that: the input message buffer and the output message buffer are queue data structures, processing the various requested operations in first-in-first-out order.
The aforesaid thread-pool-based named pipe server concurrent communication method is characterized in that: each input message contains the named pipe kernel instance corresponding to the client and the requested operation type; each output message contains the named pipe kernel instance corresponding to the client and the operation return result.
The aforesaid thread-pool-based named pipe server concurrent communication method is characterized in that: the thread pool dynamically adjusts its own capacity according to the number of messages in the input message buffer. Supposing the maximum capacity of the thread pool is Tmax, the number of messages is N, and the maximum number of messages the buffer can store is Nmax, then the capacity of the thread pool is T = (Tmax/Nmax) * N.
The aforesaid thread-pool-based named pipe server concurrent communication method is characterized in that: when the thread pool is initialized, the maximum capacity Tmax of the thread pool is calculated from the CPU count N1 and core count N2 of the server hardware as Tmax = 2*N1 + N2 + 1.
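The two capacity rules can be sketched as follows. The patent states only the formulas T = (Tmax/Nmax)*N and Tmax = 2*N1 + N2 + 1; the rounding rule and the floor of one thread below are assumptions added for a working sketch.

```python
import math

def pool_max_capacity(n_cpus: int, n_cores: int) -> int:
    """Maximum pool capacity, fixed at initialization: Tmax = 2*N1 + N2 + 1."""
    return 2 * n_cpus + n_cores + 1

def pool_capacity(t_max: int, n_queued: int, n_max: int) -> int:
    """Current pool capacity, scaled by buffer occupancy: T = (Tmax/Nmax) * N.

    Rounding down and keeping at least one thread are assumptions,
    not specified by the patent.
    """
    return max(1, math.floor(t_max / n_max * n_queued))

t_max = pool_max_capacity(2, 8)          # e.g. 2 CPUs, 8 cores in total -> Tmax = 13
current = pool_capacity(t_max, 50, 100)  # buffer half full -> pool runs at ~half capacity
```

Capping Tmax by the hardware parallelism prevents the unrestricted thread growth the description warns about, while the occupancy-proportional rule shrinks the pool when the request load is light.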
The beneficial effects of the invention are as follows: in the thread-pool-based named pipe server concurrent communication method of the invention, two threads respectively handle data reading and sending between clients and the server, guaranteeing the reliability and real-time performance of the system; the input/output buffering technique effectively avoids system blocking caused by slow remote communication; and the business logic layer handles large volumes of business logic operations with a thread pool, dynamically adjusting the number of threads according to the request volume to optimize system performance, which greatly improves the concurrency of the system while also reducing the CPU load added by thread creation and destruction.
Description of drawings
Fig. 1 is the physical structure diagram of the network communication.
Fig. 2 is the system architecture diagram.
Fig. 3 is the flow chart of the read thread.
Fig. 4 is the flow chart of the write thread.
Fig. 5 is the flow chart of the thread pool.
Embodiment
For an application software system with a C/S architecture, the communication mode of named pipes is adopted to realize network interaction between client and server. The physical structure of the network communication is shown in Fig. 1.
At the server end of the application system, the high-performance server proposed by the invention mainly consists of a communication interface layer, a message format layer and a business logic layer; the system architecture is shown in Fig. 2.
The interface layer consists of a read thread and a write thread at the server end. Message data is read and sent asynchronously: two independent threads respectively handle reading client messages and sending server messages. The read thread is mainly responsible for listening for named pipe connection requests from clients and receiving each client's request commands; it then combines each request command with the corresponding client communication identifier to generate an input message, which is saved in the input message buffer. The write thread periodically takes the messages waiting to be sent from the output message buffer and forwards them to the corresponding clients.
The message format layer comprises the input message buffer and the output message buffer. Both are queue data structures, processing the various requested operations in first-in-first-out order. The input buffer is mainly used to buffer the request commands sent by clients, and the output buffer holds the reply messages waiting to be sent back to clients. The input/output buffering technique effectively avoids system blocking caused by slow remote communication.
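A minimal sketch of the two FIFO buffers, using Python's thread-safe `queue.Queue` as the queue structure (the buffer names and message tuples are illustrative, not from the patent):

```python
import queue

# The two message buffers of the message format layer: both are FIFO
# queues shared between the interface layer and the business logic layer.
input_buffer: queue.Queue = queue.Queue(maxsize=100)   # requests from clients
output_buffer: queue.Queue = queue.Queue(maxsize=100)  # replies awaiting the write thread

# Requests are buffered in arrival order and processed first-in-first-out:
input_buffer.put(("client-1", "QUERY"))
input_buffer.put(("client-2", "UPDATE"))
first = input_buffer.get()   # the earliest request comes out first
```

`queue.Queue` already provides the blocking and locking a multi-threaded server needs, so the read thread, pool workers and write thread can share these buffers without extra synchronization.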
Message format: each input message should contain the named pipe kernel instance corresponding to the client and the requested operation type; each output message should contain the named pipe kernel instance corresponding to the client and the operation return result.
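The two message records can be sketched as plain data classes. Here `pipe_instance` stands for the per-client named pipe kernel instance used to route replies; the class and field names are illustrative, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class InputMessage:
    pipe_instance: Any  # identifies the client connection the request arrived on
    op_type: str        # requested operation type

@dataclass
class OutputMessage:
    pipe_instance: Any  # same identifier, echoed back so the write thread can route
    result: Any         # operation return result
```

Carrying the pipe instance through both messages is what lets the single write thread deliver each result to the correct client, as described for step 5.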
The business logic layer is served by a thread pool. It mainly takes the corresponding operation commands from the input message buffer, performs different business logic operations according to the different commands, and then saves the results that need to be returned in the output message buffer. Because a thread pool is used, the concurrency of transaction processing is improved while the CPU load added by thread creation and destruction is reduced.
In a real operating environment, the number of client operation requests varies dynamically, so the operating load at the server end is sometimes light and sometimes heavy. To improve the adaptability of the system, the thread pool can dynamically adjust its own capacity (i.e. the number of threads) according to the number of messages in the input message buffer. At the same time, to prevent unrestricted thread growth from degrading system performance, the maximum capacity of the thread pool should be calculated from the CPU count and core count of the server hardware when the thread pool is initialized.
Figs. 3-5 are the flow charts of each thread of the invention. To describe the implementation of the system in more detail, the specific embodiment of the invention is elaborated below with reference to the accompanying drawings.
Taking a single client request/reply as an example, the data flow and the concrete steps of transaction processing are as follows:
1. The client sends a request.
2. A read thread at the server end receives the operation request command from the connecting client.
3. The read thread packs the received operation request into an input message and saves it in the input message buffer.
4. When the input message buffer is not empty, the business logic thread pool takes an input message from it and performs the corresponding operation according to the operation type defined in the message. After the operation completes, the thread pool packs the corresponding operation result into an output message and saves it in the output message buffer.
5. When the write thread at the server end detects an output message in the output message buffer, it takes the output message from the queue and, according to the communication identifier in the message, returns the corresponding operation result to the corresponding client.
6. The client receives the reply.
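The six steps above can be sketched end to end in a few dozen lines. In this sketch, `queue.Queue` stands in both for the named pipe (which is Windows-specific) and for the two message buffers, and the handler table and all names are illustrative assumptions, not the patent's implementation:

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

client_pipe = queue.Queue()   # stands in for the named pipe, client -> server
reply_pipe = queue.Queue()    # stands in for the reply direction, server -> client
input_buffer = queue.Queue()  # FIFO buffer filled by the read thread
output_buffer = queue.Queue() # FIFO buffer drained by the write thread

# Hypothetical business logic: one handler per operation type.
HANDLERS = {"ECHO": lambda payload: payload.upper()}

def read_thread():
    client_id, op, payload = client_pipe.get()        # step 2: receive the request
    input_buffer.put((client_id, op, payload))        # step 3: pack and buffer it

def worker():
    client_id, op, payload = input_buffer.get()       # step 4: take one input message
    output_buffer.put((client_id, HANDLERS[op](payload)))  # run the operation, buffer result

def write_thread():
    client_id, result = output_buffer.get()           # step 5: route the reply
    reply_pipe.put((client_id, result))

threading.Thread(target=read_thread).start()
pool = ThreadPoolExecutor(max_workers=2)              # the business logic thread pool
pool.submit(worker)
threading.Thread(target=write_thread).start()

client_pipe.put(("client-1", "ECHO", "hello"))        # step 1: the client sends a request
reply = reply_pipe.get(timeout=5)                     # step 6: the client receives the reply
pool.shutdown()
```

Each stage blocks on its input queue and wakes only when work arrives, which is how the design decouples slow client communication (the two I/O threads) from business processing (the pool), as the description claims.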
Claims (5)
1. A thread-pool-based named pipe server concurrent communication method, characterized by the following steps:
1) the client sends a request;
2) a read thread at the server end receives the operation request command from the connecting client;
3) the read thread packs the received operation request into an input message and saves it in the input message buffer;
4) when the input message buffer is not empty, the business logic thread pool takes an input message from it and performs the corresponding operation according to the operation type defined in the message; after the operation completes, the thread pool packs the corresponding operation result into an output message and saves it in the output message buffer;
5) when the write thread at the server end detects an output message in the output message buffer, it takes the output message from the queue and, according to the communication identifier in the message, returns the corresponding operation result to the corresponding client;
6) the client receives the reply.
2. The thread-pool-based named pipe server concurrent communication method according to claim 1, characterized in that: the input message buffer and the output message buffer are queue data structures, processing the various requested operations in first-in-first-out order.
3. The thread-pool-based named pipe server concurrent communication method according to claim 2, characterized in that: each input message contains the named pipe kernel instance corresponding to the client and the requested operation type; each output message contains the named pipe kernel instance corresponding to the client and the operation return result.
4. The thread-pool-based named pipe server concurrent communication method according to claim 1, characterized in that: the thread pool dynamically adjusts its own capacity according to the number of messages in the input message buffer; supposing the maximum capacity of the thread pool is Tmax, the number of messages is N, and the maximum number of messages the buffer can store is Nmax, then the capacity of the thread pool is T = (Tmax/Nmax) * N.
5. The thread-pool-based named pipe server concurrent communication method according to claim 4, characterized in that: when the thread pool is initialized, the maximum capacity Tmax of the thread pool is calculated from the CPU count N1 and core count N2 of the server hardware as Tmax = 2*N1 + N2 + 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310240673.XA CN103338156B (en) | 2013-06-17 | 2013-06-17 | A kind of name pipeline server concurrent communication method based on thread pool |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103338156A true CN103338156A (en) | 2013-10-02 |
CN103338156B CN103338156B (en) | 2016-08-24 |
Family
ID=49246256
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310240673.XA Active CN103338156B (en) | 2013-06-17 | 2013-06-17 | A kind of name pipeline server concurrent communication method based on thread pool |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103338156B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101043389A (en) * | 2007-04-20 | 2007-09-26 | 北京航空航天大学 | Control system of grid service container |
CN101202704A (en) * | 2007-09-07 | 2008-06-18 | 深圳市同洲电子股份有限公司 | Method and system for transmitting data |
CN101968748A (en) * | 2010-09-17 | 2011-02-09 | 北京星网锐捷网络技术有限公司 | Multithreading data scheduling method, device and network equipment |
CN102043675A (en) * | 2010-12-06 | 2011-05-04 | 北京华证普惠信息股份有限公司 | Thread pool management method based on task quantity of task processing request |
CN102929619A (en) * | 2012-10-19 | 2013-02-13 | 南京国电南自美卓控制系统有限公司 | Process automation software development system across hardware platform |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103888454A (en) * | 2014-03-14 | 2014-06-25 | 浪潮电子信息产业股份有限公司 | Network communication module processing frame based on queue |
CN104199729A (en) * | 2014-08-27 | 2014-12-10 | 深圳市九洲电器有限公司 | Resource management method and system |
WO2016029778A1 (en) * | 2014-08-27 | 2016-03-03 | 深圳市九洲电器有限公司 | Resource management method and system |
CN104199729B (en) * | 2014-08-27 | 2018-07-10 | 深圳市九洲电器有限公司 | A kind of method for managing resource and system |
WO2016155238A1 (en) * | 2015-03-27 | 2016-10-06 | 中兴通讯股份有限公司 | File reading method in distributed storage system, and server end |
CN106161503A (en) * | 2015-03-27 | 2016-11-23 | 中兴通讯股份有限公司 | File reading in a kind of distributed memory system and service end |
CN104702627A (en) * | 2015-04-01 | 2015-06-10 | 南京天溯自动化控制系统有限公司 | Packet classification-based synchronous concurrent communication method and system |
CN104702627B (en) * | 2015-04-01 | 2017-12-26 | 南京天溯自动化控制系统有限公司 | A kind of synchronous concurrent communication method and system based on message classification |
CN106095597A (en) * | 2016-05-30 | 2016-11-09 | 深圳市鼎盛智能科技有限公司 | Client data processing method and processing device |
CN106095597B (en) * | 2016-05-30 | 2017-09-26 | 深圳市鼎盛智能科技有限公司 | Client data processing method and processing device |
Also Published As
Publication number | Publication date |
---|---|
CN103338156B (en) | 2016-08-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address | ||
Address after: 210032 Jiangsu province Nanjing city Pukou high tech Zone Huidalu No. 9 Patentee after: Nanjing Guodian Nanzi 710086 Automation Co. Ltd. Address before: Nanjing City, Jiangsu province 210032 Spark Road, Pukou hi tech Development Zone No. 8 Patentee before: Nanjing Guodian Nanzi Meizhuo Control System Co.,Ltd. |