CN103338156B - Thread-pool-based named-pipe server concurrent communication method - Google Patents

Thread-pool-based named-pipe server concurrent communication method

Info

Publication number
CN103338156B
CN103338156B (application CN201310240673.XA)
Authority
CN
China
Prior art keywords
message
thread pool
client
thread
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310240673.XA
Other languages
Chinese (zh)
Other versions
CN103338156A (en)
Inventor
廖环宇
吴胜华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Guodian Nanzi Automation Co., Ltd.
Original Assignee
NANJING GUODIAN NANZI MEIZHUO CONTROL SYSTEM CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NANJING GUODIAN NANZI MEIZHUO CONTROL SYSTEM CO Ltd
Priority to CN201310240673.XA
Publication of CN103338156A
Application granted
Publication of CN103338156B
Current legal status: Active


Abstract

The invention discloses a thread-pool-based named-pipe server concurrent communication method, comprising the following steps: 1) a read thread at the server end receives operation request commands from clients; 2) the read thread packs each received operation request into an input message and saves it in the input message buffer; 3) the business-logic thread pool takes out an input message and, according to the operation type defined in the message, performs the corresponding operation; after the operation completes, the thread pool packs the operation result into an output message and saves it to the output message buffer; 4) when the write thread at the server end detects that the output message buffer contains an output message, it takes the output message from the queue and, according to the communication identifier in the message, returns the operation result to the corresponding client. The invention uses two separate threads for reading data from and sending data to the clients, ensuring the reliability and real-time performance of the system.

Description

Thread-pool-based named-pipe server concurrent communication method
Technical field
The present invention relates to a concurrent server communication method based on the client/server model, and belongs to the field of computer software technology.
Background technology
Most current application system software adopts a two-tier client/server (Client/Server) structure. This architecture makes full use of the hardware environment at both ends, allocating tasks reasonably between client and server and thereby reducing the overall overhead of the system. For a given hardware configuration, the quality of the server software is critical to the performance of the entire application system.
The performance of a server is reflected in the following aspects:
First, the speed at which the server responds to clients, i.e. the real-time performance of the communication; second, the concurrency of the server, i.e. how many clients it can serve simultaneously; third, the security of the server.
In terms of communication mode, a client/server software system can be implemented in the following ways:
1. communication between client and server based on TCP or UDP;
2. communication between client and server based on named pipes.
Most server-side application software uses sockets, with TCP or UDP as the transport-layer protocol, to communicate with clients.
TCP is a connection-oriented communication mode: a connection with the target machine must first be established before any subsequent communication can take place. Establishing a TCP connection requires a three-way handshake, and closing one requires a four-way handshake. Each TCP packet prepends a 20-byte header to the data, so the overhead is large; TCP transmission is therefore relatively slow and demands more system resources.
UDP is a connectionless communication mode: the source establishes no connection with the destination before transmitting data. A UDP header is only 8 bytes, so the overhead is small and transmission is faster than TCP, but the order of the data is not guaranteed and packets may be lost.
It can be seen that with socket communication, TCP and UDP each have advantages but also shortcomings that cannot be ignored. To ensure reliable data transmission while retaining high transmission speed, the present invention adopts named-pipe communication: no additional datagram headers are needed, transmission is faster, and it offers better transmission security and reliability than TCP/IP.
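The named-pipe mechanism referred to above can be illustrated with a small sketch. The patent targets Windows named pipes; the example below uses a POSIX FIFO (`os.mkfifo`) as an assumed analogue, with one thread standing in for the client and the main thread for the server, purely for illustration:

```python
import os
import tempfile
import threading

# Create a named pipe (POSIX FIFO) in a temporary directory.
pipe_path = os.path.join(tempfile.mkdtemp(), "demo_pipe")
os.mkfifo(pipe_path)

def client() -> None:
    # Opening for write blocks until the server opens the read end.
    with open(pipe_path, "w") as fifo:
        fifo.write("PING\n")

t = threading.Thread(target=client)
t.start()

# Server side: read one request command from the pipe.
with open(pipe_path) as fifo:
    request = fifo.readline().strip()
t.join()
print(request)  # PING
```

Unlike a TCP socket, no handshake or datagram header is involved; the bytes written to the pipe are delivered in order, which is the reliability property the description relies on.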
Summary of the invention
The technical problem to be solved by the present invention is to provide a network interaction method between clients and a server that improves the real-time performance, concurrency, and security of the server.
To solve the above technical problem, the present invention provides a thread-pool-based named-pipe server concurrent communication method, characterized by comprising the following steps:
1) the client sends a request;
2) a read thread at the server end receives the operation request command from the client;
3) the read thread packs the received operation request into an input message and saves it in the input message buffer;
4) when the input message buffer is not empty, the business-logic thread pool takes out an input message and, according to the operation type defined in the message, performs the corresponding operation; after the operation completes, the thread pool packs the operation result into an output message and saves it to the output message buffer;
5) when the write thread at the server end detects that the output message buffer contains an output message, it takes the output message from the queue and, according to the communication identifier in the message, returns the operation result to the corresponding client;
6) the client receives the reply.
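The steps above can be sketched in-process. This is a minimal illustration, not the patented implementation: `queue.Queue` stands in for the named pipes and message buffers, `ThreadPoolExecutor` for the business-logic thread pool, and all names (`read_thread`, `handle`, the `STOP` sentinel) are invented for the sketch:

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

# In-process stand-ins for the two message buffers (FIFO queues).
input_buffer = queue.Queue()   # holds (client_id, op_type) input messages
output_buffer = queue.Queue()  # holds (client_id, result) output messages
replies = {}                   # stand-in for writing back over each client's pipe

def read_thread(requests):
    # Steps 2-3: receive each client's request and pack it as an input message.
    for client_id, op_type in requests:
        input_buffer.put((client_id, op_type))
    input_buffer.put((None, "STOP"))  # sentinel: no more requests

def handle(client_id, op_type):
    # Step 4 (worker): perform the operation and emit an output message.
    output_buffer.put((client_id, f"{op_type}-done"))

def pool_thread():
    # Step 4 (dispatcher): feed input messages to the business-logic pool.
    with ThreadPoolExecutor(max_workers=4) as pool:
        while True:
            client_id, op_type = input_buffer.get()
            if op_type == "STOP":
                break
            pool.submit(handle, client_id, op_type)
    # Leaving the `with` block waits for all submitted work to finish,
    # so every result precedes the sentinel in the output queue.
    output_buffer.put((None, "STOP"))

def write_thread():
    # Step 5: route each result back to the client named in the message.
    while True:
        client_id, result = output_buffer.get()
        if result == "STOP":
            break
        replies[client_id] = result

threads = [
    threading.Thread(target=read_thread, args=([(1, "READ"), (2, "WRITE")],)),
    threading.Thread(target=pool_thread),
    threading.Thread(target=write_thread),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(replies)
```

The point of the structure is that the read thread, the worker pool, and the write thread never wait on one another directly; they communicate only through the two FIFO buffers, which is what decouples slow clients from the business logic.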
In the foregoing thread-pool-based named-pipe server concurrent communication method, the input message buffer and the output message buffer are queue data structures, processing the various request operations in first-in-first-out order.
In the foregoing method, each input message includes the named-pipe handle corresponding to the client and the requested operation type; each output message includes the named-pipe handle corresponding to the client and the operation's return result.
In the foregoing method, the thread pool dynamically adjusts its own capacity according to the number of messages in the input message buffer. Supposing the maximum capacity of the thread pool is Tmax, the number of messages is N, and the maximum number of messages the buffer can store is Nmax, the capacity of the thread pool is T = (Tmax/Nmax) × N.
In the foregoing method, when the thread pool is initialized, its maximum capacity Tmax is calculated from the number of CPUs N1 and the number of cores N2 of the server hardware: Tmax = 2·N1 + N2 + 1.
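The two sizing formulas can be computed directly. In the sketch below, truncating T to an integer and keeping a floor of one thread are assumptions of the illustration; the patent does not specify the rounding:

```python
def max_pool_capacity(n_cpus, n_cores):
    # Tmax = 2*N1 + N2 + 1, computed once when the thread pool is initialized.
    return 2 * n_cpus + n_cores + 1

def pool_capacity(t_max, n_messages, n_max):
    # T = (Tmax / Nmax) * N: capacity scales linearly with buffer occupancy.
    # Truncation and the floor of one thread are illustrative assumptions.
    return max(1, int(t_max / n_max * n_messages))

t_max = max_pool_capacity(n_cpus=2, n_cores=8)  # 2*2 + 8 + 1 = 13
print(t_max)                                    # 13
print(pool_capacity(t_max, n_messages=50, n_max=100))  # 6
```

With an empty buffer the pool shrinks toward its floor, and with a full buffer (N = Nmax) it reaches exactly Tmax, so the pool never exceeds the hardware-derived ceiling.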
The beneficial effects achieved by the present invention: the thread-pool-based named-pipe server concurrent communication method reads data from and sends data to clients using two separate threads, ensuring the reliability and real-time performance of the system; input/output buffering effectively prevents the system from blocking when remote communication is slow; the business-logic layer processes large volumes of business-logic operations with a thread pool and dynamically adjusts the number of threads according to the request volume, which optimizes system performance, greatly improves the concurrency of the system, and reduces the CPU load that thread creation and destruction would otherwise add.
Accompanying drawing explanation
Fig. 1: physical structure of the network communication.
Fig. 2: system architecture diagram.
Fig. 3: flowchart of the read thread.
Fig. 4: flowchart of the write thread.
Fig. 5: flowchart of the thread pool.
Detailed description of the invention
For an application system with a C/S architecture, named-pipe communication is used to implement the network interaction between clients and server. The physical structure of the network communication is shown in Fig. 1.
At the server end of the application system, the high-performance server proposed by the invention consists mainly of a communication interface layer, a message buffer layer, and a business-logic layer; the system architecture is shown in Fig. 2.
The interface layer at the server end consists of a read thread and a write thread. Message reading and sending are asynchronous: two independent threads handle, respectively, the reading of client messages and the sending of server messages. The read thread listens for named-pipe connection requests from clients, receives each client's request command, tags the request command with the corresponding client communication identifier, generates an input message, and saves it in the input message buffer. The write thread periodically takes the messages waiting to be sent from the output message buffer and sends them to their corresponding clients.
The message buffer layer includes the input message buffer and the output message buffer. Both are queue data structures, processing the various request operations in first-in-first-out order. The input buffer holds the request commands sent by clients; the output buffer holds the replies waiting to be sent to clients. This input/output buffering effectively prevents the system from blocking when remote communication is slow.
Message format: each input message should include the named-pipe handle corresponding to the client and the requested operation type; each output message should include the named-pipe handle corresponding to the client and the operation's return result.
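The message format described here can be modeled as two small records. This is an illustrative sketch: the field names are invented, and the named-pipe handle is represented by a plain integer:

```python
from dataclasses import dataclass

@dataclass
class InputMessage:
    pipe_handle: int  # named-pipe handle identifying the client connection
    op_type: str      # requested operation type

@dataclass
class OutputMessage:
    pipe_handle: int  # same handle, used to route the reply to its client
    result: str       # the operation's return result

# The handle is copied from request to reply, so the write thread
# can route the result without any other client bookkeeping.
request = InputMessage(pipe_handle=7, op_type="READ_POINT")
reply = OutputMessage(pipe_handle=request.pipe_handle, result="ok")
print(reply.pipe_handle, reply.result)  # 7 ok
```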
The business-logic layer is handled by a thread pool. It takes message commands from the input message buffer, performs the business-logic operation corresponding to each command, and saves the results to be returned in the output message buffer. Using a thread pool both improves the concurrency of transaction processing and reduces the CPU load that thread creation and destruction would otherwise add.
Under real operating conditions, the number of client operation requests is dynamic, sometimes high and sometimes low, so the workload of the server end is sometimes light and sometimes heavy. To improve the adaptability of the system, the thread pool dynamically adjusts its own capacity (i.e. the number of threads) according to the number of messages in the input message buffer. At the same time, to prevent unrestricted thread growth from degrading system performance, the maximum capacity of the thread pool is calculated at initialization from the number of CPUs and cores of the server hardware.
Figs. 3-5 show the flowcharts of the individual threads of the invention. The detailed implementation of the invention is described below with reference to the accompanying drawings.
Taking a single client request/reply as an example, the data flow and transaction processing proceed as follows:
1. the client sends a request;
2. the read thread at the server end receives the client's operation request command;
3. the read thread packs the received operation request into an input message and saves it in the input message buffer;
4. when the input message buffer is not empty, the business-logic thread pool takes out an input message and, according to the operation type defined in the message, performs the corresponding operation; after the operation completes, the thread pool packs the operation result into an output message and saves it to the output message buffer;
5. when the write thread at the server end detects that the output message buffer contains an output message, it takes the output message from the queue and, according to the communication identifier in the message, returns the operation result to the corresponding client;
6. the client receives the reply.

Claims (1)

1. A thread-pool-based named-pipe server concurrent communication method, characterized by comprising the following steps:
1) the client sends a request;
2) a read thread at the server end receives the operation request command from the client;
3) the read thread packs the received operation request into an input message and saves it in the input message buffer;
4) when the input message buffer is not empty, the business-logic thread pool takes out an input message and, according to the operation type defined in the message, performs the corresponding operation; after the operation completes, the thread pool packs the corresponding operation result into an output message and saves it to the output message buffer; the input message buffer and the output message buffer are queue data structures that process the various request operations in first-in-first-out order; each input message includes the named-pipe handle corresponding to the client and the requested operation type; each output message includes the named-pipe handle corresponding to the client and the operation's return result; the thread pool dynamically adjusts its own capacity according to the number of messages in the input message buffer: supposing the maximum capacity of the thread pool is Tmax, the number of messages is N, and the maximum number of messages the buffer can store is Nmax, the capacity of the thread pool is T = (Tmax/Nmax) × N; when the thread pool is initialized, its maximum capacity Tmax is calculated from the number of CPUs N1 and the number of cores N2 of the server hardware as Tmax = 2·N1 + N2 + 1;
5) when a write thread at the server end detects that the output message buffer contains an output message, it takes the output message from the queue and, according to the communication identifier in the message, returns the corresponding operation result to the corresponding client;
6) the client receives the reply.
CN201310240673.XA 2013-06-17 2013-06-17 Thread-pool-based named-pipe server concurrent communication method Active CN103338156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310240673.XA CN103338156B (en) 2013-06-17 2013-06-17 Thread-pool-based named-pipe server concurrent communication method


Publications (2)

Publication Number Publication Date
CN103338156A CN103338156A (en) 2013-10-02
CN103338156B true CN103338156B (en) 2016-08-24

Family

ID=49246256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310240673.XA Active CN103338156B (en) 2013-06-17 2013-06-17 Thread-pool-based named-pipe server concurrent communication method

Country Status (1)

Country Link
CN (1) CN103338156B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103888454A (en) * 2014-03-14 2014-06-25 浪潮电子信息产业股份有限公司 Queue-based network communication module processing framework
CN104199729B (en) * 2014-08-27 2018-07-10 深圳市九洲电器有限公司 Resource management method and system
CN106161503A (en) * 2015-03-27 2016-11-23 中兴通讯股份有限公司 File reading method and server side in a distributed storage system
CN104702627B (en) * 2015-04-01 2017-12-26 南京天溯自动化控制系统有限公司 Message-classification-based synchronous concurrent communication method and system
CN106095597B (en) * 2016-05-30 2017-09-26 深圳市鼎盛智能科技有限公司 Client data processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101043389A (en) * 2007-04-20 2007-09-26 北京航空航天大学 Control system of grid service container
CN101202704A (en) * 2007-09-07 2008-06-18 深圳市同洲电子股份有限公司 Method and system for transmitting data
CN101968748A (en) * 2010-09-17 2011-02-09 北京星网锐捷网络技术有限公司 Multithreading data scheduling method, device and network equipment
CN102043675A (en) * 2010-12-06 2011-05-04 北京华证普惠信息股份有限公司 Thread pool management method based on task quantity of task processing request
CN102929619A (en) * 2012-10-19 2013-02-13 南京国电南自美卓控制系统有限公司 Process automation software development system across hardware platform


Also Published As

Publication number Publication date
CN103338156A (en) 2013-10-02


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address

Address after: 210032 Jiangsu province Nanjing city Pukou high tech Zone Huidalu No. 9

Patentee after: Nanjing Guodian Nanzi Automation Co., Ltd.

Address before: Nanjing City, Jiangsu province 210032 Spark Road, Pukou hi tech Development Zone No. 8

Patentee before: Nanjing Guodian Nanzi Meizhuo Control System Co.,Ltd.