CN102546437B - Internet of things platform-oriented socket implementation method - Google Patents


Info

Publication number: CN102546437B
Application number: CN201210038597.XA
Authority: CN (China)
Prior art keywords: thread, pool, socket, server, thread pool
Legal status: Active (the status listed is an assumption, not a legal conclusion)
Other versions: CN102546437A (Chinese)
Inventors: 王堃, 于悦, 暴建民, 胡海峰, 郭篁, 房硕
Original assignee: Nanjing University of Posts and Telecommunications
Application filed by Nanjing University of Posts and Telecommunications; granted as CN102546437B.

Abstract

The invention discloses a socket implementation method oriented to an Internet of Things platform. A server for Internet of Things applications must provide efficient, highly reliable service for a large number of concurrent connections, and the communication layer must keep the system load as low as possible. To this end, multi-threaded concurrent connections are established in the communication layer of the platform using a buffer pool and a thread pool, yielding an efficient Socket server design that responds to large numbers of users and data requests while minimizing system overhead and cache usage. Simulation results show that a Socket server that introduces a thread pool and a buffer pool shortens thread creation and destruction time and client blocking time, and reduces dynamic thread creation and destruction, thereby improving the server's processing capacity.

Description

A socket implementation method for an Internet of Things platform
Technical field
The present invention is a design and optimization scheme for the communication module of a Socket server based on an Internet of Things platform, and belongs to the field of Internet of Things communication technology.
Background technology
With the development of Internet of Things technology, sensors and RFID (Radio Frequency Identification) are widely used. Data collected by the sensing layer are transferred to the application layer through the communication layer, and the users these data serve also access them through the communication layer. As the hub of the whole system, an Internet of Things platform must therefore handle both the mass of data collected by the sensing layer and the access of large numbers of users to the platform's applications; the communication layer's task is central to this. A well-designed communication layer copes easily with massive data and access requests; otherwise the whole platform may suffer catastrophic consequences.
As an important inter-process communication mechanism, Socket is widely used in client/server (C/S) communication scenarios. To cope with enormous numbers of connections, the server must introduce multiple threads to process connections in parallel. If every new connection is handled by a dynamically created thread, system performance is greatly weakened. Thread pool technology, which creates a number of threads in advance, arose to address this, but whether the pool size should be fixed has always been a research question. A statically sized pool saves the overhead of creating and destroying threads, but it copes poorly with connection counts far beyond its capacity, and simply enlarging the pool leaves too many idle threads occupying considerable system resources. A dynamically sized pool must still continually create new threads when facing large numbers of connections, greatly increasing system load. The form and capacity of the thread pool should therefore be chosen according to the system's environment.
Current research on dynamic thread pools, at home and abroad, concentrates on three aspects: (1) creating threads in batches rather than one per request when a burst of connections is large, while capping the number of threads in the pool; (2) optimizing the number of worker threads by statistically predicting the number of users at peak times, a strategy that is relatively simple and fairly reliable; (3) providing multiple thread pools in one server and dispatching to different pools according to task type and priority.
Summary of the invention
Technical problem: the present invention uses thread pool technology to build a socket server, designs a scheme to support system operation, analyzes the working mechanism of the thread pool in depth, calculates the overhead of a dynamic thread pool when requests exceed its capacity, proposes storing excess connection requests in a buffer pool instead of dynamically creating additional threads, and then optimizes the overall design to cope with common emergencies.
Technical scheme: although dynamic thread pools are widely applied, their limitations are as evident as their advantages: when many connection requests exceeding the pool's capacity arrive at once, the overhead of dynamically creating and destroying threads can no longer be ignored. The proposed socket server scheme therefore adds a buffer pool in front of the thread pool to hold excess connections; when an idle thread appears in the thread pool, a new connection request is taken out of the buffer pool.
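The buffer-pool handoff described above can be sketched in Java: a bounded queue sits in front of a fixed-size pool of worker threads, so connections beyond pool capacity wait in the buffer instead of forcing new threads to be created. This is a minimal illustration using java.util.concurrent, not the patent's implementation; the pool and buffer sizes are illustrative stand-ins for the 3600-thread pool used in the experiments.

```java
import java.util.concurrent.*;

// Sketch of the buffer-pool scheme: a ThreadPoolExecutor with a fixed
// number of workers drains a bounded queue; tasks submitted while all
// workers are busy park in the buffer until an idle thread appears.
public class BufferedAccept {
    static long runSimulation(int poolSize, int bufferSize, int tasks)
            throws InterruptedException {
        BlockingQueue<Runnable> bufferPool = new ArrayBlockingQueue<>(bufferSize);
        ThreadPoolExecutor workers = new ThreadPoolExecutor(
                poolSize, poolSize, 0L, TimeUnit.MILLISECONDS, bufferPool);
        CountDownLatch done = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            workers.execute(done::countDown);  // queued if all workers busy
        }
        done.await();                          // wait for every task to finish
        workers.shutdown();
        return done.getCount();                // 0 when all tasks completed
    }

    public static void main(String[] args) throws InterruptedException {
        // 20 "connections" against 4 workers: the excess 16 wait in the buffer.
        System.out.println(runSimulation(4, 16, 20));
    }
}
```

With pool size 4 and buffer size 16, twenty simultaneous submissions fit exactly (4 running, 16 buffered), so no extra threads are ever created.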
A socket server is used for network communication between two programs: the server must expose its IP (Internet Protocol) address and a port number, and the client requests a connection to that address and port. The detailed process is:
The server creates a ServerSocket to listen for clients; on receiving a request it establishes the connection, takes out the message, processes it, and returns the result. The client sends a connection request to the server and, once connected, sends messages to and receives messages from the server.
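The establishment sequence above can be shown as a minimal Java sketch: the server listens with a ServerSocket, accepts a request, reads one message, "processes" it, and writes the result back, while the client connects and exchanges one message. Class and method names are illustrative, and the single-message echo stands in for real processing.

```java
import java.io.*;
import java.net.*;

// Minimal sketch of the Socket establishment sequence: accept, read,
// process, reply on the server side; connect, send, receive on the client.
public class EchoSketch {
    // Server side: handle exactly one client on an already-open ServerSocket.
    static String serveOnce(ServerSocket server) throws IOException {
        try (Socket conn = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(conn.getInputStream()));
             PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
            String msg = in.readLine();        // take out the message
            String result = "echo:" + msg;     // "process" it
            out.println(result);               // return the result
            return result;
        }
    }

    // Client side: connect, send a request, receive the server's reply.
    static String request(int port, String msg) throws IOException {
        try (Socket sock = new Socket("127.0.0.1", port);
             PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(sock.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);  // ephemeral local port
        Thread t = new Thread(() -> {
            try { serveOnce(server); } catch (IOException ignored) { }
        });
        t.start();
        System.out.println(request(server.getLocalPort(), "hello"));
        t.join();
        server.close();
    }
}
```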
1. Communication Module Design
This part is the core of the socket server and implements its most important function, communication. The basic design of the server-side communication module is:
To satisfy the requirement of duplex communication, i.e. that server and client send and receive data simultaneously, a sending thread and a receiving thread must both be established. What the thread pool holds are receiving threads: after a connection is established, such a thread takes the Socket connection, receives messages, and passes them upward through the receive message queue; at the same time it creates a sending thread, which monitors the send message queue (the interface to the upper layer, holding the results the upper layer has processed), takes out results, and sends them. Meanwhile the receiving thread blocks; when the client requests disconnection, the Socket connection is closed. In addition, to prevent the connection count from growing too large, connections exceeding the number of threads in the pool are placed in the buffer pool and taken out again when an idle thread appears.
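The queue handoff in the duplex design above can be sketched with blocking queues: a receiving thread pushes incoming messages onto a receive queue (the interface to the upper layer), the upper layer queues results, and a sending thread blocks on the send queue. A third queue stands in for the network socket; all names are illustrative, and the threads are run one step at a time to keep the sketch deterministic.

```java
import java.util.concurrent.*;

// Sketch of the duplex message flow: wire -> receive queue -> upper layer
// -> send queue -> wire. Each method is one thread's loop body.
public class DuplexSketch {
    final BlockingQueue<String> receiveQueue = new LinkedBlockingQueue<>();
    final BlockingQueue<String> sendQueue = new LinkedBlockingQueue<>();
    final BlockingQueue<String> wire = new LinkedBlockingQueue<>(); // stands in for the socket

    // Receiving thread body: take a message "off the wire", hand it upward.
    void receiveOne() throws InterruptedException {
        receiveQueue.put(wire.take());
    }

    // Upper layer: process one message and queue the result for sending.
    void processOne() throws InterruptedException {
        sendQueue.put("done:" + receiveQueue.take());
    }

    // Sending thread body: block on the send queue, then transmit the result.
    String sendOne() throws InterruptedException {
        return sendQueue.take();
    }

    public static void main(String[] args) throws InterruptedException {
        DuplexSketch d = new DuplexSketch();
        d.wire.put("msg1");
        d.receiveOne();
        d.processOne();
        System.out.println(d.sendOne());
    }
}
```

In the real module the three loop bodies run in separate threads; the blocking `take()` calls are what keep idle threads suspended rather than busy-waiting.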
2. thread pool design
Definition 1. The two main overheads in threading are the creation and destruction of threads, and their maintenance. Let the first overhead be C1 and the second C2:
C1: the overhead of creating and destroying a thread, most of which is the time spent allocating memory for it;
C2: the overhead of maintaining a thread, i.e. the time spent on thread context switches;
n: the thread pool size;
r: the number of currently running threads.
In practice C1 >> C2. Table 1 contrasts system performance with and without a thread pool.
Table 1. System overhead before and after introducing a thread pool

Case       | Overhead with thread pool | Overhead without thread pool | Performance gained
0 ≤ r ≤ n  | C2·n                      | C1·r                         | C1·r − C2·n
r > n      | C2·n + C1·(r − n)         | C1·r                         | C1·n − C2·n
Table 1 considers two situations. When the number of live threads is no greater than the pool size (0 ≤ r ≤ n), the overhead with a thread pool is limited to switching between threads, i.e. C2·n, whereas without a pool the system must create and destroy a thread for each new connection, at cost C1·r; the pooled scheme thus improves performance by C1·r − C2·n.
In the second situation the number of tasks exceeds the pool's maximum, and the pool must create new threads for the surplus tasks, at cost C2·n + C1·(r − n); without a pool the cost is still C1·r. The pooled scheme improves performance by C1·n − C2·n.
The key issue of the pooled scheme is setting the pool size. If the pool holds too many threads, the system consumes substantial processing and cache resources maintaining idle threads; if it holds too few, it must continually create new threads and destroy them when tasks finish, and the cost paid may exceed the resources the tasks themselves consume. How to determine the optimal thread count n is discussed below.
Definition 2. In a real environment the number of live threads in the pool changes constantly. Let r be the number of live threads and f(r) its probability distribution; the expected performance gain for a pool of size n is:
E(n) = \sum_{r=0}^{n} (C_1 r - C_2 n)\, f(r) + \sum_{r=n+1}^{\infty} (C_1 n - C_2 n)\, f(r)    (1)
If the optimal pool size is n*, its expected gain is:
E(n^*) = \sup_{n \in N} E(n)    (2)
Here N denotes the set of possible pool sizes, and sup denotes the least upper bound of E(n). Rewriting (1) in continuous form, with p(r) the probability density of the number of live threads in the pool:
E(n) = \int_{0}^{n} (C_1 r - C_2 n)\, p(r)\, dr + \int_{n}^{\infty} (C_1 n - C_2 n)\, p(r)\, dr    (3)
To find the maximum of E(n), differentiate (3) with respect to n and set the derivative to zero:
\frac{dE}{dn} = -C_2 + C_1 \int_{n}^{\infty} p(r)\, dr = 0    (4)
Rewriting (4) with ξ = C_2 / C_1, the ratio of the cost of maintaining a live thread to the cost of creating a new one:
\int_{n^*}^{\infty} p(r)\, dr = \xi \iff \int_{0}^{n^*} p(r)\, dr = 1 - \xi    (5)
Because the pool size must be an integer, n* is determined by:
\int_{0}^{n^*} p(r)\, dr \le 1 - \xi < \int_{0}^{n^*+1} p(r)\, dr    (6)
This shows that n* depends on ξ: when the thread-switching overhead is much smaller than the creation and destruction overhead, the optimal pool capacity is larger. Equations (5) and (6) also show that n* depends on the current system load p(r).
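Equation (6) can be solved numerically for any load distribution: n* is the largest integer n whose cumulative load probability P(r ≤ n) does not exceed 1 − ξ. The sketch below does this over a tabulated CDF; the uniform distribution and the ratio ξ = 0.2 in main() are purely illustrative, not the patent's measured figures.

```java
// Numerical sketch of determining the optimal pool size n* from (6):
// scan the cumulative distribution cdf[n] = P(r <= n) and keep the
// largest n that still satisfies cdf[n] <= 1 - xi.
public class PoolSize {
    static int optimalSize(double[] cdf, double xi) {
        int nStar = 0;
        for (int n = 0; n < cdf.length; n++) {
            if (cdf[n] <= 1.0 - xi) nStar = n; else break;
        }
        return nStar;
    }

    public static void main(String[] args) {
        int maxThreads = 100;
        double[] cdf = new double[maxThreads + 1];
        for (int n = 0; n <= maxThreads; n++)
            cdf[n] = (double) n / maxThreads;   // uniform load on [0, 100]
        double xi = 0.2;                        // C2/C1, illustrative value
        System.out.println(optimalSize(cdf, xi)); // largest n with n/100 <= 0.8
    }
}
```

With a uniform load this reduces to n* ≈ (1 − ξ)·R for R concurrent users, matching the shape of the load assessment given later in the text.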
The Socket design and optimization scheme of the present invention for an Internet of Things platform establishes multi-threaded concurrent connections in the platform's communication layer using a buffer pool and a thread pool, and proposes an efficient Socket server design that responds to large numbers of users and data requests while minimizing system overhead and cache usage. Thread pool technology is used to build the socket server and design the system operation support scheme; the working mechanism of the thread pool is analyzed in depth; the overhead of a dynamic thread pool facing requests beyond its capacity is calculated; a buffer pool is proposed to store excess connection requests instead of dynamically creating additional threads; and the overall design is then optimized to cope with common emergencies.
The socket server built with thread pool technology comprises the communication module design and the thread pool design. Although dynamic thread pools are widely applied, their limitations are as evident as their advantages: when many connection requests beyond the pool's capacity arrive at once, the overhead of dynamically creating and destroying threads can no longer be ignored. The proposed scheme therefore adds a buffer pool in front of the thread pool to hold excess connections; when an idle thread appears in the pool, a new connection request is taken from the buffer pool.
A socket server is used for network communication between two programs: the server must expose its IP address and port number, and the client requests a connection to that address and port. The detailed process is:
The server creates a ServerSocket to listen for clients; on receiving a request it establishes the connection, takes out the message, processes it, and returns the result. The client sends a connection request to the server and, once connected, sends messages to and receives messages from the server.
The communication module design is the core of the socket server and implements its most important function, communication. The basic design of the server-side communication module is:
To satisfy the requirement of duplex communication, i.e. that server and client send and receive data simultaneously, a sending thread and a receiving thread must both be established. What the thread pool holds are receiving threads: after a connection is established, such a thread takes the Socket connection, receives messages, and passes them upward through the receive message queue; at the same time it creates a sending thread, which monitors the send message queue (the interface to the upper layer, holding the results the upper layer has processed), takes out results, and sends them. Meanwhile the receiving thread blocks; when the client requests disconnection, the Socket connection is closed. In addition, to prevent the connection count from growing too large, connections exceeding the number of threads in the pool are placed in the buffer pool and taken out again when an idle thread appears.
The thread pool design is the determination of the pool size. Its two main overheads are the creation and destruction of threads, and their maintenance; let the first be C1 and the second C2. C1 is mostly the time spent allocating memory for a thread, and C2 is the time spent on thread context switches. In practice C1 >> C2. In a real environment the number of live threads in the pool changes constantly; let r be the number of live threads and f(r) its probability distribution. The expected performance gain for a pool of size n is:
E(n) = \sum_{r=0}^{n} (C_1 r - C_2 n)\, f(r) + \sum_{r=n+1}^{\infty} (C_1 n - C_2 n)\, f(r)    (1)
If the optimal pool size is n*, its expected gain is:
E(n^*) = \sup_{n \in N} E(n)    (2)
Here N denotes the set of possible pool sizes, and sup denotes the least upper bound of E(n). Rewriting (1) in continuous form, with p(r) the probability density of the number of live threads in the pool:
E(n) = \int_{0}^{n} (C_1 r - C_2 n)\, p(r)\, dr + \int_{n}^{\infty} (C_1 n - C_2 n)\, p(r)\, dr    (3)
To find the maximum of E(n), differentiate (3) with respect to n and set the derivative to zero:
\frac{dE}{dn} = -C_2 + C_1 \int_{n}^{\infty} p(r)\, dr = 0    (4)
Rewriting (4) with ξ = C_2 / C_1, the ratio of the cost of maintaining a live thread to the cost of creating a new one:
\int_{n^*}^{\infty} p(r)\, dr = \xi \iff \int_{0}^{n^*} p(r)\, dr = 1 - \xi    (5)
Because the pool size must be an integer, n* is determined by:
\int_{0}^{n^*} p(r)\, dr \le 1 - \xi < \int_{0}^{n^*+1} p(r)\, dr    (6)
This shows that n* depends on ξ: when the thread-switching overhead is much smaller than the creation and destruction overhead, the optimal pool capacity is larger. Equations (5) and (6) also show that n* depends on the current system load p(r).
When assessing the system's load capacity, suppose p(r) is uniformly distributed, the number of users is 4000, creating and destroying a thread takes about 400 ms, and a context switch takes about 20 ms; then n* = 3600.
The system operation support scheme uses a custom protocol: client and server communicate by sending and receiving data packets of about 100 bytes. After processing a message, the system returns it in packet form. To achieve duplex operation, sending and receiving threads are set up, the sending thread being a child of the receiving thread. Messages the server receives are placed in the receive message queue and handed to the business layer for processing; when the business layer finishes, each thread goes to the send message queue, takes out its own result, and sends it.
The common emergencies are four: 1) how to mark the tasks of different clients; 2) possible blocking when receiving data; 3) read or write operations after the Socket has been closed; 4) unexpected client disconnection.
For marking the tasks of different clients, the solution is: since each task belongs to a different client, the client's IP address and port number can be obtained from its Socket object; a Mark field is set whose value is the connection's IP address and port number. In a while loop, the thread of each task polls the topmost element of the message queue (MessageQueue) and takes out a message as soon as its Mark matches; otherwise the thread suspends.
Blocking while receiving data arises because, in this system, the client clicks the interface, the program sends a packet to the server, and the server completes the corresponding function; tasks therefore occupy little CPU but frequently perform blocking I/O (input/output) operations. If a user does not click the interface for a long time, a worker thread in the pool is held by that user and can perform no other task; if all pool threads are blocked, newly arriving tasks cannot be processed. Also, when the sender's data volume is too large, the receiver may fail to receive it completely.
The solution is: define three variables, nIdx, nTotalLen, and nReadLen (nIdx holds the total bytes read so far, nTotalLen the total bytes to read, and nReadLen the bytes read in one loop iteration); the value of nTotalLen can be determined from a field in the packet header. A while loop reads continuously until the end of the input stream is reached. To avoid busy-waiting, an unhealthy use of the CPU, a second while loop suspends any thread that has no data input and wakes it when input arrives.
For read and write operations after the Socket has been closed, the solution is: since closing the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the sending thread has finished. Establishing a notification mechanism, in which the sending thread notifies the receiving thread when it has finished and only then does the receiving thread close the Socket, solves the problem. A protocol field PacSeq (the sequence number of a received request packet) is added: for example, the receiving thread receives a packet with PacSeq = x; after the sending thread has sent back the result for that request it notifies the receiving thread, which keeps the Socket connected until the notice arrives.
For unexpected client disconnection, the solution is: the client sends a heartbeat packet every 5 minutes; the receiving thread on the server creates a daemon thread that monitors the heartbeat and counts down. The receiving thread does not process heartbeat packets; it simply notifies the daemon thread to reset the timer and keeps the connection to the client. If the server receives nothing within 5 minutes, the daemon thread sends a notice waking the receiving thread, which closes the socket. During the countdown all three threads are suspended, so no system resources are occupied. A client that receives no feedback within a set time after sending a request judges the server disconnected and, after disconnecting, resends the connection request.
Beneficial effects: aimed at the massive demands the Internet of Things brings, the present invention designs a server communication module using sockets, investigates the effect of thread pool configuration on system performance, uses a buffer queue to handle threads exceeding the pool's capacity, and analyzes and optimizes the problems the system may encounter. Simulation of system performance shows that a thread pool combined with a buffer pool greatly reduces system overhead, to a certain extent, when facing excessive connections; that tasks with small overheads should not use short-lived connections; and that although a buffer pool reduces overhead while the number of connections exceeding pool capacity is small, once that number meets or exceeds a certain threshold the system's response time rises sharply.
Brief description of the drawings
Fig. 1 is the Socket establishment process;
Fig. 2 is the communication module operation flow graph;
Fig. 3 is the flow chart for avoiding exceptions after the Socket closes;
Fig. 4 is the heartbeat detection flow chart;
Fig. 5 is the socket connection speed comparison of experiment 1;
Fig. 6 is the system BT performance bar chart of experiment 2;
Fig. 7 is the system DT performance bar chart of experiment 2;
Fig. 8 is the system DQL performance bar chart of experiment 2;
Fig. 9 is the system DBT performance bar chart of experiment 2;
Fig. 10 is the comparison of system response times in experiment 3.
Embodiment
System operation support scheme
The system uses a custom protocol: client and server communicate by sending and receiving data packets of about 100 bytes. After processing a message the system returns it in packet form. To achieve duplex operation, sending and receiving threads are set up, the sending thread being a child of the receiving thread. Messages the server receives are placed in the receive message queue and handed to the business layer for processing; when the business layer finishes, each thread goes to the send message queue, takes out its own result, and sends it. Four situations arise here, solved as follows:
Situation one: how to mark the tasks of different clients.
Solution: since each task belongs to a different client, the client's IP address and port number can be obtained from its Socket object; a Mark field is set whose value is the connection's IP address and port number. In a while loop, the thread of each task polls the topmost element of the message queue (MessageQueue) and takes out a message as soon as its Mark matches; otherwise the thread suspends.
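The Mark-field dispatch above can be sketched as follows: each result carries a Mark (client IP and port), and a task's thread inspects the head of the shared queue, claiming a message only when its Mark matches. The Message class and "ip:port" encoding are illustrative assumptions, not the patent's data format.

```java
import java.util.*;

// Sketch of Mark-field dispatch: a thread polls the queue head and takes
// a message only when the Mark (client IP + port) identifies it as its own.
public class MarkDispatch {
    static class Message {
        final String mark;   // "ip:port" of the owning client (illustrative)
        final String body;
        Message(String mark, String body) { this.mark = mark; this.body = body; }
    }

    // Poll the queue head; claim the message only if it belongs to `mark`.
    static Message pollFor(Deque<Message> queue, String mark) {
        Message head = queue.peekFirst();
        if (head != null && head.mark.equals(mark)) {
            return queue.pollFirst();
        }
        return null; // not ours: leave it for the owning thread, then suspend
    }

    public static void main(String[] args) {
        Deque<Message> queue = new ArrayDeque<>();
        queue.add(new Message("10.0.0.1:5000", "resultA"));
        queue.add(new Message("10.0.0.2:5001", "resultB"));
        // The thread for client 10.0.0.2 sees a foreign message at the head:
        System.out.println(pollFor(queue, "10.0.0.2:5001") == null);
        // The thread for client 10.0.0.1 claims its own message:
        System.out.println(pollFor(queue, "10.0.0.1:5000").body);
    }
}
```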
Situation two: blocking may occur while receiving data.
Solution: in this system the client clicks the interface, the program sends a packet to the server, and the server completes the corresponding function; tasks therefore occupy little CPU but frequently perform blocking I/O operations. If a user does not click the interface for a long time, a worker thread in the pool is held by that user and can perform no other task; if all pool threads are blocked, newly arriving tasks cannot be processed. Also, when the sender's data volume is too large, the receiver may fail to receive it completely.
This is solved by defining three variables, nIdx, nTotalLen, and nReadLen, holding respectively the total bytes read so far, the total bytes to read, and the bytes read in one loop iteration; the value of nTotalLen can be determined from a field in the packet header. A while loop reads continuously until the end of the input stream is reached. To avoid busy-waiting, an unhealthy use of the CPU, a second while loop suspends any thread that has no data input and wakes it when input arrives.
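The read loop just described can be sketched with the three variables named in the text: nTotalLen comes from the packet header, and the loop accumulates into nIdx until the whole frame has arrived, so a packet split across several reads is still received completely. The surrounding framing is an assumption for illustration.

```java
import java.io.*;

// Sketch of the blocking-read fix: keep reading until nIdx reaches
// nTotalLen, where nReadLen is the byte count of each individual read.
public class FrameReader {
    // Read exactly nTotalLen payload bytes from the stream.
    static byte[] readFrame(InputStream in, int nTotalLen) throws IOException {
        byte[] buf = new byte[nTotalLen];
        int nIdx = 0;                       // total bytes read so far
        while (nIdx < nTotalLen) {
            int nReadLen = in.read(buf, nIdx, nTotalLen - nIdx); // this pass
            if (nReadLen < 0) throw new EOFException("stream ended early");
            nIdx += nReadLen;
        }
        return buf;
    }

    public static void main(String[] args) throws IOException {
        byte[] packet = "hello-world".getBytes("UTF-8");
        InputStream in = new ByteArrayInputStream(packet);
        System.out.println(new String(readFrame(in, packet.length), "UTF-8"));
    }
}
```

On a real Socket input stream the `read` call blocks instead of returning immediately, which is exactly the suspension the second while loop in the text relies on.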
Situation three: read or write operations after the Socket has closed.
Solution: since closing the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the sending thread has finished. Establishing a notification mechanism, in which the sending thread notifies the receiving thread when it has finished and only then does the receiving thread close the Socket, solves the problem. As shown in Fig. 3, a protocol field PacSeq is added to number received request packets: for example, the receiving thread receives a packet with PacSeq = x; after the sending thread has sent back the result for that request it notifies the receiving thread, which keeps the Socket connected until the notice arrives.
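The PacSeq notification mechanism above can be sketched with a per-request latch: the receiving thread registers PacSeq = x, the sending thread counts the latch down once the reply has gone out, and only then does the receiver close the Socket. The latch-per-request map is an illustrative stand-in for the patent's notification, and the boolean stands in for the real close.

```java
import java.util.concurrent.*;

// Sketch of close-after-send coordination keyed by PacSeq: the receiver
// blocks on a latch that the sender releases after transmitting the reply.
public class CloseAfterSend {
    final ConcurrentMap<Integer, CountDownLatch> pending = new ConcurrentHashMap<>();
    volatile boolean socketClosed = false;

    // Receiving thread: note that a request with this PacSeq is in flight.
    void registerRequest(int pacSeq) { pending.put(pacSeq, new CountDownLatch(1)); }

    // Sending thread: after the reply for pacSeq is written, notify the receiver.
    void replySent(int pacSeq) { pending.get(pacSeq).countDown(); }

    // Receiving thread: keep the connection open until the reply has gone out.
    void closeWhenDone(int pacSeq) throws InterruptedException {
        pending.get(pacSeq).await();
        socketClosed = true;   // the real Socket.close() would go here
    }

    public static void main(String[] args) throws InterruptedException {
        CloseAfterSend c = new CloseAfterSend();
        c.registerRequest(7);   // receiving thread sees PacSeq = 7
        c.replySent(7);         // sending thread finishes first...
        c.closeWhenDone(7);     // ...only then may the receiver close
        System.out.println(c.socketClosed);
    }
}
```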
Situation four: unexpected client disconnection.
Solution: this problem is solved by the algorithm shown in Fig. 4. The client sends a heartbeat packet every 5 minutes; the receiving thread on the server creates a daemon thread that monitors the heartbeat and counts down. The receiving thread does not process heartbeat packets; it simply notifies the daemon thread to reset the timer and keeps the connection to the client. If the server receives nothing within 5 minutes, the daemon thread sends a notice waking the receiving thread, which closes the socket. During the countdown all three threads are suspended, so no system resources are occupied. A client that receives no feedback within a set time after sending a request judges the server disconnected and, after disconnecting, resends the connection request.
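The heartbeat countdown above can be sketched as a watchdog: each heartbeat resets a deadline, and when the deadline lapses the daemon wakes the receiving thread to close the socket. The 5-minute interval from the text is parameterized, and the clock is passed in explicitly to keep the sketch deterministic; the real daemon would use the system clock and thread suspension.

```java
// Sketch of the heartbeat watchdog: heartbeatReceived() resets the
// countdown, and expired() is the daemon's check that decides whether
// to wake the receiving thread and close the socket.
public class HeartbeatWatchdog {
    private final long intervalMillis;
    private long deadline;

    HeartbeatWatchdog(long intervalMillis, long nowMillis) {
        this.intervalMillis = intervalMillis;
        this.deadline = nowMillis + intervalMillis;
    }

    // Receiving thread got a heartbeat packet: reset the countdown.
    void heartbeatReceived(long nowMillis) { deadline = nowMillis + intervalMillis; }

    // Daemon thread check: true means wake the receiver and close the socket.
    boolean expired(long nowMillis) { return nowMillis > deadline; }

    public static void main(String[] args) {
        long interval = 5 * 60 * 1000;                  // 5 minutes, per the text
        HeartbeatWatchdog w = new HeartbeatWatchdog(interval, 0);
        w.heartbeatReceived(4 * 60 * 1000);             // heartbeat at minute 4
        System.out.println(w.expired(8 * 60 * 1000));   // false: deadline is minute 9
        System.out.println(w.expired(10 * 60 * 1000));  // true: no heartbeat since
    }
}
```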
We used LoadRunner for simulation testing. LoadRunner is a load testing tool that predicts system behavior and performance; it generates virtual users and simulates real users' business operations. A socket server was built for the experiments, with a dual-core 2.4 GHz CPU and 4 GB of memory; the database was Oracle 11g.
Experiment 1 examines the time required to establish a socket connection. Experiment 2 analyzes system performance when, with 4000 connections and a thread pool capacity of 3600, the system dynamically creates new threads for the excess connections. Experiment 3 introduces the buffer pool on the basis of experiment 2 and observes the change in system performance.
Experiment 1: socket connection speed test
A Socket spends considerable time establishing a connection; if the server adopts short-lived connections, the connection time itself may not be negligible. To verify socket connection speed, 100 virtual users were generated to issue connection requests to the server. The client program performs two actions:
1) attempt to connect to the server;
2) after a successful connection, send a packet and display the server's feedback.
The server also performs two actions: 1) listen for client connection requests; 2) after a connection succeeds, send back the packet the client sent.
The behavior of the virtual users is shown in Fig. 5, which plots the speed at which users connect: the vertical axis is the number of virtual users, the horizontal axis the elapsed time. The blue line represents running virtual users and the green line those that have finished. From the first user's connection to the last user's completion took 5 seconds in total, and most users began sending data, after connecting successfully, only from the 4th second on; that is, connecting consumed four fifths of the whole operation's time.
This experiment shows that when tasks themselves are cheap and similar tasks may occur frequently, server and client should not use short-lived connections; rather, each completed task should keep its connection alive for a while, to avoid the system overhead of repeatedly disconnecting and reconnecting.
Experiment 2: comparison of server performance parameters
To verify the performance impact of dynamically creating new threads once the connection count exceeds pool capacity, a buffer pool was set up between the thread pool and the accept method that listens for connections: surplus connections are deposited in the pool and fetched for execution when the thread pool has a vacant thread.
System performance with connections exceeding pool capacity was examined with and without the buffer pool. Four runs were carried out, with the thread pool set to 3600 and the connection count set to 3600, 3800, 4000, and 4200 respectively. Without the buffer pool, the system dynamically creates new threads for requests beyond pool capacity. To understand the server's memory and disk behavior, the parameters in Table 2 were observed:
Table 2. Server performance parameters
The content of this experiment is identical to experiment 1, but to increase pressure on the server the whole process iterates: after completing its operations the client disconnects, then requests a connection again and repeats, for five minutes in all. The results are shown in Fig. 6, which gives the system's network throughput (BT) under these conditions. When the connection count equals the pool capacity, the presence or absence of a buffer pool has little effect on throughput. When the connection count rises to 3800, the system with a buffer pool need not create threads in real time, so its resources go mainly to data transfer and its throughput is higher than that of the system without one. At 4000 connections the two throughputs are roughly level, and at 4200 the throughput without the buffer pool exceeds that with it by nearly 25%, showing that the buffer pool cannot raise processing efficiency when connections are far too many.
Figs. 7 to 9 show the change in disk activity before and after the buffer pool is added. Without it, the system's DT, DQL, and DBT overheads grow with the connection count, rising at 4200 connections by 26.6%, 20.3%, and 45.9% respectively over the initial 3600. This is mainly because the system keeps creating and destroying threads in real time; the total number of running threads grows and consumes large amounts of memory, forcing the system to use the disk as virtual memory and increasing the disk's burden accordingly. With the buffer pool, although throughput at 4200 connections is unsatisfactory, system overhead stays basically steady: the number of running threads falls, reducing thread-switching overhead, and because requests beyond pool capacity wait in the buffer pool, no threads need be dynamically created and destroyed, greatly reducing system load.
It follows that connections which exceed the thread pool capacity but are few in number should be placed in the buffer pool rather than handled by dynamically created threads; otherwise system performance declines sharply, because of the cost of creating and destroying threads and of the context switches needed to maintain them. Rashly increasing the thread count to admit more connections may raise nominal processing capacity, but it is likely to degrade overall system performance. Throughout, operations on the hard disk should be avoided as far as possible, because they drag down the system's running efficiency considerably.
Experiment 3: system response time comparison test
Experiment 2 shows that placing connections exceeding the thread pool capacity into the buffer pool, where they wait for an idle thread, reduces system overhead. However, the system reacts more slowly to new connections when their number is too large, and a new user may have to wait a long time to connect successfully. To find the inflection point of the system response time, the experiment above is repeated with a steadily increasing number of connection requests while the response time is observed; the results are shown in Fig. 10. The horizontal axis of Fig. 10 is the number of concurrent connections borne by the system; the vertical axis is the system's response time to a connection request. At 3600 connection requests, the presence or absence of the buffer pool has little effect on system performance, because no new threads are being created. At 3800 requests, the SRT of the system with the buffer pool is shorter, because waiting in the pool costs a new connection less time than creating a new thread would. But at 4000 requests, the response time of the system with the buffer pool rises sharply to 497 ms, and the growth is nonlinear, not proportional to the connection count, while the SRT without the buffer pool is almost unchanged at around 422 ms. This is because creating a new thread only adds system overhead; it does not slow down the processing of each individual task. At 4200 connections the SRT with the buffer pool is 885 ms, far above the 442 ms without it.
Inventive point 1: using thread pool technology to build the socket server
Communication Module Design
To meet the requirement of duplex communication, i.e. the server and the client sending and receiving data simultaneously, two threads must be set up, one for sending and one for receiving. What the thread pool of Fig. 2 holds are receiving threads: after a connection is established, such a thread takes over the Socket connection, receives messages and passes them to the upper layer through the receive message queue, and at the same time creates a send thread that monitors the send message queue (the interface to the upper layer, holding the results the upper layer has processed), takes out results and sends them. Meanwhile the receiving thread stays in a blocked state; when the client requests disconnection, the Socket connection is closed. In addition, to prevent the number of connections from growing too large, connections exceeding the number of threads in the pool are placed in the buffer pool and taken out again when an idle thread appears.
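The admission policy described above, with connections beyond the pool capacity parked in a buffer pool until a receiving thread frees up, can be sketched as follows. This is a minimal illustration rather than the patented implementation: the class and method names (ConnectionDispatcher, admit, release) are invented for the example, and real Socket objects are replaced by plain identifiers.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the pool/buffer-pool admission policy: connections up to the
// thread pool capacity are handled at once; the rest wait in the buffer pool.
public class ConnectionDispatcher {
    private final int poolCapacity;        // number of receiving threads in the pool
    private int activeThreads = 0;         // receiving threads currently occupied
    private final Queue<String> bufferPool = new ArrayDeque<>();

    public ConnectionDispatcher(int poolCapacity) { this.poolCapacity = poolCapacity; }

    // A new connection either takes a pool thread or is parked in the buffer pool.
    public synchronized String admit(String conn) {
        if (activeThreads < poolCapacity) {
            activeThreads++;
            return "POOL";
        }
        bufferPool.add(conn);
        return "BUFFER";
    }

    // When a receiving thread becomes idle it takes the next buffered connection,
    // so buffered requests are served without creating any new thread.
    public synchronized String release() {
        String next = bufferPool.poll();
        if (next == null) activeThreads--;   // no backlog: the thread goes idle
        return next;
    }

    public synchronized int buffered() { return bufferPool.size(); }

    public static void main(String[] args) {
        ConnectionDispatcher d = new ConnectionDispatcher(2);
        System.out.println(d.admit("c1")); // POOL
        System.out.println(d.admit("c2")); // POOL
        System.out.println(d.admit("c3")); // BUFFER: pool is full
        System.out.println(d.release());   // c3 taken from the buffer pool
    }
}
```

The point of the design is visible in `release()`: a freed thread drains the buffer pool before going idle, so excess requests never trigger thread creation.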
Thread pool design
The two main costs in a thread pool design are the creation and destruction of threads on one hand and their maintenance on the other. Let the first cost be C1 and the second be C2. C1 is mostly the time spent allocating memory for a thread; C2 is the thread's context-switch time. In practice C1 >> C2. The number of live threads in a real thread pool changes constantly; let r be the number of live threads and f(r) its distribution law. The expectation for a thread pool of size n is:
E(n) = Σ_{r=0}^{n} (C1·r − C2·n)·f(r) + Σ_{r=n+1}^{∞} (C1·n − C2·n)·f(r)    (1)
If the optimal thread pool size is n*, its expectation is:
E(n*) = sup_{n∈N} E(n)    (2)
Here N denotes the set of possible thread pool sizes, and sup denotes the least upper bound of E(n). Rewriting (1) in continuous form, with p(r) the probability density function of the number of live threads in the pool:
E(n) = ∫_0^n (C1·r − C2·n)·p(r) dr + ∫_n^∞ (C1·n − C2·n)·p(r) dr    (3)
To find the maximum of E(n), differentiate (3) with respect to n, which gives (4):
dE/dn = −C2 + C1·∫_n^∞ p(r) dr = 0    (4)
Rewriting (4), and letting ξ = C2/C1 be the ratio of the cost of maintaining a live thread to the cost of creating a new one:
∫_{n*}^∞ p(r) dr = ξ,  ∫_0^{n*} p(r) dr ≤ 1 − ξ    (5)
Because the thread pool size is an integer, its value is determined by:
∫_0^{n*} p(r) dr ≤ 1 − ξ,  ∫_0^{n*+1} p(r) dr > 1 − ξ    (6)
This shows that n* depends on ξ: the smaller the thread-switching cost is relative to the cost of thread creation and destruction, the larger the capacity of the thread pool. Equations (5) and (6) also show that n* depends on the current system load p(r).
When assessing the system's load capacity, assume that p(r) is uniformly distributed and that the number of users is 4000. Since creating and destroying a thread takes about 400 ms and a context switch about 20 ms, this yields n* = 3600.
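For a uniform load on [0, R], condition (6) reduces to n*/R ≤ 1 − ξ < (n*+1)/R, i.e. n* = ⌊R·(1 − ξ)⌋ for R users. The sketch below (the method name optimalSize is invented here) computes this. Note one assumption: the quoted result n* = 3600 corresponds to ξ = 0.1, whereas the quoted raw costs of roughly 400 ms and 20 ms would give ξ = 0.05 and n* = 3800, so the exact cost ratio intended by the text is taken here as a parameter rather than asserted.

```java
// Sketch: optimal thread pool size under a uniform live-thread distribution
// p(r) = 1/R on [0, R], following condition (6): the largest integer n* with
// n*/R <= 1 - xi, where xi = C2/C1.
public class OptimalPoolSize {
    public static int optimalSize(int maxUsers, double xi) {
        // small epsilon guards against floating-point round-off at integer boundaries
        return (int) Math.floor(maxUsers * (1.0 - xi) + 1e-9);
    }

    public static void main(String[] args) {
        // xi = 0.1 reproduces the n* = 3600 figure used in the experiments.
        System.out.println(optimalSize(4000, 0.1));   // 3600
        // The quoted raw costs (C2 ~ 20 ms, C1 ~ 400 ms) would give xi = 0.05.
        System.out.println(optimalSize(4000, 0.05));  // 3800
    }
}
```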
Inventive point 2: how to mark the tasks of different clients
Because each task belongs to a different client, the client's IP address and port number can be obtained from the Socket object belonging to that client. A Mark field is set whose value is the IP address and port number of the connection. In a while loop, the thread at each task polls the topmost element of the message queue MessageQueue and takes out a message as soon as it finds the corresponding Mark; otherwise the thread suspends.
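The Mark-field scheme can be sketched as below. The class and method names (MarkedQueue, takeIfMarked) are invented for the illustration; as the text describes, the Mark value is the "IP:port" pair drawn from the client's Socket, and a real worker thread would suspend on a mismatch rather than return null.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the Mark-field scheme: each task's thread polls the head of the
// shared message queue and takes a message only if its Mark (client IP and
// port) matches.
public class MarkedQueue {
    static final class Message {
        final String mark;    // "ip:port" taken from the client's Socket
        final String body;
        Message(String mark, String body) { this.mark = mark; this.body = body; }
    }

    private final Deque<Message> queue = new ArrayDeque<>();

    public synchronized void put(String clientIp, int port, String body) {
        queue.addLast(new Message(clientIp + ":" + port, body));
    }

    // Poll the topmost element; return its body if the mark matches, else null
    // (a real worker would suspend instead of returning).
    public synchronized String takeIfMarked(String clientIp, int port) {
        Message head = queue.peekFirst();
        if (head != null && head.mark.equals(clientIp + ":" + port)) {
            queue.pollFirst();
            return head.body;
        }
        return null;
    }

    public static void main(String[] args) {
        MarkedQueue q = new MarkedQueue();
        q.put("10.0.0.5", 8341, "hello");
        System.out.println(q.takeIfMarked("10.0.0.9", 9000)); // null: wrong client
        System.out.println(q.takeIfMarked("10.0.0.5", 8341)); // hello
    }
}
```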
Inventive point 3: solving possible blocking when receiving data
In the overall system, the client clicks the interface, the program sends a packet to the server, and the server then carries out the corresponding function; the tasks therefore occupy little CPU but frequently perform blocking I/O operations. If a user does not click the interface for a long time, the worker thread in the thread pool is held by that user indefinitely and cannot execute any other task. If all threads in the pool are in a blocked state, newly arriving tasks cannot be processed. Moreover, when the sender's data volume is too large, the receiver may be unable to receive it completely.
To address this, define three variables nIdx, nTotalLen and nReadLen, holding respectively the number of bytes read so far, the total number of bytes to read, and the number of bytes read in one iteration; the value of nTotalLen can be determined from a field in the packet header. A while loop keeps reading the input stream and exits when the end is reached. To avoid busy-waiting, an unhealthy way of using the CPU, a second while loop suspends any thread that has no data input and wakes it again when input arrives.
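The counting read loop can be sketched as below, keeping the variable names nIdx, nTotalLen and nReadLen from the text. The surrounding class and the use of a ByteArrayInputStream for demonstration are additions of this sketch, and the wait/notify half of the scheme (suspending a thread that has no input) is omitted for brevity.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Sketch of the counting read loop: nTotalLen comes from the packet header,
// nIdx accumulates what has been read, nReadLen is one iteration's count.
public class PacketReader {
    public static byte[] readPayload(InputStream in, int nTotalLen) throws IOException {
        byte[] buf = new byte[nTotalLen];
        int nIdx = 0;                                  // bytes read so far
        while (nIdx < nTotalLen) {                     // loop until the packet is complete
            int nReadLen = in.read(buf, nIdx, nTotalLen - nIdx);
            if (nReadLen == -1) throw new EOFException("stream ended early");
            nIdx += nReadLen;
        }
        return buf;
    }

    public static void main(String[] args) throws IOException {
        byte[] packet = "sample-payload".getBytes(StandardCharsets.UTF_8);
        InputStream in = new ByteArrayInputStream(packet);
        byte[] out = readPayload(in, packet.length);
        System.out.println(new String(out, StandardCharsets.UTF_8)); // sample-payload
    }
}
```

Looping on the running count nIdx rather than on a single read() call is what protects the receiver when the sender's data arrives in several fragments.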
Inventive point 4: solving the problem of read/write operations still being performed after the Socket is closed
Because the closing of the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the send thread has finished running. Establishing a notification mechanism solves this: the send thread sends a notice to the receiving thread once it has finished, and only then does the receiving thread close the Socket. As shown in Fig. 3, a protocol field PacSeq is added to number incoming request packets. For example, the receiving thread receives a packet with PacSeq = x; after the send thread has sent back the result required by that request packet, it notifies the receiving thread, which keeps the Socket connection open until that notice is received.
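The notification mechanism can be sketched as a guard keyed by PacSeq: the receiving thread registers each request's number, the send thread clears it after the reply is sent, and the Socket may be closed only once no reply is outstanding. The class and method names here are invented for the sketch.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the PacSeq notification mechanism: the Socket may only be closed
// once every numbered request packet has had its reply sent.
public class SocketCloseGuard {
    private final Set<Integer> pendingReplies = new HashSet<>();

    // Receiving thread: a request packet numbered pacSeq has arrived.
    public synchronized void onRequest(int pacSeq) { pendingReplies.add(pacSeq); }

    // Send thread: the reply for pacSeq has been sent; notify a waiting closer.
    public synchronized void onReplySent(int pacSeq) {
        pendingReplies.remove(pacSeq);
        notifyAll();
    }

    public synchronized boolean mayClose() { return pendingReplies.isEmpty(); }

    // Receiving thread blocks here instead of closing the Socket early.
    public synchronized void awaitSafeClose() throws InterruptedException {
        while (!pendingReplies.isEmpty()) wait();
    }

    public static void main(String[] args) {
        SocketCloseGuard guard = new SocketCloseGuard();
        guard.onRequest(7);                   // PacSeq = 7 received
        System.out.println(guard.mayClose()); // false: reply still outstanding
        guard.onReplySent(7);                 // send thread finished
        System.out.println(guard.mayClose()); // true: Socket may now be closed
    }
}
```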
Inventive point 5: solving the problem of unexpected client disconnection
This problem is solved by the algorithm shown in Fig. 4. The client sends a heartbeat packet every 5 minutes; at the server end, the receiving thread creates a daemon thread responsible for monitoring the heartbeat and running a countdown. The receiving thread does not process heartbeat packets itself; it simply notifies the daemon thread to reset the timer and then keeps the connection to the client. If the server receives nothing within 5 minutes, the daemon thread sends a notice that wakes the receiving thread, which closes the socket. Because all three threads are in a suspended state during the countdown, no system resources are occupied. A client that receives no feedback within a certain time after sending a request, or that encounters an exception, treats the server as disconnected and resends a connection request after the disconnection.
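The countdown logic of the daemon thread can be sketched with an injected clock, so the 5-minute timeout is testable without real waiting. The class name and the explicit time parameters are conveniences of this sketch, not part of the patented design; a real daemon thread would call expired() periodically against the system clock.

```java
// Sketch of the heartbeat countdown: the daemon thread's timer is reset on
// every heartbeat, and the connection is considered dead once the 5-minute
// window passes with no heartbeat. Time is passed in explicitly so the logic
// is testable without real waiting.
public class HeartbeatWatchdog {
    static final long TIMEOUT_MS = 5 * 60 * 1000;  // 5-minute heartbeat window
    private long lastBeatMs;

    public HeartbeatWatchdog(long nowMs) { this.lastBeatMs = nowMs; }

    // Receiving thread forwards a heartbeat: reset the countdown.
    public synchronized void onHeartbeat(long nowMs) { lastBeatMs = nowMs; }

    // Daemon thread checks whether the countdown has run out.
    public synchronized boolean expired(long nowMs) {
        return nowMs - lastBeatMs >= TIMEOUT_MS;
    }

    public static void main(String[] args) {
        HeartbeatWatchdog w = new HeartbeatWatchdog(0);
        System.out.println(w.expired(4 * 60 * 1000));  // false: still counting down
        w.onHeartbeat(4 * 60 * 1000);                  // heartbeat resets the timer
        System.out.println(w.expired(8 * 60 * 1000));  // false: reset at 4 min
        System.out.println(w.expired(9 * 60 * 1000));  // true: window elapsed
    }
}
```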

Claims (5)

1. A socket implementation method for an Internet of things-oriented platform, characterized in that, at the communication layer of the Internet of things platform, a buffer pool and a thread pool are used to establish multi-threaded concurrent connections, giving an efficient socket server design method, as follows:
1) use thread pool technology to build the socket server;
2) design the system operation support scheme;
3) analyse the working mechanism of the thread pool in depth, calculate the overhead of a dynamic thread pool when facing requests that exceed its capacity, and propose using a buffer pool to store the excess connection requests;
4) optimize the overall design and design solutions for common emergency cases;
Said use of thread pool technology to build the socket server comprises an improved communication module design and a thread pool design. The method adds a buffer pool in front of the thread pool to receive excess connections; when an idle thread appears in the thread pool, a new connection request is taken out of the buffer pool. The socket server is used for network communication and message-processing operations between two programs: the server must provide its own IP address and port number, and the client requests a connection to that address and port. The detailed process is: the server end establishes a server socket that listens for clients, establishes a connection after receiving a request, takes out messages of different data formats for processing, and returns the processed results; the client sends a connection request to the server and, once connected, sends messages to or receives messages from the server;
Said system operation support scheme is: the system uses a self-defined protocol; the client and server communicate by sending and receiving data packets that wrap message-processing tasks, with a packet size designed to be 100 Bytes; after the system processes a message it returns the message in packet form; to achieve duplex operation, a send thread and a receive thread are set up, the send thread being a child thread of the receive thread; messages received by the server are placed in the receive message queue and handed to the business layer for processing, and when the business layer finishes, each thread goes to the send message queue, takes out its own results and sends them;
Said solutions for common emergency cases are: 1) a scheme for marking the tasks of different clients; 2) a solution for possible blocking when receiving data; 3) an improved solution for read/write operations still being performed after the Socket is closed; 4) an improved solution for unexpected client disconnection;
Said improved communication module design has the following basic scheme:
To meet the requirement of duplex communication, with the server and the client sending and receiving data simultaneously, a send thread and a receive thread must be set up. What the thread pool holds are receiving threads: after a connection is established, such a thread takes over the Socket connection, receives and processes messages, and then passes them to the upper layer through the receive message queue; at the same time it creates a send thread that monitors the send message queue, which is the interface to the upper layer holding the results the upper layer has processed, and sends the information out; meanwhile the receiving thread stays in a blocked state, and when the client requests disconnection the Socket connection is closed; in addition, to prevent the number of connections from growing too large, connections exceeding the number of threads in the pool are placed in the buffer pool and taken out again when an idle thread appears;
In said thread pool design, the key issue is the setting of the thread pool size; the optimal thread pool size n* is determined as follows:
The two main costs in threading are the creation and destruction of threads on one hand and their maintenance on the other. Let the first cost be C1 and the second be C2; C1 is mostly the time spent allocating memory for a thread, and C2 is the thread's context-switch time; in practice C1 >> C2. Suppose the thread pool size is n, a fixed value, and the number of currently running threads is r, and consider the effect of using or not using a thread pool on system performance:
1) when the number of live threads in the pool is no greater than the pool size, 0 ≤ r ≤ n, with a thread pool the system's overhead is limited to switching among the threads, i.e. C2·n; without a thread pool the system must create and destroy a thread for each new connection, at a cost of C1·r; so using a thread pool improves system performance by C1·r − C2·n;
2) when the number of live threads exceeds the pool size, r > n, the number of tasks has exceeded the maximum of the thread pool and the pool must create new threads for the excess tasks, at a cost of C2·n + C1·(r − n), while without a thread pool the cost is still C1·r; using a thread pool improves system performance by C1·n − C2·n;
The live threads in a real thread pool change constantly; let the variable r be the number of currently running threads and f(r) its distribution law; the expectation for a thread pool of size n is:
E(n) = Σ_{r=0}^{n} (C1·r − C2·n)·f(r) + Σ_{r=n+1}^{∞} (C1·n − C2·n)·f(r)    (1)
If the optimal thread pool size is n*, its expectation is:
E(n*) = sup_{n∈N} E(n)    (2)
N denotes the set of all possible values of the thread pool size, and sup denotes the least upper bound of E(n); rewriting (1) in continuous form, with p(r) the probability density function of the number of live threads in the pool:
E(n) = ∫_0^n (C1·r − C2·n)·p(r) dr + ∫_n^∞ (C1·n − C2·n)·p(r) dr    (3)
To find the maximum of E(n), differentiate (3) with respect to n, which gives formula (4):
dE/dn = −C2 + C1·∫_n^∞ p(r) dr = 0    (4)
Rewriting (4), and letting ξ = C2/C1 be the ratio of the cost of maintaining a live thread to the cost of creating a new one:
∫_{n*}^∞ p(r) dr = ξ,  ∫_0^{n*} p(r) dr ≤ 1 − ξ    (5)
Because the thread pool size is an integer, n* can be determined by:
∫_0^{n*} p(r) dr ≤ 1 − ξ,  ∫_0^{n*+1} p(r) dr > 1 − ξ    (6)
Formula (6) shows that n* depends on ξ: the smaller the thread-switching cost is relative to the cost of thread creation and destruction, the larger the capacity of the thread pool; formulas (5) and (6) also show that n* depends on the current system load p(r).
2. The socket implementation method for an Internet of things-oriented platform according to claim 1, characterized in that the scheme for marking the tasks of different clients is: because each task belongs to a different client, the client's IP address and port number are obtained from the Socket object belonging to that client, and a mark field holding the client's IP address and port number is set, its value being the IP address and port number of the connection; in the thread polling loop, the thread at each task polls the topmost element of the message queue and, as soon as it finds the corresponding mark, takes out the message corresponding to that mark field; otherwise the thread suspends.
3. The socket implementation method for an Internet of things-oriented platform according to claim 1, characterized in that blocking may occur when receiving data: in the overall system, the client clicks the interface, the program sends a packet to the server, and the server then carries out the corresponding function, so the tasks occupy little CPU but frequently perform blocking I/O operations; if a user does not click the interface for a long time, the worker thread in the thread pool is held by that user indefinitely and cannot execute any task; if all threads in the pool are in a blocked state, newly arriving tasks cannot be processed, and when the sender's data volume is too large the receiver may be unable to receive it completely;
The solution is: define three variables nIdx, nTotalLen and nReadLen, where nIdx holds the number of bytes read so far, nTotalLen is the total number of bytes to read, and nReadLen is the number of bytes read in one polling iteration; the value of nTotalLen can be determined from a field in the packet header; the polling loop keeps reading the input stream and exits when the end is reached; to avoid busy-waiting, an unhealthy way of using the CPU, the next polling loop suspends any thread that has no data input and wakes it again when input arrives.
4. The socket implementation method for an Internet of things-oriented platform according to claim 1, characterized in that the improved solution for read/write operations still being performed after the Socket is closed is: because the closing of the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the send thread has finished running; establishing a notification mechanism, in which the send thread sends a notice to the receiving thread after it has finished and only then does the receiving thread close the Socket, solves the problem of "the Socket still performing read/write operations after being closed"; the corrective measure is to add a protocol field PacSeq, the number of a received request packet; after the send thread has sent back the result required by that request packet it notifies the receiving thread, which keeps the Socket connection open until it receives that notice.
5. The socket implementation method for an Internet of things-oriented platform according to claim 1, characterized in that the improved solution for unexpected client disconnection is: the client sends a heartbeat packet every 5 minutes; the corrective measure at the server end is that the receiving thread creates a daemon thread responsible for monitoring the heartbeat and running a countdown; the receiving thread does not process heartbeat packets itself but simply notifies the daemon thread to reset the timer and then keeps the connection to the client; if the server receives nothing within 5 minutes, the daemon thread sends a notice that wakes the receiving thread, which closes the socket; because all three threads are in a suspended state during the countdown, no system resources are occupied; a client that receives no feedback within a certain time after sending a request, or that encounters an exception, treats the server as disconnected and resends a connection request after the disconnection.
CN201210038597.XA 2012-02-20 2012-02-20 Internet of things platform-oriented socket implementation method Active CN102546437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210038597.XA CN102546437B (en) 2012-02-20 2012-02-20 Internet of things platform-oriented socket implementation method


Publications (2)

Publication Number Publication Date
CN102546437A CN102546437A (en) 2012-07-04
CN102546437B true CN102546437B (en) 2014-10-22

Family

ID=46352425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210038597.XA Active CN102546437B (en) 2012-02-20 2012-02-20 Internet of things platform-oriented socket implementation method

Country Status (1)

Country Link
CN (1) CN102546437B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102916953B (en) * 2012-10-12 2016-03-09 青岛海信传媒网络技术有限公司 The method and the device that realize concurrent services is connected based on TCP
CN105843592A (en) * 2015-01-12 2016-08-10 芋头科技(杭州)有限公司 System for implementing script operation in preset embedded system
CN104683460B (en) * 2015-02-15 2019-08-16 青岛海尔智能家电科技有限公司 A kind of communication means of Internet of Things, device and server
CN104735077B (en) * 2015-04-01 2017-11-24 积成电子股份有限公司 It is a kind of to realize the efficiently concurrent methods of UDP using Circular buffer and circle queue
CN104850460A (en) * 2015-06-02 2015-08-19 上海斐讯数据通信技术有限公司 Service program thread management method
CN105323319A (en) * 2015-11-09 2016-02-10 深圳市江波龙科技有限公司 Communication method and system for IOT equipment
US9923821B2 (en) * 2015-12-23 2018-03-20 Intel Corporation Managing communication congestion for internet of things devices
CN105740326B (en) * 2016-01-21 2021-01-15 腾讯科技(深圳)有限公司 Thread state monitoring method and device for browser
CN106656436B (en) * 2016-09-29 2020-05-22 安徽华速达电子科技有限公司 Communication management method and system based on intelligent optical network unit
CN108121598A (en) * 2016-11-29 2018-06-05 中兴通讯股份有限公司 Socket buffer resource management and device
CN106997307A (en) * 2017-02-13 2017-08-01 上海大学 A kind of Socket thread pool design methods towards multiple terminals radio communication
CN107147663A (en) * 2017-06-02 2017-09-08 广东暨通信息发展有限公司 The synchronous communication method and system of a kind of computer cluster
CN107332735A (en) * 2017-07-04 2017-11-07 四川长虹技佳精工有限公司 The network communication method of Auto-reconnect after disconnection
CN108075947B (en) * 2017-07-31 2024-02-27 北京微应软件科技有限公司 Storage device, PC (personal computer) end and maintenance method and system of communication connection connectivity
CN107454177A (en) * 2017-08-15 2017-12-08 合肥丹朋科技有限公司 The dynamic realizing method of network service
CN109428926B (en) * 2017-08-31 2022-04-12 北京京东尚科信息技术有限公司 Method and device for scheduling task nodes
CN107783848A (en) * 2017-09-27 2018-03-09 歌尔科技有限公司 A kind of JSON command handling methods and device based on socket communication
CN108566390B (en) * 2018-04-09 2020-03-17 中国科学院信息工程研究所 Satellite message monitoring and distributing service system
CN109450838A (en) * 2018-06-27 2019-03-08 北京班尼费特科技有限公司 A kind of intelligent express delivery cabinet network communication protocol based on Intelligent internet of things interaction platform
CN109727595B (en) * 2018-12-29 2021-08-03 神思电子技术股份有限公司 Software design method of voice recognition server
CN111859082A (en) * 2020-05-27 2020-10-30 伏羲科技(菏泽)有限公司 Identification analysis method and device
CN111858046A (en) * 2020-07-13 2020-10-30 海尔优家智能科技(北京)有限公司 Service request processing method and device, storage medium and electronic device
CN111917852A (en) * 2020-07-23 2020-11-10 上海珀立信息科技有限公司 Multi-person network synchronization system based on Unity and development method
CN113438247A (en) * 2021-06-29 2021-09-24 四川巧夺天工信息安全智能设备有限公司 Method for processing data interaction conflict in socket channel
CN116755863B (en) * 2023-08-14 2023-10-24 北京前景无忧电子科技股份有限公司 Socket thread pool design method for multi-terminal wireless communication

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101527719A (en) * 2009-04-27 2009-09-09 成都科来软件有限公司 Method for parallel analyzing TCP data flow


Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
Yibei Ling, et al., "Analysis of Optimal Thread Pool Size," ACM SIGOPS Operating Systems Review, vol. 34, no. 2, Feb. 2000, pp. 4-7. *
Zhao Wenqing, "Java implementation of a Socket-based concurrent server," Modern Electronics Technique, no. 2, Feb. 2002. *
Liu Yanwang, "Research and design of a Web-based real-time control system," China Master's Theses Full-text Database, Information Science, no. 5, May 2009, pp. I140-231. *
Zhou Fengshi, "Principle and implementation of the heartbeat mechanism in Windows Socket-based network communication," Journal of Shazhou Professional Institute of Technology, vol. 12, no. 3, Sep. 2009, pp. 17-21. *
Xia Ling, "Socket communication between client and server," Computer Programming Skills & Maintenance, no. 17, Oct. 2009. *

Also Published As

Publication number Publication date
CN102546437A (en) 2012-07-04

Similar Documents

Publication Publication Date Title
CN102546437B (en) Internet of things platform-oriented socket implementation method
CN102004670B (en) Self-adaptive job scheduling method based on MapReduce
CN101917490B (en) Method and system for reading cache data
CN104636957B (en) A kind of system and method for processing high concurrent request of data
CN102916953A (en) Method and device for realizing concurrent service on basis of TCP (transmission control protocol) connection
CN104901898A (en) Load balancing method and device
CN103516744A (en) A data processing method, an application server and an application server cluster
CN103164287A (en) Distributed-type parallel computing platform system based on Web dynamic participation
CN101652750A (en) Data processing device, distributed processing system, data processing method, and data processing program
CN102929834A (en) Many-core processor and inter-core communication method thereof and main core and auxiliary core
CN107493351A (en) A kind of client accesses the method and device of the load balancing of storage system
CN104811503A (en) R statistical modeling system
CN101383814B (en) Device and method implementing data access based on connection pool
CN102217247B (en) Method, apparatus and system for implementing multiple web application requests scheduling
CN102385536A (en) Method and system for realization of parallel computing
CN114710571A (en) Data packet processing system
CN107577497A (en) A kind of method, system and the relevant apparatus of research and development of software management
CN106570011A (en) Distributed crawler URL seed distribution method, dispatching node, and grabbing node
CN201813401U (en) System for reading buffer data
CN111427674A (en) Micro-service management method, device and system
CN102982001B (en) The method of many-core processor and space access thereof, main core
CN104184685A (en) Data center resource allocation method, device and system
CN115567594A (en) Microservice request processing method, microservice request processing device, computer equipment and storage medium
CN114510361A (en) Internet of things real-time data processing method based on dynamic rules
CN114595083A (en) Message processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20120704

Assignee: Jiangsu Nanyou IOT Technology Park Ltd.

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: 2016320000211

Denomination of invention: Internet of things platform-oriented socket implementation method

Granted publication date: 20141022

License type: Common License

Record date: 20161114

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model
EC01 Cancellation of recordation of patent licensing contract

Assignee: Jiangsu Nanyou IOT Technology Park Ltd.

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: 2016320000211

Date of cancellation: 20180116

TR01 Transfer of patent right

Effective date of registration: 20200610

Address after: Room 408, block D, Caiying building, No.99 Tuanjie Road, Jiangbei new district, Nanjing, Jiangsu

Patentee after: Jiangsu Jiangxin Electronic Technology Co., Ltd

Address before: 210003, No. 66, new exemplary Road, Nanjing, Jiangsu

Patentee before: NANJING University OF POSTS AND TELECOMMUNICATIONS

TR01 Transfer of patent right

Effective date of registration: 20210507

Address after: 210009 3-1-1902, talent apartment, 32 dingjiaqiao, Gulou District, Nanjing City, Jiangsu Province

Patentee after: Wang Kun

Address before: Room 408, block D, Yingying building, 99 Tuanjie Road, Jiangbei new district, Nanjing City, Jiangsu Province, 211899

Patentee before: Jiangsu Jiangxin Electronic Technology Co., Ltd