CN102497431B - Memory application method and system for caching application data of transmission control protocol (TCP) connection


Info

Publication number
CN102497431B
CN102497431B
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN201110415220.7A
Other languages
Chinese (zh)
Other versions
CN102497431A (en)
Inventor
刘灿
刘朝辉
窦晓光
纪奎
邵宗有
Current Assignee
Dawning Information Industry Beijing Co Ltd
Dawning Information Industry Co Ltd
Original Assignee
Dawning Information Industry Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Dawning Information Industry Beijing Co Ltd filed Critical Dawning Information Industry Beijing Co Ltd
Priority to CN201110415220.7A priority Critical patent/CN102497431B/en
Publication of CN102497431A publication Critical patent/CN102497431A/en
Application granted granted Critical
Publication of CN102497431B publication Critical patent/CN102497431B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a method for caching the application data of a transmission control protocol (TCP) connection. Under light application load, the TCP connection requests a fixed-length buffer block from a static buffer pool; under heavy application load, it dynamically requests a fixed-length buffer block from the operating system. Compared with the prior art, the method lets the upper-layer application temporarily store payloads for content analysis: when the memory load of the upper-layer application is light, resources are obtained quickly from the static buffer pool, and when the load is heavy, data is still buffered appropriately so that packets are not dropped.

Description

Memory allocation method and system for caching the application data of a TCP connection
Technical field
The invention belongs to the field of network security and relates in particular to a memory allocation method and system for caching the application data of a TCP connection.
Background art
With the rapid development of networks, networking has brought convenience but also many problems: for example, pornography, anti-government opinion and the like can be spread over the network. Monitoring the network has therefore become more and more important. Most current networks adopt the four-layer TCP/IP model; to monitor data content at the application layer, the application payload of each packet must be inspected. Under the TCP/IP model, the payload only needs to be analysed at the transport layer. For applications carried over TCP connections, the data of each TCP connection can be inspected to determine whether its content is illegal.
Patent No. CN200580031571.0 (Caching content and state data at a network element) discloses methods for caching content and state data at a network element. In one embodiment, data packets are intercepted at the network element. From the packets, an application-layer message specifying a request from an application to a server for particular data is determined. A first part of the particular data is determined to be contained in the cache of the network element. A message requesting the second part of the data, which is not contained in the cache, is sent to the server application. A first response containing the second part but not the first part is received, and a second response containing both the first and second parts is sent to the client application. In another embodiment, data packets are intercepted at the network element, an application-layer message specifying session or database connection state information is determined from the packets, and the state information is cached at the network element.
Patent No. CN200680012181.3 (A distributed data management system and a method for dynamic data subscription) discloses a distributed data management system comprising an application module (1) and a data management system (2). The application module (1) is provided with a data access module (11) and a data buffer (12); the data management system (2) is provided with a subscription management module (21), a subscription list module (22), a notification module (23) and a data store (24). In addition, the application module (1) is also provided with a dynamic subscription management module (14) and a data recording module (15), and the data management system (2) is also provided with a data release module (25) connected to the data store (24). The dynamic subscription management module (14) is connected to the data recording module (15), the data buffer (12) and the data access module (11), respectively, and communicates with the subscription management module (21), the notification module (23) and the data release module (25). That invention also comprises a method of dynamic data subscription. By adopting it, the amount of data transmitted over the network and processed by the system can be effectively reduced, the network burden lightened, and the serviceability of the system improved.
In the above TCP offload systems, the software and hardware configure no buffer, or only a small buffer, for caching application data.
The drawback of the above techniques is that, in a TCP offload system, the hardware configures no buffer, or only a small buffer, for caching application data. The payload content of TCP connections is not inspected, and temporarily caching part of the data for the upper-layer application is not supported, so the content analysis of the upper-layer application cannot be well supported; when the upper-layer application is busy, the only option is to drop packets.
Summary of the invention
The present invention overcomes the deficiencies of the prior art by providing a cache allocation mechanism for applications: a certain amount of memory is statically allocated to each connection, and when that memory is insufficient, additional memory is obtained from the operating system by dynamic allocation. Combining the two saves resources while still satisfying the application's demand as quickly as possible.
The invention provides a memory allocation method for caching the application data of a TCP connection, comprising the following steps:
(1) Initialization: according to the scale of the application, allocate a number of nodes of several sizes (for example three sizes: 5 KB, 1.5 KB and 0.5 KB) to form a static pool of stream cache nodes; go to step (2).
(2) For a stream-node allocation request, go to step (3); for a stream-node release, go to step (7).
(3) Request an idle node from the static pool; if the request succeeds, enter step (5), otherwise enter step (4).
(4) Request a dynamic stream cache node from the operating system (its size is the smallest static size that satisfies the demand); if the request succeeds, enter step (5), otherwise enter step (6).
(5) Return the node head pointer and go to step (11).
(6) Return a null pointer and go to step (11).
(7) If the stream cache node carries a dynamic-allocation mark, go to step (8); otherwise go to step (9).
(8) If the number of stream cache nodes of the same size in the static pool is below a set threshold (for example 1k), go to step (9); otherwise go to step (10).
(9) Put the stream cache node back into the static pool and go to step (11).
(10) Return the stream cache node to the operating system and go to step (11).
(11) End.
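To make the flow above concrete, the following C sketch shows one possible shape of the static pool and of the allocation path of steps (1)-(6). It is only an illustration under assumptions: the names (stream_node, static_pool, pool_init, class_for, node_alloc) and the fixed per-class node count are invented for the example and are not taken from the patent; the three node sizes and the 1k threshold follow the values given above.

```c
#include <stdlib.h>
#include <string.h>

#define NUM_CLASSES 3

/* The three node sizes named above: 5 KB, 1.5 KB and 0.5 KB. */
static const size_t class_size[NUM_CLASSES] = { 5120, 1536, 512 };

/* One stream cache node: a small header followed by a fixed-length buffer block. */
struct stream_node {
    struct stream_node *next; /* link in the freebuf list                  */
    int    cls;               /* size class of this node                   */
    int    dynamic;           /* 1 if obtained from the OS, 0 if static    */
    size_t len;               /* capacity of the buffer block              */
    char   data[];            /* the fixed-length buffer block itself      */
};

/* The static pool: one freebuf list and one free-node counter per size class. */
struct static_pool {
    struct stream_node *free_list[NUM_CLASSES];
    size_t free_count[NUM_CLASSES];
    size_t threshold[NUM_CLASSES];   /* e.g. 1024 nodes, per the example above */
};

/* Step (1): pre-allocate `count` nodes of every size class at initialization. */
static int pool_init(struct static_pool *p, size_t count)
{
    memset(p, 0, sizeof(*p));
    for (int c = 0; c < NUM_CLASSES; c++) {
        p->threshold[c] = 1024;
        for (size_t i = 0; i < count; i++) {
            struct stream_node *n = malloc(sizeof(*n) + class_size[c]);
            if (!n)
                return -1;
            n->cls = c;
            n->dynamic = 0;
            n->len = class_size[c];
            n->next = p->free_list[c];
            p->free_list[c] = n;
            p->free_count[c]++;
        }
    }
    return 0;
}

/* Pick the smallest size class that can hold `need` bytes. */
static int class_for(size_t need)
{
    int best = -1;
    for (int c = 0; c < NUM_CLASSES; c++)
        if (class_size[c] >= need &&
            (best < 0 || class_size[c] < class_size[best]))
            best = c;
    return best;
}

/* Steps (3)-(6): take an idle node from the static pool if possible,
 * otherwise ask the operating system for a node of the same fixed length
 * and mark it as dynamically allocated. Returns the node head pointer on
 * success and NULL when both the pool and the OS are exhausted. */
static struct stream_node *node_alloc(struct static_pool *p, size_t need)
{
    int c = class_for(need);
    if (c < 0)
        return NULL;

    if (p->free_list[c]) {            /* step (3): hit in the static pool */
        struct stream_node *n = p->free_list[c];
        p->free_list[c] = n->next;
        p->free_count[c]--;
        return n;                     /* step (5) */
    }

    struct stream_node *n = malloc(sizeof(*n) + class_size[c]); /* step (4) */
    if (!n)
        return NULL;                  /* step (6): resources exhausted */
    n->cls = c;
    n->dynamic = 1;
    n->len = class_size[c];
    n->next = NULL;
    return n;                         /* step (5) */
}
```

Keeping the header and the fixed-length buffer block in one allocation means the node head pointer returned in step (5) also identifies the buffer block itself.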
In the memory allocation method for caching the application data of a TCP connection provided by the invention, in step (3) the TCP connection requests a buffer block of fixed length len from the freebuf linked list of the static cache pool.
In the memory allocation method for caching the application data of a TCP connection provided by the invention, in step (4) the TCP connection dynamically requests a buffer block of fixed length len from the operating system.
In the memory allocation method for caching the application data of a TCP connection provided by the invention, in steps (7)-(10) whether a node is reclaimed by the static pool or by the operating system is decided according to the mark of the node and the number of nodes in the static pool having the same size as the released node (compared with the preset threshold for that size).
In the memory allocation method for caching the application data of a TCP connection provided by the invention, when the TCP connection is closed or evicted, the node-reclamation processing of steps (7)-(10) is also applied.
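Continuing the same illustrative structures, the release path of steps (7)-(10) could look as follows. This is again a sketch under assumptions: the patent does not say whether the dynamic mark is cleared when a dynamic node is absorbed into the static pool, so clearing it here is one possible interpretation.

```c
/* Steps (7)-(10): decide whether a released node is reclaimed by the
 * static pool or by the operating system. */
static void node_free(struct static_pool *p, struct stream_node *n)
{
    int c = n->cls;

    /* Steps (7)-(8): a dynamically allocated node is handed back to the
     * operating system only when the pool already holds at least the
     * threshold number of free nodes of this size. */
    if (n->dynamic && p->free_count[c] >= p->threshold[c]) {
        free(n);                          /* step (10) */
        return;
    }

    /* Step (9): otherwise push the node back onto the freebuf list. */
    n->dynamic = 0;   /* the node now belongs to the static pool (one interpretation) */
    n->next = p->free_list[c];
    p->free_list[c] = n;
    p->free_count[c]++;
}
```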
The invention also provides a memory allocation system for caching the application data of a TCP connection, comprising the following modules:
(1) An initialization module, which according to the scale of the application allocates a number of nodes of several sizes to form a static pool of stream cache nodes.
(2) A static-buffer-block request module: a stream-node allocation request goes to module (3); a stream-node release goes to module (7).
(3) Request an idle node from the static pool; if the request succeeds, enter module (5), otherwise enter module (4).
(4) A dynamic-buffer request module: request a dynamic stream cache node from the operating system; if the request succeeds, enter module (5), otherwise enter module (6).
(5) Return the node head pointer and go to module (11).
(6) Return a null pointer and go to module (11).
(7) If the stream cache node carries a dynamic-allocation mark, go to module (8); otherwise go to module (9).
(8) If the number of stream cache nodes of the same size in the static pool is below the set threshold, go to module (9); otherwise go to module (10).
(9) Put the stream cache node back into the static pool and go to module (11).
(10) Return the stream cache node to the operating system and go to module (11).
(11) End.
Wherein the sizes described in the initialization module comprise three kinds, namely 5 KB, 1.5 KB and 0.5 KB; the node size described in the dynamic-buffer request module is the smallest static size that can satisfy the demand.
In the memory allocation system for caching the application data of a TCP connection provided by the invention, in module (2) the TCP connection requests a buffer block of fixed length len from the freebuf linked list of the static cache pool.
In the memory allocation system for caching the application data of a TCP connection provided by the invention, in module (4) the TCP connection requests a buffer block of fixed length len from the freebuf linked list of the dynamic cache pool.
In the memory allocation system for caching the application data of a TCP connection provided by the invention, in modules (7)-(10) whether a stream cache node is reclaimed to the static pool or to the operating system is decided according to the mark of the stream cache node (dynamic or static allocation) and the number of stream cache nodes in the static buffer pool.
In the memory allocation system for caching the application data of a TCP connection provided by the invention, when the TCP connection is closed or evicted, the node-reclamation processing of modules (7)-(10) is also applied.
Compared with the prior art, the beneficial effects of the invention are as follows. The upper-layer application is well supported in temporarily storing payloads for content analysis; even when the CPU load of the upper-layer application is high, the data can still be buffered appropriately, so packet loss is avoided. By combining static and dynamic allocation, both allocation speed and allocation efficiency are taken into account: when the system consumes few stream cache nodes, nodes are obtained directly from the static pool, which makes allocation fast; when the system consumes many stream cache nodes, nodes are obtained from the operating system, which makes effective use of operating-system resources. Static and dynamic release are likewise combined: a stream cache node allocated from the static pool is released back to the static pool, while a node dynamically allocated from the operating system is released either to the static pool or to the operating system, depending on how many static nodes the system has consumed. In summary: 1. a certain number of cache nodes are pre-allocated into a stream-cache-node pool, from which nodes are allocated and released directly, avoiding frequent allocation and release of stream cache nodes from the operating system; 2. when the static pool runs short of stream cache nodes, nodes can be requested from the operating system, so the allocation demand is still met; 3. when a stream cache node is released, whether it is released to the static pool or to the operating system is decided from the node's mark (obtained from the static cache or dynamically from the operating system) and the number of idle stream cache nodes in the static pool.
Brief description of the drawings
Fig. 1 is a flow chart of the present invention.
Embodiment
Referring to the flow chart of the invention in Fig. 1, the method of the invention proceeds as follows:
1. The TCP connection requests a buffer block of fixed length len from the freebuf linked list of the static cache pool (every later mention of len refers to this fixed length).
2. If step 1 fails, the static memory is insufficient and a buffer of length len is requested dynamically. If the dynamic request fails, the system resources are exhausted and null is returned; if it succeeds, the information node of this buffer is linked into the dynamic linked list and the dynamic mark is recorded in the information node. If step 1 succeeds, the buffer block is linked into the static linked list and the corresponding mark is recorded in the information node.
3. When the TCP connection is closed or evicted, the buffer is returned to the system or put back onto the static free list according to the mark of the information node.
The invention first statically allocates a region of memory as a cache pool, which satisfies the caching needs of TCP streams under normal traffic. When traffic is heavier, memory is allocated dynamically; when traffic returns to a normal level, the dynamically allocated memory is returned to the system according to a certain policy. The buffer holds the data to be delivered to the application and delivers it to the application on demand. In this way a memory allocation mechanism for caching the application-layer data of TCP connections is provided.
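As a usage illustration only, and still relying on the hypothetical helpers sketched above, a connection might buffer an incoming payload for content analysis and hand the node back to the reclamation path when the connection is closed or evicted. The tcp_conn structure below is assumed for the example and is not part of the patent.

```c
#include <string.h>

/* Assumed per-connection state for the example. */
struct tcp_conn {
    struct stream_node *cached; /* payload held for content analysis */
    size_t cached_len;
};

/* Buffer one application payload for later analysis. Returns 0 on success,
 * -1 when both the static pool and the OS are exhausted (the null-pointer
 * case of step (6)). */
static int conn_buffer_payload(struct static_pool *pool, struct tcp_conn *conn,
                               const char *payload, size_t len)
{
    struct stream_node *n = node_alloc(pool, len);
    if (!n)
        return -1;
    memcpy(n->data, payload, len);
    conn->cached = n;
    conn->cached_len = len;
    return 0;
}

/* On connection close or eviction, hand the node to the reclamation path. */
static void conn_close(struct static_pool *pool, struct tcp_conn *conn)
{
    if (conn->cached) {
        node_free(pool, conn->cached);
        conn->cached = NULL;
        conn->cached_len = 0;
    }
}
```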
The above embodiments are intended only to illustrate, not to limit, the technical solution of the invention. Although the invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that the specific embodiments of the invention may still be modified or equivalently replaced, and any modification or equivalent replacement that does not depart from the spirit and scope of the invention shall be covered by the scope of the claims of the invention.

Claims (8)

1. A method for caching the application data of a TCP connection, comprising the following steps:
(1) initialization: according to the scale of the application, allocating a number of nodes of several sizes to form a static pool of stream cache nodes; going to step (2);
(2) a stream-cache-node allocation request going to step (3); a stream-cache-node release going to step (7);
(3) requesting an idle node from the static pool; if the request succeeds, entering step (5), otherwise entering step (4);
(4) requesting a dynamic stream cache node from the operating system; if the request succeeds, entering step (5), otherwise entering step (6);
(5) returning the node head pointer and going to step (11);
(6) returning a null pointer and going to step (11);
(7) if the stream cache node carries a dynamic-allocation mark, going to step (8), otherwise going to step (9);
(8) if the number of stream cache nodes in the static pool having the same size as the stream cache node currently being released is below a set threshold, going to step (9), otherwise going to step (10);
(9) putting the stream cache node back into the static pool and going to step (11);
(10) returning the stream cache node to the operating system and going to step (11);
(11) ending;
wherein the sizes of the stream cache nodes in step (1) comprise three kinds, namely 5 KB, 1.5 KB and 0.5 KB; the size of the dynamic stream cache node in step (4) is the smallest stream-cache-node size in the static pool that can satisfy the demand of the dynamic stream-cache-node request;
in steps (7)-(10), whether the stream cache node is reclaimed by the static pool or by the operating system is decided according to the dynamic-allocation mark of the stream cache node and the number of stream cache nodes in the static pool having the same size as the stream cache node currently being released.
2. The method according to claim 1, characterized in that in step (3) the TCP connection requests a buffer block of fixed length len from the freebuf linked list of the static pool.
3. The method according to claim 1 or 2, characterized in that in step (4) the TCP connection dynamically requests a buffer block of fixed length len from the operating system.
4. The method according to claim 1, characterized in that when the TCP connection is closed or evicted, the node-reclamation processing of steps (7)-(10) is also applied.
5. A memory allocation system for caching the application data of a TCP connection, comprising the following modules (1)-(11):
module (1): an initialization module, which according to the scale of the application allocates a number of nodes of several sizes to form a static pool of stream cache nodes;
module (2): a stream-cache-node allocation request goes to module (3); a stream-cache-node release goes to module (7);
module (3): requesting an idle node from the static pool; if the request succeeds, entering module (5), otherwise entering module (4);
module (4): requesting a dynamic stream cache node from the operating system; if the request succeeds, entering module (5), otherwise entering module (6);
module (5): returning the node head pointer and going to module (11);
module (6): returning a null pointer and going to module (11);
module (7): if the stream cache node carries a dynamic-allocation mark, going to module (8), otherwise going to module (9);
module (8): if the number of stream cache nodes in the static pool having the same size as the stream cache node currently being released is below a set threshold, going to module (9), otherwise going to module (10);
module (9): putting the stream cache node back into the static pool and going to module (11);
module (10): returning the stream cache node to the operating system and going to module (11);
module (11): ending;
wherein the sizes of the stream cache nodes in the initialization module comprise three kinds, namely 5 KB, 1.5 KB and 0.5 KB; the size of the dynamic stream cache node in module (4) is the smallest stream-cache-node size in the static pool that can satisfy the demand of the dynamic stream-cache-node request;
in modules (7)-(10), whether the stream cache node is reclaimed to the static pool or to the operating system is decided according to the dynamic-allocation mark of the stream cache node and the number of stream cache nodes in the static pool having the same size as the stream cache node currently being released.
6. The system according to claim 5, characterized in that in module (3) the TCP connection requests a buffer block of fixed length len from the freebuf linked list of the static pool.
7. The system according to claim 5 or 6, characterized in that in module (4) the TCP connection dynamically requests a buffer block of fixed length len from the operating system.
8. The system according to claim 5, characterized in that when the TCP connection is closed or evicted, the node-reclamation processing of modules (7)-(10) is also applied.
CN201110415220.7A 2011-12-13 2011-12-13 Memory application method and system for caching application data of transmission control protocol (TCP) connection Active CN102497431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110415220.7A CN102497431B (en) 2011-12-13 2011-12-13 Memory application method and system for caching application data of transmission control protocol (TCP) connection


Publications (2)

Publication Number Publication Date
CN102497431A CN102497431A (en) 2012-06-13
CN102497431B true CN102497431B (en) 2014-10-22

Family

ID=46189216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110415220.7A Active CN102497431B (en) 2011-12-13 2011-12-13 Memory application method and system for caching application data of transmission control protocol (TCP) connection

Country Status (1)

Country Link
CN (1) CN102497431B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103761192B (en) * 2014-01-20 2016-08-17 华为技术有限公司 A kind of method and apparatus of Memory Allocation
CN113992731B (en) * 2021-11-02 2024-04-30 四川安迪科技实业有限公司 Abnormal control method and device based on STOMP protocol

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1444812A (en) * 2000-07-24 2003-09-24 睦塞德技术公司 Method and apparatus for reducing pool starvation in shared memory switch
CN1798094A (en) * 2004-12-23 2006-07-05 华为技术有限公司 Method of using buffer area
EP1890425A1 (en) * 2005-12-22 2008-02-20 Huawei Technologies Co., Ltd. A distributed data management system and a method for data dynamic subscribing
CN101069169B (en) * 2004-11-23 2010-10-27 思科技术公司 Caching content and state data at a network element

Also Published As

Publication number Publication date
CN102497431A (en) 2012-06-13

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220728

Address after: 100193 No. 36 Building, No. 8 Hospital, Wangxi Road, Haidian District, Beijing

Patentee after: Dawning Information Industry (Beijing) Co.,Ltd.

Patentee after: DAWNING INFORMATION INDUSTRY Co.,Ltd.

Address before: 100084 Beijing Haidian District City Mill Street No. 64

Patentee before: Dawning Information Industry (Beijing) Co.,Ltd.

TR01 Transfer of patent right