CN100539713C - Method for storing data streams on a single board - Google Patents

Method for storing data streams on a single board

Info

Publication number
CN100539713C
CN100539713C (application CNB200410074119XA / CN200410074119A)
Authority
CN
China
Prior art keywords
single board
data stream
system parameters
data stream type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB200410074119XA
Other languages
Chinese (zh)
Other versions
CN1744725A (en)
Inventor
朱志明
周艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CNB200410074119XA priority Critical patent/CN100539713C/en
Publication of CN1744725A publication Critical patent/CN1744725A/en
Application granted granted Critical
Publication of CN100539713C publication Critical patent/CN100539713C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method for storing data streams on a single board, comprising the following steps: A. determining the data stream types that affect the processing capability of the board, and deriving from a traffic model the maximum number of concurrently processed streams of each determined type; B. the board dividing its own memory into different memory regions according to the determined data stream types, and sizing the region corresponding to each type according to the maximum number of concurrently processed streams of that type; C. when the board receives a data stream, storing it in the memory region corresponding to its type. The invention reduces the occupation of the board's storage resources and greatly increases the board's capacity.

Description

Method for storing data streams on a single board
Technical field
The present invention relates to communication system technology, and in particular to a method for storing data streams on a single board.
Background technology
At present, communication systems are developing toward higher integration, and their processing capability has grown many times over, giving them the ability to handle more traffic and carry heavier service loads. To handle a heavier service load, a communication system must have a higher capacity.
For a communication system built on a board-level architecture, how to store the various data streams on each board in a reasonable way is a key factor affecting the capacity of the board and, in turn, of the whole system.
Fig. 1 is a schematic diagram of prior art method 1, which stores data streams on a board by pre-dividing fixed memory regions. Referring to Fig. 1, this method marks off several fixed-size regions in the board's memory in advance, each used to store one type of data stream: for example, region A stores streams of type A, region B stores streams of type B, and region C stores streams of type C. When the board receives a data stream, it stores the stream by type: a stream of type A is placed in region A, and a stream of type B is placed in region B.
As can be seen, this prior art method of pre-dividing fixed regions and storing streams by type has the following shortcomings:
1. The size of each region is fixed and is not set according to how many streams of each type actually arrive. When streams of one particular type are numerous and need a large amount of space while streams of other types are few and need little, the region for that type cannot meet its storage demand, while the regions for the other types still hold large amounts of idle space. The board's storage resources are thus wasted, reducing the board's capacity and hence the system's capacity.
2. When the region storing some stream type cannot meet that type's demand, system capacity must be raised by adding boards, which inevitably increases cost and lowers the integration level of the boards.
Fig. 2 is a schematic diagram of prior art method 2, which stores data streams on a board by allocating memory dynamically. Referring to Fig. 2, this method does not pre-divide the board's memory. When the board receives a data stream, it does not distinguish the stream's type; it simply allocates a block of memory sized to the stream and stores the stream there. For example, on receiving a stream of type A, the board allocates a block sized to that stream and stores the stream in it; on receiving a stream of type B, the board allocates another block and stores that stream in it.
As can be seen, this method of dynamically allocating the board's memory has the following shortcomings:
1. Every received stream requires the board to allocate a block of memory sized to the stream, and these frequent allocation operations consume a large amount of CPU resources, greatly reducing the board's processing speed and performance.
2. Because streams are not stored by type, the memory layout is chaotic. It is difficult to look up or locate streams of a particular type with an internal mechanism such as an integer index, and difficult to analyze or locate faults by reading memory directly, which greatly increases the difficulty of maintaining the data stored on the board.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a method for storing data streams on a single board that reduces the occupation of the board's storage resources and greatly increases the board's capacity.
To achieve the above purpose, the technical scheme of the present invention is realized as follows.
A method for storing data streams on a single board comprises the following steps:
A. Determine the data stream types that affect the board's processing capability, and derive from a traffic model the maximum number of concurrently processed streams of each determined type, wherein:
determining the data stream types that affect the board's processing capability comprises: running a performance test on a single data stream of one type in the traffic model, and judging whether the board's processing capability is affected as that single stream grows; if so, the type is determined to be one that affects the board's processing capability;
deriving from the traffic model the maximum number of concurrently processed streams of each determined type comprises: according to the system's maximum concurrent processing requirement, decomposing the per-type figures in the traffic model onto each board, and then deriving the maximum concurrency of each determined type by analysis, by testing the influence on system performance of a single stream of the determined type, and by verification.
B. The board divides its own memory into different regions according to the determined data stream types, and sizes the region corresponding to each type according to the maximum concurrency of that type.
C. When the board receives a data stream, it stores the stream in the memory region corresponding to the stream's type.
In step B, sizing the region corresponding to a type according to the maximum concurrency of that type comprises: multiplying the unit memory space required by one stream of the type by the maximum concurrency of the type determined in step A, which yields the size of the memory region corresponding to the type.
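The sizing rule above is a single multiplication; a minimal sketch, in which the byte count and concurrency figure are illustrative assumptions rather than values from the patent:

```python
def region_size(unit_bytes: int, max_concurrency: int) -> int:
    """Size of the memory region for one data stream type (step B):
    per-stream unit space times the type's maximum concurrency."""
    return unit_bytes * max_concurrency

# Example: assume one stream of a type needs 128 bytes, with at most
# 40000 concurrent streams of that type.
print(region_size(128, 40000))  # 5120000
```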
In step B, the board performs the division of memory regions and the sizing of each region at start-up.
Between steps A and B, the method further comprises: defining each determined data stream type as a system parameter of the board, defining the derived maximum concurrency of each data stream type as the value of the corresponding system parameter, arranging all system parameters and their values in a system parameter configuration file, and then configuring this file in the system.
Step B then comprises:
B1. The board obtains its own system parameters and their values from the system parameter configuration file configured in the system.
B2. The board divides its memory into different regions according to the different system parameters obtained.
B3. The board sizes the region corresponding to each system parameter according to the value of that parameter.
As can be seen, the method proposed by the present invention has the following advantages:
1. Both the present invention and prior art method 1 pre-divide the board's memory into regions for the various stream types, but method 1 does not consider the actual storage demand of each type and simply allocates a fixed-size region per type, so it cannot make full use of the board's resources. The present invention instead divides the board's memory according to the demand each stream type places on memory in actual use, as determined by the traffic model; on receiving a stream, the board merely stores it in the region for its type. The memory divided according to the traffic model therefore reasonably matches the storage demand of every stream type, avoiding the situation in method 1 where the region for one type cannot meet demand while the regions for other types hold large amounts of idle space. The invention thus makes full use of the board's storage resources, greatly increasing the capacity of the board and hence of the whole system.
2. Because the board's memory is pre-divided by stream type, the board only needs to store a received stream in the corresponding region; it need not perform the frequent dynamic allocation of method 2, which reduces CPU consumption. Moreover, since the memory layout is well planned, streams of a given type can easily be looked up or located, which reduces the difficulty of maintaining the data stored on the board while making full use of the board's storage resources.
3. The invention raises system capacity by making full use of the board's own resources rather than by adding boards, so it greatly improves the integration level of the communication system and reduces cost.
4. In a given area, the system is configured with the system parameter configuration file of the traffic model currently in use, so the memory the board divides according to that file matches the storage demand of the current services. When the traffic model used in the area changes, i.e. when the storage demand of the stream types changes, only the traffic model's configuration file in the system need be modified accordingly; the board then re-divides its memory according to the modified file, flexibly adapting to the development and variation of services.
5. Flexible division of the board's memory is achieved merely by updating the traffic model's system parameter configuration file in the system, which makes the invention simple and easy to implement.
Description of drawings
Fig. 1 is a schematic diagram of prior art method 1, which stores data streams on a board by pre-dividing fixed memory regions.
Fig. 2 is a schematic diagram of prior art method 2, which stores data streams on a board by allocating memory dynamically.
Fig. 3 is a flow chart of storing data streams on a board according to the present invention.
Fig. 4 is a schematic diagram of dividing the board's memory according to the present invention.
Embodiment
To make the purpose, technical solution, and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and specific embodiments.
In a communication system, a traffic model is a set of typical statistics built on running services: the probability combination of the different data stream types appearing per unit time (one second). For example, suppose there are streams of types A, B, and C, where type A is the number of attaching users, type B is the number of activating users, and type C is the number of data packets with an average length of 200 bytes. One traffic model may be: per second, 30 users attach, 60 PDP activations occur, and 300 packets of average length 200 bytes arrive. Another may be: per second, 40 users attach, 20 PDP activations occur, and 800 packets of average length 200 bytes arrive. A traffic model thus expresses the number of streams of each type in actual service, and so reflects the amount of board memory each stream type requires. The present invention divides the board's memory exactly according to the traffic model, achieving flexible configuration of the board's memory.
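The two traffic models above can be sketched as per-second counts keyed by stream type; the field names are illustrative translations, not the patent's notation:

```python
# Traffic models from the description: per-second counts per stream type.
traffic_model_1 = {"attach_users": 30, "pdp_activations": 60, "packets_200b": 300}
traffic_model_2 = {"attach_users": 40, "pdp_activations": 20, "packets_200b": 800}

# The larger a type's count, the more board memory it needs; the ordering
# below is the one a memory division based on model 1 would mirror.
print(sorted(traffic_model_1, key=traffic_model_1.get, reverse=True))
# ['packets_200b', 'pdp_activations', 'attach_users']
```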
Fig. 3 is a flow chart of storing data streams on a board according to the present invention. Referring to Fig. 3, the invention first defines the system parameters of each board with reference to the traffic model and determines the value of each parameter from the traffic model; it then divides the board's memory according to the parameters and their values, and when the board receives a data stream it stores the stream by type. The specific implementation comprises the following steps:
Step 301: define the system parameters of each board according to the traffic model.
Here, the invention runs a performance test on a single data stream of each type in the traffic model to determine whether the board's processing capability is affected as that single stream grows. If it is, the stream type is defined as a system parameter of the board; if not, no parameter is defined.
For example, the stream types a known service processing board must handle include the attached-user count, activated PDP context count, static PDP context count, RNC connection count, and others. Performance-testing a single stream of each type in the traffic model shows that, as a single stream grows, the types that affect the processing capability of the service processing board are the attached-user count, activated PDP context count, and static PDP context count. The invention therefore defines these three as the system parameters of the service processing board: the attached-user count as system parameter 1, the activated PDP context count as system parameter 2, and the static PDP context count as system parameter 3.
Step 302: arrange the defined system parameters in a system parameter table.
Here, the table can take the form shown in Table 1.

                           Parameter 1           Parameter 2                  Parameter 3               Parameter 4            Parameter 5
Service processing board   Attached-user count   Activated PDP context count  Static PDP context count  -                      -
Interface board            Attached-user count   Activated PDP count          -                         RNC connection count   -

Table 1
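Table 1 amounts to a per-board mapping from parameter index to stream type. A minimal sketch, assuming the index assignments given later in the description (RNC connection count as parameter 4); the identifier names are illustrative translations:

```python
# Table 1 as a mapping: board -> {parameter index -> stream type}.
SYSTEM_PARAMETERS = {
    "service_processing_board": {
        1: "attached_users",
        2: "activated_pdp_contexts",
        3: "static_pdp_contexts",
    },
    "interface_board": {
        1: "attached_users",
        2: "activated_pdp_contexts",
        4: "rnc_connections",
    },
}

# Look up which stream type the interface board's parameter 4 denotes.
print(SYSTEM_PARAMETERS["interface_board"][4])  # rnc_connections
```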
Step 303: define the system parameter configuration file of each traffic model according to the system parameter table.
Here, because the traffic models of a given operator are definite and limited in number, the invention can define a system parameter configuration file for each existing traffic model.
When defining the configuration file of a traffic model, first decompose the figures in the traffic model, i.e. the counts of streams of each type, into the value of each corresponding system parameter in the table, then define the configuration file of the traffic model from the decomposed values.
Here, the process of decomposing the traffic model figures into parameter values is: according to the system's maximum concurrent processing requirement, decompose the per-type figures in the traffic model onto each board; derive the maximum concurrency of each stream type by analysis, by testing the influence of a single stream on system performance, and by verification; and then take the derived maximum concurrency of each stream type as the value of the corresponding system parameter. This process is the same as the one used in the prior art, when assessing system performance with a traffic model, to decompose the traffic model figures into each board's capability to handle streams of each type.
The correspondence between traffic model figures and the decomposed parameter values can be illustrated as follows. Traffic model 1 is: per second, 30 users attach and 10 PDP contexts are activated, with static PDP contexts required to be twice the activated PDP contexts; in addition, 5 RNC connections are required. The parameter values determined from the figures in traffic model 1 are then: for the service processing board, the maximum attached-user count allowed is 40000, i.e. system parameter 1 is 40000; the maximum activated PDP context count allowed is 30000, i.e. system parameter 2 is 30000; and the maximum static PDP context count allowed is 60000, i.e. system parameter 3 is 60000. For the interface board, the maximum attached-user count allowed is 60000, i.e. system parameter 1 is 60000, and the maximum RNC connection count allowed is 5, i.e. system parameter 4 is 5. Traffic model 2 is: per second, 30 users attach and 30 PDP contexts are activated, with static PDP contexts required to be five times the activated PDP contexts; in addition, 7 RNC connections are required. The parameter values determined from the per-type counts in traffic model 2 are then: for the service processing board, the maximum attached-user count allowed is 30000, i.e. system parameter 1 is 30000; the maximum activated PDP context count allowed is 30000, i.e. system parameter 2 is 30000; and the maximum static PDP context count allowed is 150000, i.e. system parameter 3 is 150000. For the interface board, the maximum attached-user count allowed is 60000, i.e. system parameter 1 is 60000, and the maximum RNC connection count allowed is 7, i.e. system parameter 4 is 7.
The present invention defines a configuration file for each traffic model. For example, the configuration file of traffic model 1 is the table shown in Table 2a, and that of traffic model 2 is the table shown in Table 2b.

                           Parameter 1   Parameter 2   Parameter 3   Parameter 4   Parameter 5
Service processing board   40000         30000         60000         0             -
Interface board            60000         0             0             5             -

Table 2a

                           Parameter 1   Parameter 2   Parameter 3   Parameter 4   Parameter 5
Service processing board   30000         30000         150000        0             -
Interface board            60000         0             0             7             -

Table 2b
As can be seen from Tables 2a and 2b, the system parameter configuration file of a traffic model in fact gives every system parameter, and its value, required by each board in the system.
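Under the assumption that the configuration file is a per-board table of parameter values, with 0 marking an unused parameter, Table 2a can be sketched as follows (the dictionary layout is illustrative, not the patent's file format):

```python
# Configuration file of traffic model 1 (Table 2a):
# board -> {parameter index -> value}, 0 meaning "parameter unused".
CONFIG_TRAFFIC_MODEL_1 = {
    "service_processing_board": {1: 40000, 2: 30000, 3: 60000, 4: 0},
    "interface_board": {1: 60000, 2: 0, 3: 0, 4: 5},
}

# At start-up the system would deliver to each board only its own row.
print(CONFIG_TRAFFIC_MODEL_1["interface_board"])
# {1: 60000, 2: 0, 3: 0, 4: 5}
```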
After configuration files have been defined for all traffic models used by the operator, if a traffic model changes, the invention defines a configuration file for the changed model, so as to adapt flexibly to changes in the operator's services.
Step 304: configure in the system the system parameter configuration file of the traffic model the system currently uses.
Here, for a given area, the operator uses one fixed traffic model over a period of time, so it suffices to configure the system with the configuration file of the model currently in use. For example, if the operator in the Beijing area currently uses traffic model 1, only the configuration file of traffic model 1 need be configured in the system.
Step 305: when a board starts, the system delivers to it the system parameters, and their values, corresponding to that board in the traffic model configuration file the system has been configured with.
Here, referring to Table 1 and Table 2a, suppose the file configured in the current system is that of traffic model 1. When the service processing board starts, the system delivers to it its parameters and values from the file: attached-user count 40000, activated PDP context count 30000, static PDP context count 60000, and so on. When the interface board starts, the system delivers to it its parameters: attached-user count 60000, RNC connection count 5, and so on.
Step 306: the board divides its own memory according to the received system parameters and their values.
Here, because the value of a system parameter is the maximum number of streams of the corresponding type the board can receive, it represents the memory space the board will need for that stream type in actual service. The larger the value of a received parameter, the more memory the corresponding stream type requires; the smaller the value, the less it requires. The board therefore opens up one storage region per stream type in its memory, sized according to the value of each parameter.
Fig. 4 is a schematic diagram of dividing the board's memory according to the present invention. Referring to Fig. 4, suppose the service processing board receives system parameter 1 with value 40000 (attached-user count 40000), system parameter 2 with value 30000 (activated PDP context count 30000), and system parameter 3 with value 60000 (static PDP context count 60000). The board then opens up three memory regions, one per received parameter: one each for attached-user streams (set as type A), activated-PDP-context streams (set as type B), and static-PDP-context streams (set as type C), namely regions A, B, and C.
When opening a region, the service processing board sizes it according to the value of the corresponding system parameter. Because the board knows the unit memory space each stream it receives requires, the size of a region equals the unit space required by one stream of the type corresponding to the parameter, multiplied by the maximum allowed number of streams of that type, i.e. the parameter's value. For example, when the board opens region A for system parameter 1, i.e. for streams of type A: size of region A = unit space required by one type-A stream x 40000, the value of system parameter 1.
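Step 306 and the sizing rule above can be sketched as laying regions back to back in a flat, byte-addressed board memory; the per-stream unit sizes below are illustrative assumptions, since the patent leaves them board-specific:

```python
UNIT_SPACE = {"A": 128, "B": 256, "C": 64}           # bytes per stream (assumed)
PARAM_VALUES = {"A": 40000, "B": 30000, "C": 60000}  # values from the config file

def divide_memory(unit_space, param_values):
    """Return {stream type: (offset, size)}: one contiguous region per type,
    sized unit space x maximum concurrency, laid out back to back."""
    regions, offset = {}, 0
    for t in sorted(param_values):
        size = unit_space[t] * param_values[t]
        regions[t] = (offset, size)
        offset += size
    return regions

regions = divide_memory(UNIT_SPACE, PARAM_VALUES)
print(regions["A"])  # (0, 5120000): region A starts at 0 and spans 128 * 40000 bytes
```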
Step 307: when the board receives a data stream, it stores the stream in the memory region corresponding to the stream's type.
Here, referring to Fig. 4, when the board receives a stream of type A it stores the stream in region A, and when it receives a stream of type B it stores the stream in region B.
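The store-by-type behavior of step 307 can be sketched as a lookup into the region map built at start-up; the in-memory store below is simulated with a dict rather than real board memory, and unknown types are rejected as an assumed error-handling choice:

```python
def store_stream(regions, memory, stream_type, payload):
    """Store payload in the region for stream_type; return that region's
    base offset. regions maps type -> (offset, size)."""
    if stream_type not in regions:
        raise KeyError(f"no memory region for stream type {stream_type!r}")
    memory.setdefault(stream_type, []).append(payload)
    return regions[stream_type][0]

regions = {"A": (0, 5120000), "B": (5120000, 7680000)}
memory = {}
print(store_stream(regions, memory, "A", b"attach-request"))  # 0
print(store_stream(regions, memory, "B", b"pdp-activate"))    # 5120000
```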
At this point, the invention has completed dividing the board's memory according to the traffic model and storing data streams reasonably in the divided memory regions.
In short, the above is only a preferred embodiment of the present invention and is not intended to limit its scope of protection. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (4)

1. A method for storing data streams on a single board, characterized in that the method comprises the following steps:
A. determining the data stream types that affect the processing capability of the board, and deriving from a traffic model the maximum number of concurrently processed streams of each determined type, wherein
determining the data stream types that affect the board's processing capability comprises: running a performance test on a single data stream of one type in the traffic model, and judging whether the board's processing capability is affected as that single stream grows; if so, determining the type to be one that affects the board's processing capability;
deriving from the traffic model the maximum number of concurrently processed streams of each determined type comprises: according to the system's maximum concurrent processing requirement, decomposing the per-type figures in the traffic model onto each board, and then deriving the maximum concurrency of each determined type by analysis, by testing the influence on system performance of a single stream of the determined type, and by verification;
B. the board dividing its own memory into different memory regions according to the determined data stream types, and sizing the region corresponding to each type according to the maximum concurrency of that type;
C. when the board receives a data stream, storing the stream in the memory region corresponding to its type.
2. The method according to claim 1, characterized in that, in step B, sizing the region corresponding to a type according to the maximum concurrency of that type comprises: multiplying the unit memory space required by one stream of the type by the maximum concurrency of the type determined in step A, yielding the size of the memory region corresponding to the type.
3. The method according to claim 1, wherein in step B, the single board performs the steps of dividing the memory areas and determining their sizes at startup.
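The scheme of claims 1 to 3 can be sketched as a minimal model: at startup the board partitions its memory into one area per stream type, each sized by the claim-2 formula (unit space per stream × maximum concurrent streams), and each received stream is stored in the area matching its type. The stream types, unit sizes, and concurrency figures below are hypothetical placeholders for illustration, not values taken from the patent.

```python
# Illustrative model of the claimed method: per-type memory areas,
# sized as unit_space_per_stream * max_concurrent_streams (claim 2),
# created once at startup (claim 3); incoming streams are stored in
# the area for their type (step C of claim 1).

# Hypothetical traffic-model results: type -> (unit bytes, max concurrent)
TRAFFIC_MODEL = {
    "signaling": (512, 2000),
    "media": (4096, 500),
}

class Board:
    def __init__(self, traffic_model):
        # Startup-time partitioning: one memory area per stream type.
        self.areas = {}
        for stream_type, (unit, max_concurrent) in traffic_model.items():
            self.areas[stream_type] = {
                "capacity": unit * max_concurrent,  # claim-2 sizing
                "used": 0,
                "streams": [],
            }

    def store(self, stream_type, payload):
        # Step C: place the stream in the area matching its type.
        area = self.areas[stream_type]
        if area["used"] + len(payload) > area["capacity"]:
            raise MemoryError(f"memory area for {stream_type!r} is full")
        area["streams"].append(payload)
        area["used"] += len(payload)

board = Board(TRAFFIC_MODEL)
board.store("signaling", b"\x00" * 512)
print(board.areas["signaling"]["capacity"])  # 512 * 2000 = 1024000
print(board.areas["signaling"]["used"])      # 512
```

Because each area is bounded by the traffic model's maximum concurrency, one stream type filling up cannot consume the memory reserved for the others, which is the stability benefit the abstract claims.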
4. The method according to claim 1, further comprising, between step A and step B: defining each determined data stream type as a system parameter of the single board, defining the derived maximum number of concurrently processed streams as the value of that system parameter, arranging all system parameters and their values in a system parameter configuration file, and then deploying this configuration file in the system;
wherein step B comprises:
B1. the single board obtaining its corresponding system parameters and their values from the system parameter configuration file deployed in the system;
B2. the single board dividing its own memory into different memory areas according to the obtained system parameters;
B3. the single board determining the size of the memory area corresponding to each system parameter according to the value of that system parameter.
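The configuration-file variant of claim 4 (steps B1 to B3) might look like the following sketch. The file format, section name, parameter names, and unit sizes are assumptions made for illustration; the patent does not specify a file syntax.

```python
# Illustrative sketch of claim 4: stream types become system parameters
# whose values are the maximum concurrent stream counts; at startup the
# board reads its parameters from the deployed configuration file and
# sizes one memory area per parameter (value * unit space per stream).
import configparser

# Hypothetical system parameter configuration file deployed in the system.
CONFIG_TEXT = """
[board1]
signaling = 2000
media = 500
"""

def init_areas_from_config(config_text, board_name, unit_sizes):
    # B1: obtain this board's system parameters and their values.
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    params = parser[board_name]
    # B2/B3: one memory area per parameter, sized value * unit space.
    return {name: int(value) * unit_sizes[name]
            for name, value in params.items()}

areas = init_areas_from_config(CONFIG_TEXT, "board1",
                               unit_sizes={"signaling": 512, "media": 4096})
print(areas)
```

Keeping the concurrency figures in a deployed configuration file rather than in board firmware means the partition sizes can be retuned when the traffic model changes, without rebuilding the board software.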
CNB200410074119XA 2004-08-31 2004-08-31 Method for storing data streams on a single board Expired - Fee Related CN100539713C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200410074119XA CN100539713C (en) 2004-08-31 2004-08-31 Method for storing data streams on a single board

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB200410074119XA CN100539713C (en) 2004-08-31 2004-08-31 Method for storing data streams on a single board

Publications (2)

Publication Number Publication Date
CN1744725A CN1744725A (en) 2006-03-08
CN100539713C true CN100539713C (en) 2009-09-09

Family

ID=36139878

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200410074119XA Expired - Fee Related CN100539713C (en) 2004-08-31 2004-08-31 Method for storing data streams on a single board

Country Status (1)

Country Link
CN (1) CN100539713C (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101153883B (en) * 2006-09-29 2011-01-12 上海雷迅防雷技术有限公司 Method for data access of surge protector and monitoring system of surge protector
CN101876922B (en) * 2009-10-30 2012-12-19 北京锐安科技有限公司 Method for improving system stability
CN101950273B (en) * 2010-09-26 2012-05-23 中国人民解放军海军工程大学 Dynamic management method of DSP data area
CN102457389A (en) * 2010-10-18 2012-05-16 中兴通讯股份有限公司 Single board monitoring method and system
CN103678160B (en) * 2012-08-30 2017-12-05 腾讯科技(深圳)有限公司 A kind of method and apparatus of data storage
CN104158680B (en) * 2014-06-18 2018-03-06 大唐移动通信设备有限公司 Alert buffer area processing method and equipment
CN107302505B (en) * 2017-06-22 2019-10-29 迈普通信技术股份有限公司 Manage the method and device of caching
CN109656836A (en) * 2018-12-24 2019-04-19 新华三技术有限公司 A kind of data processing method and device
CN110928580B (en) * 2019-10-23 2022-06-24 北京达佳互联信息技术有限公司 Asynchronous flow control method and device

Also Published As

Publication number Publication date
CN1744725A (en) 2006-03-08

Similar Documents

Publication Publication Date Title
CN106209506B (en) A kind of virtualization deep-packet detection flow analysis method and system
US7372857B1 (en) Methods and apparatus for scheduling tasks
CN102236581B (en) Mapping reduction method and system thereof for data center
CN105101231B (en) A kind of deployment of LTE network method and device
CN100539713C (en) Method for storing data streams on a single board
CN109446385B (en) Method for establishing network resource equipment map and using method of equipment map
CN106446170A (en) Data querying method and device
CN111507479B (en) Feature binning method, device, equipment and computer-readable storage medium
CN103853838B (en) A kind of data processing method and device
CN106933836A (en) A kind of date storage method and system based on point table
Hua et al. A multi-attribute data structure with parallel bloom filters for network services
CN110134430A (en) A kind of data packing method, device, storage medium and server
CN101187949B (en) A high-capacity data statistical method and system
CN110472109A (en) Mobilism Data Quality Analysis method and plateform system
CN107682446A (en) A kind of message mirror-image method, device and electronic equipment
CN107172193A (en) A kind of load-balancing method and its device based on cluster
CN100488173C (en) A method for carrying out automatic selection of packet classification algorithm
CN112527836A (en) Big data query method based on T-BOX platform
CN103595800B (en) A kind of real-time distribution processor device and method of data
CN108259567B (en) Large data center service subsystem discovery method based on server application logic
CN115834478A (en) Method for realizing PBR high-speed forwarding by using TCAM
CN114912815A (en) Index automatic definition method, system and storage medium based on big data wide table
CN107483508A (en) Message filtering method, device, equipment and storage medium
CN103475686B (en) Communication data distribution system and communication data distribution method for electric analog
CN107295485A (en) Multimedia message accessory management method, device and communication system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090909

Termination date: 20140831

EXPY Termination of patent right or utility model