CN102129396B - Real-time and high-speed inter-thread data exchange method - Google Patents

Real-time and high-speed inter-thread data exchange method

Info

Publication number
CN102129396B
CN102129396B CN2011100519715A CN201110051971A
Authority
CN
China
Prior art keywords
data
cache blocks
thread
cache
inter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2011100519715A
Other languages
Chinese (zh)
Other versions
CN102129396A (en)
Inventor
王永炎
骆小芳
罗雄飞
刘洋
王宏安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anjie Zhongke (Beijing) data Technology Co.,Ltd.
Original Assignee
Institute of Software of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Software of CAS filed Critical Institute of Software of CAS
Priority to CN2011100519715A priority Critical patent/CN102129396B/en
Publication of CN102129396A publication Critical patent/CN102129396A/en
Application granted granted Critical
Publication of CN102129396B publication Critical patent/CN102129396B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an inter-thread data exchange method in the field of information technology, comprising the following steps: 1) a data caching structure containing a plurality of data cache blocks is set up in memory shared by a data-providing thread and a data-consuming thread; 2) the data-providing thread places data into the data cache blocks by calling a data-putting interface; and 3) the data-consuming thread reads data from the data cache blocks by calling a data-acquisition interface, realizing high-speed data exchange. The method simultaneously guarantees the real-time performance of inter-thread data exchange and the throughput of massive inter-thread data exchange; it supports the exchange of massive data between threads within a process, and in sporadic data exchange the data can be exchanged within an extremely short time.

Description

A real-time, high-speed inter-thread data exchange method
Technical field
The present invention relates to the technical field of information processing, and in particular to a method for exchanging data between threads; it can be applied to software development in the research and development of various software systems.
Background art
In the field of information technology, software development involves many techniques, including in-memory data organization and management, message communication, indexing techniques, inter-process data exchange, inter-thread data exchange and so on. The present invention focuses on inter-thread data exchange in software development. The research and development of certain software systems involves exchanging data between threads with high requirements on real-time performance and efficiency, for example real-time database management systems, relational database management systems and middleware.
Because the in-memory data of different processes cannot be accessed directly by one another, data exchange between processes is relatively complicated; common approaches include shared memory, dynamic data exchange (DDE), data pipes, sockets and so on. Compared with inter-process data exchange, inter-thread data exchange is simpler, since data can be exchanged by directly accessing shared memory; what requires special attention is the organization and management of the in-memory data and the memory synchronization mechanism. If there is no performance requirement on inter-thread data exchange, data can simply be exchanged through variables or memory blocks according to the needs of the software system, with synchronization realized by events (Event), critical sections (Critical Section), mutexes (Mutex) or semaphores (Semaphore). However, for software systems with very high performance requirements, the in-memory data organization and the synchronization mechanism must be designed carefully so that inter-thread data exchange is performed in real time, satisfying the system's demand for inter-thread data exchange and thereby improving the overall performance of the software system.
The problem of exchanging data between threads is commonly encountered in the development of software systems. If the amount of data to be exchanged is small, a few simple variables, memory blocks and mutexes suffice; but software systems with higher performance requirements impose both real-time and throughput demands on inter-thread data exchange. When sporadic data is exchanged, each item must complete the exchange within a very short time; when massive data is exchanged, the exchange must sustain a very high throughput. In the development of such systems, developers usually have to spend considerable time designing and implementing inter-thread data exchange, and the final result does not necessarily meet the system's real-time and throughput requirements.
Summary of the invention
Aiming at the real-time and performance requirements that real-time database management systems, relational database management systems and middleware place on inter-thread data exchange, the present invention designs and proposes a real-time, high-speed inter-thread data exchange method.
For this class of technical problem, the present invention proposes a general, real-time technical solution for inter-thread data exchange and gives the concrete implementation details of an inter-thread data exchange component. On the one hand this greatly saves developers' time; on the other hand it satisfies the real-time and throughput requirements that software systems place on inter-thread data exchange.
A real-time, high-speed inter-thread data exchange method comprises the following steps:
1) A data caching structure is set up in memory shared by the data-providing thread and the data-consuming thread; the data caching structure internally comprises a plurality of data cache blocks.
2) The data-providing thread places data into the data cache blocks by calling a data-putting interface.
3) The data-consuming thread reads data from the data cache blocks by calling a data-acquisition interface, realizing high-speed data exchange.
The data caching structure comprises n+1 cache blocks, of which n cache blocks are used cyclically for putting in data; the (n+1)-th cache block, called the replacement block, is swapped with a cache block holding data when data is acquired.
A buffer-available event describes whether the data caching structure still contains a cache block that is not full and can accept data; a data-available event describes whether the data caching structure contains a cache block holding data that can be acquired; an operation critical section synchronizes the data-providing thread and the data-consuming thread when they access the n cache blocks.
When the data-providing thread calls the data-putting interface, it first enters the operation critical section and checks whether the current cache block still has enough space for the data; if the space is sufficient, the data is put into the current cache block, the operation critical section is left, and the put succeeds.
If the current cache block does not have enough space, the data-available event is triggered and the next cache block is examined; if the next cache block is empty, the current cache block is advanced to the next cache block and the data is put into the current cache block.
If the next cache block is not empty, the thread waits for the buffer-available event, re-enters the operation critical section, and puts the data into a cache block.
The wait on the buffer-available event uses a configurable timeout; if the wait times out, the put fails.
When the data-consuming thread calls the data-acquisition interface, it first waits for the data-available event and then enters the operation critical section; if no cache block holding data is found, the data-available event is reset, the operation critical section is left, and the acquisition fails.
If a cache block holding data is found, the replacement block is emptied and swapped with a cache block, other than the replacement block, that holds data. If afterwards no cache block other than the replacement block holds data, the data-available event is reset.
Finally the buffer-available event is triggered, the operation critical section is left, and the acquisition succeeds.
(3) Beneficial effects
From the above technical solution it can be seen that the present invention has the following beneficial effects:
1. The present invention proposes a data exchange method that acquires data in batches: only when a cache block has become full, or after the wait on the data-available event has timed out, does the data-consuming thread read the data in the cache block. In addition, the invention acquires the data of a cache block by swapping cache blocks, which effectively reduces the time for which the data-consuming thread occupies the operation critical section, thereby reducing the waiting time of the data-providing thread, improving concurrency and greatly improving the performance of data exchange.
2. The present invention also guarantees the real-time performance of sporadic data exchange. The timeout with which the data-consuming thread waits on the data-available event before reading data is p milliseconds; in the worst case the wait times out and the exchange delay is then p milliseconds. Therefore, for sporadic data exchange the maximum exchange delay is no greater than p milliseconds.
3. The invention provides two concise interfaces, one for putting data in and one for acquiring data, supporting various types of software systems and in particular satisfying the demands of high-performance software systems with real-time requirements.
Description of drawings
Fig. 1 is a schematic diagram of the structure of the data exchange component of the present invention;
Fig. 2 is a flowchart of the data-putting operation in the data exchange method of the present invention;
Fig. 3 is a flowchart of the data-acquisition operation in the data exchange method of the present invention;
Fig. 4 is a flowchart of the initialization method of the data exchange component.
Embodiment
The method of the present invention is described in detail below with reference to the drawings and a specific embodiment.
As shown in Fig. 1, the inter-thread data exchange component that implements the method of the present invention consists of three main parts: 1. the data caching structure; 2. the data-putting interface; 3. the data-acquisition interface. The data-providing thread places data into the cache by calling the data-putting interface, and the data-consuming thread reads data from the cache by calling the data-acquisition interface; the cache internally holds a plurality of data cache blocks, realizing high-speed data exchange. Note that the inter-thread data exchange component supports only unidirectional data exchange; to support bidirectional exchange, two inter-thread data exchange components are needed, one for each direction.
In the data caching structure of the inter-thread data exchange component, n+1 cache blocks of size blocksize are pre-allocated, where n and blocksize are configurable parameters. Of these n+1 cache blocks, n are used cyclically for putting in data. When the data-providing thread calls the data-putting interface: if the remaining space of the current cache block is larger than the size of the data to be put in, the data is simply put into the current cache block; otherwise the next cache block is examined. If the next cache block is empty, the current cache block is advanced to the next cache block and the data is put into the current cache block; if not, the thread must wait for the data-consuming thread to take data so that the next cache block becomes empty, after which the current cache block is advanced to the next cache block and the data is put in. The remaining cache block is called the replacement block and is swapped with a cache block holding data when data is acquired. When the data-consuming thread calls the data-acquisition interface, the replacement block is empty. Suppose the cache block that has held data the longest among the n cache blocks is cache block k; cache block k is then swapped with the replacement block, so that the replacement block comes to hold the oldest data and cache block k takes over the (empty) contents of the replacement block. The data-consuming thread can then directly access the replacement block and take the data in it.
Because the data-providing thread and the data-consuming thread access shared memory, an efficient inter-thread communication and synchronization mechanism must be provided. To this end, the method of the present invention defines three variables: the buffer-available event, the data-available event, and the operation critical section. The buffer-available event describes whether the inter-thread data exchange component contains a cache block that is not full and can accept data; the data-available event describes whether the component contains a cache block holding data that can be acquired; the operation critical section synchronizes the data-providing thread and the data-consuming thread when they access the n cache blocks.
The initialization procedure of the inter-thread data exchange component is as follows: first, according to the configuration parameter n, n+1 cache blocks are allocated; second, the existing data size of each cache block is set to 0, marking it as an empty cache block; third, the last cache block is designated as the replacement block; finally, the buffer-available event and the data-available event are created and the operation critical section is initialized.
When the data-providing thread calls the data-putting interface, it first enters the operation critical section and checks whether the current cache block still has enough space for the data. If the space is sufficient, the data is put into the current cache block, the operation critical section is left, and success is returned. Otherwise, the data-available event is triggered and the next cache block is examined. If the next cache block is empty, the current cache block is advanced to the next cache block, the data is put into the current cache block, the operation critical section is left and success is returned. Otherwise, the buffer-available event is reset, the operation critical section is left, and the thread waits for the buffer-available event with the configured timeout; if the wait times out, failure is returned; otherwise the thread re-enters the operation critical section, puts the data into a cache block, and returns success.
When the data-consuming thread calls the data-acquisition interface, it first waits for the data-available event with the configured timeout. Whether the wait times out or succeeds, it then enters the operation critical section and checks whether any cache block holds data. If there is a cache block holding data, the replacement block is emptied and swapped with the cache block that has held data the longest; if no cache block other than the replacement block holds data afterwards, the data-available event is reset; finally the buffer-available event is triggered, the operation critical section is left, and the pointer to the replacement block is returned. If no cache block holds data, the data-available event is reset and a null pointer is returned.
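Before the implementation details, a minimal usage sketch illustrates how a data-providing thread and a data-consuming thread would call the two interfaces. It assumes the CDataExchangePipe class, the PutData and GetDataExchangeBlock methods and the DataExchangeBlock structure given in the implementation below; the record contents, timeouts and thread setup are purely illustrative.

#include <windows.h>
#include <cstdio>

// Assumes the CDataExchangePipe class and DataExchangeBlock structure
// sketched in the implementation section below.
CDataExchangePipe g_pipe(1000);   // 1000 cache blocks plus one replacement block

// Data-providing thread: puts small records into the component.
DWORD WINAPI ProviderThread(LPVOID)
{
    char record[] = "sample record";
    for (int i = 0; i < 100000; ++i)
        g_pipe.PutData(record, (WORD)sizeof(record), 10);   // 10 ms put timeout
    return 0;
}

// Data-consuming thread: reads whole cache blocks in batches.
DWORD WINAPI ConsumerThread(LPVOID)
{
    for (;;)
    {
        // p = 5 ms wait, which bounds the exchange delay for sporadic data.
        DataExchangeBlock* pBlock = g_pipe.GetDataExchangeBlock(5);
        if (pBlock != NULL)
            printf("received %u bytes\n", (unsigned)pBlock->wDataLen);
    }
}

int main()
{
    CreateThread(NULL, 0, ConsumerThread, NULL, 0, NULL);
    HANDLE hProvider = CreateThread(NULL, 0, ProviderThread, NULL, 0, NULL);
    WaitForSingleObject(hProvider, INFINITE);   // the consumer runs until the process exits
    return 0;
}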
1. The implementation of the inter-thread data exchange component is as follows:
Definition of the cache-block structure:
[Code listing reproduced as an image in the original publication: definition of the cache-block structure.]
Here wDataLen is the length of the data already put into the cache block; if it is 0, the cache block is empty. lpData is the memory region storing the data, and DATA_EXCHANGE_BLOCK_SIZE is the cache-block size parameter, which can be defined according to the requirements of the software system. Note that the structure of the replacement block is identical to that of the cache block.
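For reference, a minimal C++ sketch of a cache-block structure consistent with this description follows. The field names wDataLen and lpData and the constant DATA_EXCHANGE_BLOCK_SIZE are taken from the text; the concrete block size and the use of a fixed-size array (rather than a separately allocated region) are assumptions, since the original listing is reproduced only as an image.

#include <windows.h>   // WORD, BYTE

// Cache-block size parameter; the value here is an assumption and would be
// chosen according to the requirements of the software system.
#define DATA_EXCHANGE_BLOCK_SIZE 4096

// One cache block: wDataLen is the length of the data already put into the
// block (0 means the block is empty); lpData is the memory region that
// stores the data. The replacement block uses the same structure.
struct DataExchangeBlock
{
    WORD wDataLen;
    BYTE lpData[DATA_EXCHANGE_BLOCK_SIZE];
};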
The inter-thread data exchange component is defined as follows:
[Code listing reproduced as an image in the original publication: definition of the CDataExchangePipe class.]
Four public methods are defined. CDataExchangePipe and ~CDataExchangePipe are the constructor and the destructor; PutData is the data-putting interface method, called by the data-providing thread; GetDataExchangeBlock is the data-acquisition interface method, called by the data-consuming thread.
In addition, eight private member variables are defined. m_lDataBufferCount is the number of cache blocks (not counting the replacement block), i.e. n; m_pBuffer is the pointer to the large memory block holding the m_lDataBufferCount cache blocks and the one replacement block; m_ppDataBuffers is the array of cache-block pointers, whose (m_lDataBufferCount+1)-th element is the replacement-block pointer; m_hDataAvailableEvent is the handle of the data-available event; m_hBufferAvailableEvent is the handle of the buffer-available event; m_CriticalSection is the operation critical section; m_lFirstIndex is the index of the earliest cache block; m_lCurrentIndex is the index of the current cache block.
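A sketch of a class declaration matching the four public methods and eight private member variables just listed is shown below. The Win32 types HANDLE and CRITICAL_SECTION follow from the event and critical-section terminology used throughout the description; the exact declaration order and types are assumptions, since the original listing is reproduced only as an image.

#include <windows.h>

struct DataExchangeBlock;   // cache-block structure as sketched above

class CDataExchangePipe
{
public:
    CDataExchangePipe(long lDataBufferCount = 1000);   // constructor: initialization method
    ~CDataExchangePipe();                              // destructor: releases memory and handles

    // Data-putting interface method, called by the data-providing thread.
    BOOL PutData(void* pData, WORD wDataLen, DWORD nTimeOut);

    // Data-acquisition interface method, called by the data-consuming thread;
    // returns the replacement-block pointer, or NULL if no data was obtained.
    DataExchangeBlock* GetDataExchangeBlock(DWORD nTimeOut);

private:
    long                m_lDataBufferCount;      // number of cache blocks n (replacement block excluded)
    BYTE*               m_pBuffer;               // large memory block holding n cache blocks + 1 replacement block
    DataExchangeBlock** m_ppDataBuffers;         // pointer array; element m_lDataBufferCount is the replacement block
    HANDLE              m_hDataAvailableEvent;   // data-available event handle
    HANDLE              m_hBufferAvailableEvent; // buffer-available event handle
    CRITICAL_SECTION    m_CriticalSection;       // operation critical section
    long                m_lFirstIndex;           // index of the earliest cache block
    long                m_lCurrentIndex;         // index of the current cache block
};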
2. Referring to Fig. 4, the initialization method of the inter-thread data exchange component of the present invention is implemented as follows:
The constructor of CDataExchangePipe serves as the initialization method and is declared as follows:
CDataExchangePipe(long lDataBufferCount = 1000);
The parameter lDataBufferCount is the number of cache blocks in the inter-thread data exchange component.
The implementation of the initialization method is as follows:
[Code listing reproduced as an image in the original publication: implementation of the constructor.]
Line 1 sets the number of cache blocks m_lDataBufferCount. Line 2 allocates one large memory block whose size is that of m_lDataBufferCount+1 cache blocks. Line 3 allocates the pointer array, which can hold m_lDataBufferCount+1 block pointers; the first m_lDataBufferCount pointers are cache-block pointers and the last pointer is the replacement-block pointer. Lines 4-10 initialize the pointer array to point at the corresponding positions of the large memory block and set the data size of every cache block and of the replacement block to 0. Lines 11 and 12 create the data-available event m_hDataAvailableEvent and the buffer-available event m_hBufferAvailableEvent respectively. Lines 13 and 14 initialize m_lFirstIndex and m_lCurrentIndex to zero, i.e. both the earliest cache block and the current cache block are set to the first (empty) cache block. Line 15 initializes the operation critical section m_CriticalSection.
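A sketch of a constructor following the fifteen lines described above is given below, assuming the class and structure sketched earlier. The comment numbers refer to the lines of the original listing; the choice of manual-reset events created in the non-signalled state is an assumption, since the original code is reproduced only as an image.

CDataExchangePipe::CDataExchangePipe(long lDataBufferCount)
{
    // Line 1: set the number of cache blocks.
    m_lDataBufferCount = lDataBufferCount;

    // Line 2: allocate one large memory block sized for n cache blocks plus the replacement block.
    m_pBuffer = new BYTE[(m_lDataBufferCount + 1) * sizeof(DataExchangeBlock)];

    // Line 3: allocate the pointer array for n cache-block pointers plus the replacement-block pointer.
    m_ppDataBuffers = new DataExchangeBlock*[m_lDataBufferCount + 1];

    // Lines 4-10: point each entry at its position in the large block and mark every block empty.
    for (long i = 0; i <= m_lDataBufferCount; ++i)
    {
        m_ppDataBuffers[i] = reinterpret_cast<DataExchangeBlock*>(
            m_pBuffer + i * sizeof(DataExchangeBlock));
        m_ppDataBuffers[i]->wDataLen = 0;
    }

    // Lines 11-12: create the data-available and buffer-available events
    // (manual-reset, initially non-signalled; this is an assumption).
    m_hDataAvailableEvent   = CreateEvent(NULL, TRUE, FALSE, NULL);
    m_hBufferAvailableEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

    // Lines 13-14: both the earliest block and the current block start at the first (empty) block.
    m_lFirstIndex   = 0;
    m_lCurrentIndex = 0;

    // Line 15: initialize the operation critical section.
    InitializeCriticalSection(&m_CriticalSection);
}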
3. Implementation of the data-putting interface method
The data-putting interface method is declared as follows:
BOOL PutData(void* pData, WORD wDataLen, DWORD nTimeOut);
Here pData is the memory address of the data to be put in, wDataLen is the size of the data, and nTimeOut is the wait timeout. The return value of this interface is a Boolean: TRUE means the put succeeded, otherwise the put failed.
The implementation of the data-putting interface method is as follows:
[Code listing reproduced as an image in the original publication: implementation of PutData.]
In this method, bRes and bTimeOut are first initialized to FALSE, indicating that the data has not yet been placed and that no timeout has occurred. Line 5 enters the operation critical section. Line 6 checks whether the current cache block has enough space for the data to be put in; if the space is sufficient, execution jumps to line 21 and the data is put into the current cache block; otherwise the data-available event is triggered first, notifying the data-consuming thread that data can be taken. When the current cache block does not have enough space, line 9 checks whether the next cache block is the earliest cache block. If it is, there is no empty cache block left, so lines 11 and 12 reset the buffer-available event and leave the operation critical section, and line 13 waits for the buffer-available event; if the wait times out, the loop condition at line 25 becomes false and line 26 returns a put failure, while if the wait succeeds execution continues from line 5. If line 9 finds that the next cache block is not the earliest cache block, the next cache block is empty, so line 18 advances the current cache block to the next cache block and execution continues at line 21 to put the data into the current cache block. Line 21 copies the data into the current cache block; line 22 increases the data size of the current block by the size of the newly put data; line 23 leaves the operation critical section; line 24 assigns TRUE to bRes, indicating that the put succeeded. See Fig. 2 for the data-putting operation flow.
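The following sketch reconstructs the control flow just described, under the same assumptions as the earlier sketches; since the original listing is only an image, details such as the modulo wrap-around of the block index are inferred from the cyclic use of the n cache blocks.

#include <cstring>   // memcpy (in addition to <windows.h>)

BOOL CDataExchangePipe::PutData(void* pData, WORD wDataLen, DWORD nTimeOut)
{
    BOOL bRes = FALSE;       // data not placed yet
    BOOL bTimeOut = FALSE;   // no timeout yet

    while (!bRes && !bTimeOut)   // loop condition checked at line 25 of the original listing
    {
        EnterCriticalSection(&m_CriticalSection);                      // line 5
        DataExchangeBlock* pCurrent = m_ppDataBuffers[m_lCurrentIndex];

        if (pCurrent->wDataLen + wDataLen > DATA_EXCHANGE_BLOCK_SIZE)  // line 6: not enough space
        {
            SetEvent(m_hDataAvailableEvent);   // notify the consumer that a block can be read

            long lNext = (m_lCurrentIndex + 1) % m_lDataBufferCount;
            if (lNext == m_lFirstIndex)        // line 9: no empty block left
            {
                ResetEvent(m_hBufferAvailableEvent);                   // lines 11-12
                LeaveCriticalSection(&m_CriticalSection);
                if (WaitForSingleObject(m_hBufferAvailableEvent, nTimeOut) != WAIT_OBJECT_0)
                    bTimeOut = TRUE;           // line 13 timed out: give up
                continue;                      // otherwise retry from line 5
            }

            m_lCurrentIndex = lNext;           // line 18: advance to the empty block
            pCurrent = m_ppDataBuffers[m_lCurrentIndex];
        }

        // Lines 21-24: copy the data, update the block size, leave the critical section.
        memcpy(pCurrent->lpData + pCurrent->wDataLen, pData, wDataLen);
        pCurrent->wDataLen = (WORD)(pCurrent->wDataLen + wDataLen);
        LeaveCriticalSection(&m_CriticalSection);
        bRes = TRUE;
    }
    return bRes;
}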
4. Implementation of the data-acquisition interface method
The data-acquisition interface method is declared as follows:
DataExchangeBlock* GetDataExchangeBlock(DWORD nTimeOut);
Here nTimeOut is the wait timeout. The return value of this interface is the replacement-block pointer; if the return value is non-null, the data in the replacement block can be accessed, otherwise the acquisition failed.
The implementation of the data-acquisition interface method is as follows:
In this method, line 1 waits for the data-available event; whether or not the wait succeeds, line 2 enters the operation critical section. Line 3 checks whether the data size of the earliest cache block is zero; if it is zero, lines 5-7 reset the data-available event, leave the operation critical section and return a null pointer; otherwise the acquisition proceeds. Line 10 sets the data size of the replacement block to zero, i.e. empties the replacement block; lines 11-13 swap the replacement block with the earliest cache block. Line 14 checks whether the current cache block equals the earliest cache block; if they are equal, all data has been taken, so lines 16-18 restore both the current cache block and the earliest cache block to the initial state, namely the first cache block, and reset the data-available event; if they are not equal, the earliest cache block is advanced to the next cache block. Line 24 triggers the buffer-available event; line 25 leaves the operation critical section; line 26 returns the replacement-block pointer. See Fig. 3 for the data-acquisition operation flow.
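A matching sketch of the data-acquisition interface follows, again with comment numbers referring to the lines of the original listing; the way the earliest-block index advances after the swap is inferred from the description and should be read as an assumption.

DataExchangeBlock* CDataExchangePipe::GetDataExchangeBlock(DWORD nTimeOut)
{
    WaitForSingleObject(m_hDataAvailableEvent, nTimeOut);   // line 1: wait up to nTimeOut ms
    EnterCriticalSection(&m_CriticalSection);               // line 2: enter regardless of the wait result

    if (m_ppDataBuffers[m_lFirstIndex]->wDataLen == 0)      // line 3: the earliest block is empty
    {
        ResetEvent(m_hDataAvailableEvent);                  // lines 5-7: no data, report failure
        LeaveCriticalSection(&m_CriticalSection);
        return NULL;
    }

    // Line 10: empty the replacement block; lines 11-13: swap it with the earliest block.
    DataExchangeBlock* pReplacement = m_ppDataBuffers[m_lDataBufferCount];
    pReplacement->wDataLen = 0;
    m_ppDataBuffers[m_lDataBufferCount] = m_ppDataBuffers[m_lFirstIndex];
    m_ppDataBuffers[m_lFirstIndex] = pReplacement;

    if (m_lCurrentIndex == m_lFirstIndex)   // line 14: the block just taken was the current block
    {
        // Lines 16-18: all data has been taken; restore both indices to the first block
        // and clear the data-available event.
        m_lCurrentIndex = 0;
        m_lFirstIndex = 0;
        ResetEvent(m_hDataAvailableEvent);
    }
    else
    {
        m_lFirstIndex = (m_lFirstIndex + 1) % m_lDataBufferCount;   // advance the earliest block
    }

    SetEvent(m_hBufferAvailableEvent);          // line 24: a block has been freed
    LeaveCriticalSection(&m_CriticalSection);   // line 25
    return m_ppDataBuffers[m_lDataBufferCount]; // line 26: block now holding the oldest data
}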

Claims (5)

1. A real-time, high-speed inter-thread data exchange method, comprising the following steps:
1) a data caching structure is set up in memory shared by a data-providing thread and a data-consuming thread, the data caching structure internally comprising a plurality of data cache blocks; the inter-thread data exchange component of the data exchange method comprises three parts: the data caching structure, a data-putting interface and a data-acquisition interface; the data-providing thread places data into the cache by calling the data-putting interface, and the data-consuming thread reads data from the cache by calling the data-acquisition interface;
the data exchange method defines three variables: a buffer-available event, a data-available event and an operation critical section; the buffer-available event describes whether the inter-thread data exchange component contains a cache block that is not full and can accept data; the data-available event describes whether the inter-thread data exchange component contains a cache block holding data that can be acquired; the operation critical section is used to synchronize the data-providing thread and the data-consuming thread when they access the n cache blocks;
2) the data-providing thread places data into the data cache blocks of the data caching structure by calling the data-putting interface; the data caching structure comprises n+1 cache blocks, of which n cache blocks are used cyclically for putting in data, and the (n+1)-th cache block is designated as a replacement block, to be swapped with a cache block holding data when data is acquired; when the data-providing thread calls the data-putting interface, it enters the operation critical section and checks whether the current cache block still has enough space for the data; if the space is sufficient, the data is put into the current cache block and the operation critical section is left; if the current cache block does not have enough space, the data-available event is triggered, and if the next cache block is empty the current cache block is advanced to the next cache block and the data is put into the current cache block; if the next cache block is not empty, the thread waits for the buffer-available event, re-enters the operation critical section and puts the data into a cache block;
3) the data-consuming thread reads data from the data cache blocks by calling the data-acquisition interface, realizing high-speed data exchange through block replacement;
the replacement consists of taking the cache block among the n cache blocks that has held data the longest, denoted cache block k, and swapping its contents with the replacement block, so that the replacement block comes to hold the oldest data and cache block k becomes the replacement block, whose contents are empty.
2. The real-time, high-speed inter-thread data exchange method of claim 1, wherein the wait on the buffer-available event uses a configurable timeout, and if the wait times out the put fails.
3. The real-time, high-speed inter-thread data exchange method of claim 1, wherein, when the data-consuming thread calls the data-acquisition interface, it first waits for the data-available event and enters the operation critical section; if a cache block holding data is found, the replacement block is emptied and swapped with a cache block, other than the replacement block, that holds data; finally the buffer-available event is triggered, the operation critical section is left, and the acquisition succeeds.
4. The real-time, high-speed inter-thread data exchange method of claim 1, wherein, if no cache block other than the replacement block holds data, the data-available event is reset.
5. The real-time, high-speed inter-thread data exchange method of claim 1, wherein, if no cache block holding data is found, the data-available event is reset and the acquisition fails.
CN2011100519715A 2011-03-04 2011-03-04 Real-time and high-speed inter-thread data exchange method Active CN102129396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100519715A CN102129396B (en) 2011-03-04 2011-03-04 Real-time and high-speed inter-thread data exchange method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011100519715A CN102129396B (en) 2011-03-04 2011-03-04 Real-time and high-speed inter-thread data exchange method

Publications (2)

Publication Number Publication Date
CN102129396A CN102129396A (en) 2011-07-20
CN102129396B true CN102129396B (en) 2013-07-10

Family

ID=44267486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100519715A Active CN102129396B (en) 2011-03-04 2011-03-04 Real-time and high-speed inter-thread data exchange method

Country Status (1)

Country Link
CN (1) CN102129396B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662771B (en) * 2012-03-03 2013-12-25 西北工业大学 Data interaction method between real-time process and non real-time process based on message mechanism
CN103595653A (en) * 2013-11-18 2014-02-19 福建星网锐捷网络有限公司 Cache distribution method, device and apparatus
CN103970597B (en) * 2014-04-24 2017-06-20 烽火通信科技股份有限公司 Queue implementing method and device are blocked in read-write in a balanced way
CN104616238A (en) * 2015-02-10 2015-05-13 北京嘀嘀无限科技发展有限公司 Method and device for order allocation
CN106557272B (en) * 2015-09-30 2019-07-30 中国科学院软件研究所 A kind of efficient sensor historic data archiving method
CN107729221B (en) * 2017-09-29 2020-12-11 深圳科立讯通信有限公司 Method and device for monitoring messages among threads, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6871266B2 (en) * 1997-01-30 2005-03-22 Stmicroelectronics Limited Cache system
CN101478472A (en) * 2008-10-21 2009-07-08 北京闪联讯通数码科技有限公司 Socket data transmission processing method and apparatus
CN101630276A (en) * 2009-08-18 2010-01-20 深圳市融创天下科技发展有限公司 High-efficiency memory pool access method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003216597A (en) * 2002-01-23 2003-07-31 Hitachi Ltd Multiprocessor system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6871266B2 (en) * 1997-01-30 2005-03-22 Stmicroelectronics Limited Cache system
CN101478472A (en) * 2008-10-21 2009-07-08 北京闪联讯通数码科技有限公司 Socket data transmission processing method and apparatus
CN101630276A (en) * 2009-08-18 2010-01-20 深圳市融创天下科技发展有限公司 High-efficiency memory pool access method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JP 2003-216597 A (laid-open) 2003.07.31

Also Published As

Publication number Publication date
CN102129396A (en) 2011-07-20

Similar Documents

Publication Publication Date Title
CN102129396B (en) Real-time and high-speed inter-thread data exchange method
US9230002B2 (en) High performant information sharing and replication for single-publisher and multiple-subscriber configuration
CN101901207B (en) Operating system of heterogeneous shared storage multiprocessor system and working method thereof
Gelado et al. Throughput-oriented GPU memory allocation
CN101753608B (en) Dispatching method and system of distributed system
CN1311348C (en) Data processing system
Pan et al. The new hardware development trend and the challenges in data management and analysis
CN101013415A (en) Thread aware distributed software system for a multi-processor array
CN101382953A (en) Interface system for accessing file system in user space and file reading and writing method
CN103930875A (en) Software virtual machine for acceleration of transactional data processing
CN103279428B (en) A kind of explicit multi-core Cache consistency active management method towards stream application
CN1320458C (en) Data processing system
CN102521419A (en) Hierarchical storage realization method and system
CN102521028B (en) Transactional memory system under distributed environment
CN102937964A (en) Intelligent data service method based on distributed system
CN103034593A (en) Multi--core processor oriented on-chip lock variable global addressing storage method and device
CN109918450A (en) Based on the distributed parallel database and storage method under analysis classes scene
EP1760581A1 (en) Processing operations management systems and methods
CN101290591B (en) Embedded operating system task switching method and unit
CN103186501A (en) Multiprocessor shared storage method and system
CN102446226A (en) Method for achieving NoSQL key-value storage engine
CN1295609C (en) Data processing system having multiple processors and a communications means in a data processing system
Barthels et al. Designing Databases for Future High-Performance Networks.
CN102110019B (en) Transactional memory method based on multi-core processor and partition structure
CN1052562A (en) Primary memory plate with single-bit set and reset function

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210126

Address after: 209, 2 / F, Lianchuang building, 2 Dongbeiwang Road, Haidian District, Beijing

Patentee after: Anjie Zhongke (Beijing) data Technology Co.,Ltd.

Address before: 100190 No. four, 4 South Street, Haidian District, Beijing, Zhongguancun

Patentee before: Institute of Software, Chinese Academy of Sciences