CN101127685B - An inter-process communication device and inter-process communication method - Google Patents

An inter-process communication device and inter-process communication method

Info

Publication number
CN101127685B
CN101127685B, CN2007101530448A, CN200710153044A
Authority
CN
China
Prior art keywords
data
thread
memory cell
receiving
inter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2007101530448A
Other languages
Chinese (zh)
Other versions
CN101127685A (en)
Inventor
卢勤元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN2007101530448A priority Critical patent/CN101127685B/en
Publication of CN101127685A publication Critical patent/CN101127685A/en
Application granted granted Critical
Publication of CN101127685B publication Critical patent/CN101127685B/en

Abstract

The invention relates to an inter-process communication device and method used for data communication between a sending thread in a sending process and a receiving thread in a receiving process. The device comprises a shared memory unit, a shared message queue, a distribution memory unit and a dispatching unit. The shared memory unit provides the sending thread with storage space for the data that need to be sent to the receiving thread of the receiving process; the shared message queue transfers data notification messages between the sending thread and the dispatching unit; the distribution memory unit lies in the address space of the receiving process and stores the data that need to be sent to the receiving thread of the receiving process; the dispatching unit reads a data notification message from the shared message queue, copies the data from the shared memory unit to the distribution memory unit according to the data address information in the message, and sends the address information of the data in the distribution memory unit to the receiving thread corresponding to the receiving thread identifier carried in the data notification message.

Description

Inter-process communication device and inter-process communication method thereof
Technical field
The present invention relates to an inter-process communication device and an inter-process communication method thereof.
Background art
As the functions of the intelligent-network Service Control Point (SCP) become increasingly complex, the service control point needs to be divided into different modules according to function, such as a service condition control module, a service flow control module, a service management module, an accounting module and a bill processing module. For ease of administration and maintenance, these modules are implemented as independent processes, and each process is further subdivided into different threads that carry out the functions of its module. A typical multi-process, multi-threaded system architecture is shown in Figure 1.
To complete an intelligent-network call flow, a large number of messages must be exchanged within each module and between modules, so there are both intra-process messages and inter-process messages. Sending messages between threads within a process is relatively simple: because the sending thread and the receiving thread are in the address space of the same process, the sending thread only needs to pass a pointer to the message body to the receiving thread through a message queue, and the receiving thread can process the message body through that pointer directly. For communication between processes, however, if the content (data) of the whole message body is placed directly into the message queue, the operating system message queue comes under heavy pressure; a large number of messages may block the operating system message queue and cause the whole service processor to fail. Therefore, a more effective method must be adopted for inter-process communication.
Besides intelligent-network systems, other existing systems in which multiple processes need to communicate with one another face a similar situation.
The conventional way to overcome the above problem is to adopt the inter-process communication mode shown in Figure 2, which combines a message queue with shared memory: the sending process copies the content of the message body (i.e. the data) into the shared memory region corresponding to the receiving process, and at the same time sends the offset address of the content in the shared memory (the start address of the data) and the data length to the receiving process through the message queue.
The inter-process communication mode shown in Figure 2 solves the queue blockage caused by too many messages and by message bodies that carry too much data, but it introduces a new problem: when several sending processes/threads send messages to one receiving process at the same time, they write message bodies into the receiving process's shared memory region simultaneously, which easily produces write conflicts, i.e. several sending processes write data to the same address of the shared memory region at the same time and the data collide. When many sending processes generate a large volume of concurrent messages, the probability of write conflicts is high. Therefore, before a sending process/thread copies message content into the shared memory region of the receiving process, it must first acquire an inter-process mutex in order to guarantee data integrity. Adding a mutual-exclusion operation before every write, however, reduces the concurrency of the system and degrades overall performance, and a large number of mutex operations also consumes considerable system resources.
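The serialization this imposes can be illustrated with a minimal sketch of the prior-art send path (not the method claimed below); the region layout, the helper notify_receiver() and the use of a process-shared pthread mutex are assumptions made for illustration only.

```c
#include <pthread.h>
#include <stddef.h>
#include <string.h>

/* Illustrative layout of the receiving process's shared memory region. */
typedef struct {
    pthread_mutex_t lock;          /* process-shared mutex guarding the region */
    size_t          used;          /* next free offset in the data area        */
    char            data[1 << 20];
} shared_region_t;

/* Assumed helper: sends {offset, length} to the receiver via a message queue. */
void notify_receiver(size_t offset, size_t length);

void prior_art_send(shared_region_t *region, const void *body, size_t length)
{
    pthread_mutex_lock(&region->lock);            /* serializes every sender */
    size_t offset = region->used;
    memcpy(region->data + offset, body, length);  /* copy the message body   */
    region->used += length;
    pthread_mutex_unlock(&region->lock);

    notify_receiver(offset, length);              /* only offset and length go
                                                     through the message queue */
}
```

The invention described below removes this lock by giving every sending thread its own storage area in the shared memory.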
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of prior-art inter-process communication methods and to provide an inter-process communication device and an inter-process communication method that achieve correct inter-process communication without using mutual-exclusion operations.
In order to solve the above problem, the invention provides an inter-process communication device used for data communication between a sending thread of a sending process and a receiving thread of a receiving process, characterized in that the device comprises a shared memory unit, a shared message queue unit, a distribution memory unit and a dispatching unit; wherein,
the shared memory unit is configured to provide the sending thread with storage space for the data that need to be sent to the receiving thread of the receiving process;
the shared message queue unit is configured to transfer data notification messages between the sending thread and the dispatching unit;
the distribution memory unit lies in the address space of the receiving process and is configured to store the data that need to be sent to the receiving thread of the receiving process;
the dispatching unit is configured to read a data notification message from the shared message queue unit, copy the data from the shared memory unit to the distribution memory unit according to the data address information contained in the message, and send the address information of the data in the distribution memory unit to the receiving thread corresponding to the receiving thread identifier contained in the data notification message; the receiving thread reads the data and releases the heap memory corresponding to the distribution memory unit.
Furthermore, the dispatching unit is a thread of the receiving process.
Furthermore, the device also comprises a dispatch message queue unit, configured to send the address information of the data stored in the distribution memory unit to the receiving thread corresponding to the receiving thread identification information.
Furthermore, the shared memory unit includes a storage area allocated to the sending thread; the sending thread stores the data to be sent to the receiving thread in the available address section of this storage area.
Furthermore, the distribution memory unit is a pre-allocated heap memory pool in the address space of the receiving process, or heap memory dynamically allocated by the dispatching unit as required.
The present invention also provides an inter-process communication method, characterized in that, when a sending thread of a sending process sends data to a receiving thread of a receiving process, the method comprises the following steps:
A: the sending thread stores the data in the available address section of the shared memory unit, and sends a data notification message to the dispatching unit through the shared message queue; the data notification message contains the address information of the data in the shared memory unit and the receiving thread identifier;
B: after receiving the above data notification message, the dispatching unit obtains the address information of the data and the receiving thread identifier contained in the message;
C: the dispatching unit copies the data from the shared memory unit to the distribution memory unit, and sends the address information of the data in the distribution memory unit to the receiving thread corresponding to the receiving thread identifier; the receiving thread reads the data and releases the heap memory corresponding to the distribution memory unit.
Furthermore, the dispatching unit is a thread of the receiving process.
Furthermore, in step C, the dispatching unit sends the address information of the data in the distribution memory unit to the receiving thread through a dispatch message queue.
Furthermore, the shared memory unit includes a storage area allocated to the sending thread; in step A, the sending thread stores the data in the available address section of this storage area.
Furthermore, the following step is included between steps B and C:
B1: the dispatching unit allocates heap memory in the address space of the receiving process and uses this heap memory as the distribution memory unit;
and the following steps are included after step C:
D: the receiving thread reads the data from the distribution memory unit according to the address information of the data and processes the data;
E: the receiving thread releases the heap memory corresponding to the distribution memory unit.
As can be seen from the above, because the present invention allocates an independent storage area in the shared memory unit for each sending thread, the data collisions caused by several sending processes/threads writing to the same address space are effectively prevented and the concurrency of system processing is improved. In addition, because a dispatching unit thread in the receiving process promptly copies and gives notice of the data sent by the sending processes/threads, the invention effectively avoids the situation in which a mismatch between the sending and receiving rates leaves data unread in time or even overwritten.
Description of drawings
Figure 1 is a schematic diagram of a typical multi-process, multi-threaded system architecture;
Figure 2 is a schematic diagram of an inter-process communication mode in the prior art;
Figure 3 is a schematic structural diagram of an inter-process communication device of the present invention;
Figure 4 is a flow chart of an inter-process communication method of the present invention.
Embodiment
The present invention is described below with reference to the drawings and embodiments.
Figure 3 is a schematic structural diagram of an inter-process communication device of the present invention. As shown in Figure 3, the inter-process communication device is used for data communication between a sending process and a receiving process. The sending process (process A) comprises one or more sending threads: thread A1, ..., thread An, where n is the number of sending threads in process A; the receiving process (process B) comprises one or more receiving threads: thread B1, ..., thread Bm, where m is the number of receiving threads in process B.
The device comprises a shared memory unit, a shared message queue, a dispatching unit, a distribution memory unit and a dispatch message queue. Wherein:
The shared memory unit provides storage space for each sending thread in the sending process; this storage space is used to store the data to be sent to other processes. Different storage areas can be allocated in the shared memory unit to different sending threads of the sending process, and a thread that wants to send data to another process deposits the data in the area allocated to it.
The shared message queue is used to transfer data notification messages between the sending process and the dispatching unit of the receiving process. Such a message usually contains the offset address in the shared memory unit (the start address of the data) and the data length; besides the message proper (i.e. the start address and length of the data), it may also carry forwarding destination identification information that indicates the recipient of the message, namely the identification of the receiving thread in the receiving process.
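A minimal sketch of such a data notification message, laid out for a System V message queue (whose first field must be a positive long mtype), might look as follows; the field names are illustrative assumptions and are not fixed by the patent.

```c
#include <stddef.h>

typedef struct {
    long   mtype;        /* System V message type, e.g. 1 for data notifications    */
    size_t offset;       /* start address of the data within the shared memory unit */
    size_t length;       /* data length; may be omitted if agreed on in advance     */
    int    recv_thread;  /* identification of the destination receiving thread      */
} data_notify_msg_t;
```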
The start address and length of the data may together be referred to as the address information of the data; if the sender and the receiver agree on the data length in advance, the address information need not include the length.
The dispatching unit reads data notification messages from the shared message queue, learns the identification of the receiving thread from the forwarding destination identification contained in the message, and obtains the offset address in the shared memory unit and the data length from the message; it then reads the data from the shared memory unit according to this offset address and data length and copies the data into the distribution memory unit. The dispatching unit also sends the offset address of the data in the distribution memory unit (the start address of the data) and the data length to the thread corresponding to the receiving thread identification through the dispatch message queue.
The dispatching unit normally lies in the address space of the receiving process; that is, the dispatching unit is a dedicated thread of the receiving process that implements the data and message distribution function.
The distribution memory unit is used to store the data and is usually located in the address space of the receiving process; it is normally heap memory pre-allocated in the address space of the receiving process, but it may also be heap memory dynamically allocated by the dispatching unit as required.
The dispatch message queue is used to transfer messages between the dispatching unit and the threads of the receiving process; such a message usually contains the offset address in the distribution memory unit (the start address of the data) and the data length. Because the receiving process has several threads that usually share this dispatch message queue, the message also usually contains the receiving thread identification, which lets a recipient confirm whether it should read the message.
On UNIX systems, the shared message queue and the dispatch message queue described above are normally System V message queues.
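For example, the shared message queue could be realized roughly as follows with the System V API, reusing the data_notify_msg_t layout sketched above; the key value 0x5A01 and the minimal error handling are illustrative assumptions, not part of the patent.

```c
#include <stddef.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/types.h>

typedef struct {                 /* same layout as the sketch above */
    long   mtype;
    size_t offset, length;
    int    recv_thread;
} data_notify_msg_t;

int open_notify_queue(void)
{
    /* Both processes obtain the same queue from an agreed-upon key. */
    int qid = msgget((key_t)0x5A01, IPC_CREAT | 0666);
    if (qid == -1)
        perror("msgget");
    return qid;
}

int send_notification(int qid, size_t offset, size_t length, int recv_thread)
{
    data_notify_msg_t m = { .mtype = 1, .offset = offset,
                            .length = length, .recv_thread = recv_thread };
    /* msgsnd() takes the payload size, i.e. the struct minus the mtype field. */
    return msgsnd(qid, &m, sizeof(m) - sizeof(long), 0);
}

int receive_notification(int qid, data_notify_msg_t *out)
{
    /* The dispatching unit blocks here until a notification arrives. */
    return (int)msgrcv(qid, out, sizeof(*out) - sizeof(long), 0, 0);
}
```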
Figure 4 is a flow chart of an inter-process communication method of the present invention. As shown in Figure 4, when a sending thread of the sending process (for example, thread A1) needs to send data to a receiving thread of the receiving process (for example, thread B1), the method comprises the following steps:
101: thread A1 stores the data to be sent to thread B1 in the currently available address section of the storage area allocated to thread A1 in the shared memory unit.
Because the storage area allocated to A1 has a limited size, thread A1 needs to reuse the storage area cyclically, while ensuring that, when several segments of data are sent in succession, a later segment does not overwrite an earlier segment that has not yet been read. The size of this storage area depends on how frequently thread A1 writes data segments and on the data volume: it must be large enough that, as the area is reused cyclically, later data never overwrite data that have not yet been read.
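A minimal sketch of how thread A1 might reuse its private storage area in this cyclic way is shown below; it assumes, as stated above, that the area is large enough that data are always read before being overwritten, and the names sender_region_t and write_to_region() are hypothetical.

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    char  *base;       /* start of the storage area allocated to this sender */
    size_t size;       /* total size of the storage area                     */
    size_t write_off;  /* next offset to write at                            */
} sender_region_t;

/* Writes one data segment (len <= size) and returns its offset in the area. */
size_t write_to_region(sender_region_t *r, const void *data, size_t len)
{
    if (r->write_off + len > r->size)
        r->write_off = 0;              /* wrap around to the start of the area */
    size_t off = r->write_off;
    memcpy(r->base + off, data, len);  /* no lock needed: the area is private  */
    r->write_off += len;
    return off;                        /* reported in the data notification    */
}
```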
102: thread A1 sends the offset address and the data length of the above data in the shared memory unit to the dispatching unit through the shared message queue.
As explained above, the shared message queue is generally a system message queue, so the offset address and data length are normally carried in a data notification message and passed to the dispatching unit; the data notification message must also contain the receiving thread identification, i.e. the identification of thread B1.
103: after receiving the above data notification message, the dispatching unit obtains the offset address (the start address of the data), the data length and the receiving thread identification from the message.
104: the dispatching unit allocates heap memory in the address space of the receiving process as the distribution memory unit, reads the data from the shared memory unit according to the above offset address and data length, and copies the data into the distribution memory unit.
105: the dispatching unit sends the address of the distribution memory unit, i.e. the start address of the data, together with the data length, through the dispatch message queue to the receiving thread corresponding to the above receiving thread identification, namely thread B1.
Likewise, because the dispatch message queue is generally a system message queue, the start address and length of the data are normally carried in a data forwarding message and passed to the receiving thread, i.e. thread B1; this data forwarding message must also contain the receiving thread identification, i.e. the identification of thread B1.
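Steps 103 to 105 together form the main loop of the dispatching unit. A hedged sketch follows: it reuses the notification layout above, treats the receiving thread identification as the System V message type (assumed positive) so that each receiving thread can fetch only its own forwarding messages, and uses malloc() as the dynamically allocated distribution memory unit; the data_forward_msg_t layout is an assumption, not fixed by the patent.

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <sys/msg.h>

typedef struct { long mtype; size_t offset, length; int recv_thread; } data_notify_msg_t;

typedef struct {
    long   mtype;   /* set to the receiving thread identification (assumed > 0)  */
    void  *data;    /* start address of the copy in the distribution memory unit */
    size_t length;  /* data length                                               */
} data_forward_msg_t;

void dispatch_loop(int notify_qid, int dispatch_qid, const char *shm_base)
{
    data_notify_msg_t n;
    for (;;) {
        /* Step 103: read the next data notification message. */
        if (msgrcv(notify_qid, &n, sizeof(n) - sizeof(long), 0, 0) == -1)
            continue;

        /* Step 104: allocate heap memory and copy the data out of shared memory. */
        void *copy = malloc(n.length);
        if (copy == NULL)
            continue;
        memcpy(copy, shm_base + n.offset, n.length);

        /* Step 105: forward the heap address and length to the receiving thread. */
        data_forward_msg_t f = { .mtype = n.recv_thread,
                                 .data = copy, .length = n.length };
        msgsnd(dispatch_qid, &f, sizeof(f) - sizeof(long), 0);
    }
}
```

Passing a raw heap address through the dispatch message queue is valid here only because the dispatching unit and the receiving threads live in the same address space.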
106: after receiving the data forwarding message, thread B1 reads the data from the distribution memory unit according to the start address and length contained in the message, processes the data accordingly, and releases the heap memory corresponding to the distribution memory unit.
Because thread B1 and the dispatching unit are in the address space of the same process, thread B1 can use the above start address to release the heap memory of the distribution memory unit directly once the data have been used.
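A matching sketch of a receiving thread that reads its forwarding messages, processes the data and releases the heap memory directly is given below; process_data() is a hypothetical application callback, and the mtype-based routing follows the assumption made in the dispatching sketch above.

```c
#include <stddef.h>
#include <stdlib.h>
#include <sys/msg.h>

typedef struct { long mtype; void *data; size_t length; } data_forward_msg_t;

void process_data(const void *data, size_t length);   /* assumed application handler */

void receiving_thread_loop(int dispatch_qid, long my_thread_id)
{
    data_forward_msg_t f;
    for (;;) {
        /* Step 106: fetch only messages addressed to this thread (mtype match). */
        if (msgrcv(dispatch_qid, &f, sizeof(f) - sizeof(long), my_thread_id, 0) == -1)
            continue;

        process_data(f.data, f.length);   /* use the copy in the heap             */
        free(f.data);                     /* same address space, so the receiving */
                                          /* thread may release the memory itself */
    }
}
```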
Alternatively, thread B1 can notify the dispatching unit that the data in the heap memory have been used, and the dispatching unit then releases this address space itself. For example, thread B1 sends a memory release message to the dispatching unit, notifying it to release the heap memory of the distribution memory unit; the dispatching unit can also be notified through a global variable. Since the way in which the receiving thread notifies the dispatching unit to release memory is an intra-process, inter-thread communication mechanism and not the focus of the present invention, it is not described further here.
Based on the basic principle of the present invention, the above embodiment can be varied in many ways, for example:
The distribution memory unit described above can be a pre-allocated heap memory pool managed by the dispatching unit, which selects one or more segments of heap memory from the pool as required; when a burst of data needs to be copied and the pre-allocated pool cannot satisfy the allocation, heap memory can be allocated dynamically instead. This reduces the number of memory allocation/release operations.
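A minimal sketch of such a pool with a dynamic fallback is shown below; the block geometry, the names dist_alloc()/dist_free() and the absence of locking are illustrative assumptions (if the receiving threads call dist_free() themselves rather than notifying the dispatching unit, the in_use flags would need an in-process lock).

```c
#include <stdlib.h>

#define BLOCK_SIZE  4096      /* illustrative block geometry */
#define BLOCK_COUNT 256

/* Pre-allocated pool (shown as a static array for brevity; the patent
 * describes a pre-allocated heap memory pool). */
static char pool[BLOCK_COUNT][BLOCK_SIZE];
static int  in_use[BLOCK_COUNT];

void *dist_alloc(size_t len)
{
    if (len <= BLOCK_SIZE) {
        for (int i = 0; i < BLOCK_COUNT; i++) {
            if (!in_use[i]) {            /* reuse a pre-allocated block */
                in_use[i] = 1;
                return pool[i];
            }
        }
    }
    return malloc(len);                  /* burst or oversized data: fall back */
}

void dist_free(void *p)
{
    char *base = (char *)pool;
    if ((char *)p >= base && (char *)p < base + sizeof(pool))
        in_use[((char *)p - base) / BLOCK_SIZE] = 0;   /* return block to pool */
    else
        free(p);                                       /* was dynamically allocated */
}
```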
In summary, because the present invention allocates an independent storage area in the shared memory unit for each sending thread, the data collisions caused by several sending processes/threads writing to the same address space are effectively prevented and the concurrency of system processing is improved. In addition, because a dispatching unit thread in the receiving process promptly copies and gives notice of the data sent by the sending processes/threads, the invention effectively avoids the situation in which a mismatch between the sending and receiving rates leaves data unread in time or even overwritten.

Claims (10)

1. An inter-process communication device, used for data communication between a sending thread of a sending process and a receiving thread of a receiving process, characterized in that the device comprises: a shared memory unit, a shared message queue unit, a distribution memory unit and a dispatching unit; wherein,
the shared memory unit is configured to provide the sending thread with storage space for the data that need to be sent to the receiving thread of the receiving process;
the shared message queue unit is configured to transfer data notification messages between the sending thread and the dispatching unit;
the distribution memory unit lies in the address space of the receiving process and is configured to store the data that need to be sent to the receiving thread of the receiving process;
the dispatching unit is configured to read a data notification message from the shared message queue unit, copy the data from the shared memory unit to the distribution memory unit according to the data address information contained in the message, and send the address information of the data in the distribution memory unit to the receiving thread corresponding to the receiving thread identifier contained in the data notification message; the receiving thread reads the data from the distribution memory unit according to the address information of the data and releases the heap memory corresponding to the distribution memory unit.
2. The inter-process communication device of claim 1, characterized in that the dispatching unit is a thread of the receiving process.
3. The inter-process communication device of claim 1, characterized in that the device also comprises a dispatch message queue unit, configured to send the address information of the data stored in the distribution memory unit to the receiving thread corresponding to the receiving thread identification information.
4. The inter-process communication device of claim 1, characterized in that the shared memory unit includes a storage area allocated to the sending thread; the sending thread stores the data to be sent to the receiving thread in the available address section of this storage area.
5. The inter-process communication device of claim 2, characterized in that the distribution memory unit is a pre-allocated heap memory pool in the address space of the receiving process, or heap memory dynamically allocated by the dispatching unit as required.
6. An inter-process communication method, characterized in that, when a sending thread of a sending process sends data to a receiving thread of a receiving process, the method comprises the following steps:
A: the sending thread stores the data in the available address section of a shared memory unit, and sends a data notification message to a dispatching unit through a shared message queue; the data notification message contains the address information of the data in the shared memory unit and the receiving thread identifier;
B: after receiving the above data notification message, the dispatching unit obtains the address information of the data and the receiving thread identifier contained in the message;
C: the dispatching unit copies the data from the shared memory unit to a distribution memory unit, and sends the address information of the data in the distribution memory unit to the receiving thread corresponding to the receiving thread identifier; the receiving thread reads the data from the distribution memory unit according to the address information of the data and releases the heap memory corresponding to the distribution memory unit.
7. The inter-process communication method of claim 6, characterized in that the dispatching unit is a thread of the receiving process.
8. The inter-process communication method of claim 6, characterized in that, in step C, the dispatching unit sends the address information of the data in the distribution memory unit to the receiving thread through a dispatch message queue.
9. The inter-process communication method of claim 6, characterized in that the shared memory unit includes a storage area allocated to the sending thread; in step A, the sending thread stores the data in the available address section of this storage area.
10. The inter-process communication method of claim 7, characterized in that
the following step is included between steps B and C:
B1: the dispatching unit allocates heap memory in the address space of the receiving process and uses this heap memory as the distribution memory unit.
CN2007101530448A 2007-09-20 2007-09-20 An inter-process communication device and inter-process communication method Expired - Fee Related CN101127685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2007101530448A CN101127685B (en) 2007-09-20 2007-09-20 An inter-process communication device and inter-process communication method


Publications (2)

Publication Number Publication Date
CN101127685A CN101127685A (en) 2008-02-20
CN101127685B true CN101127685B (en) 2011-05-25

Family

ID=39095613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007101530448A Expired - Fee Related CN101127685B (en) 2007-09-20 2007-09-20 An inter-process communication device and inter-process communication method

Country Status (1)

Country Link
CN (1) CN101127685B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101826003A (en) * 2010-04-16 2010-09-08 中兴通讯股份有限公司 Multithread processing method and device
CN102906706A (en) * 2010-05-24 2013-01-30 索尼电脑娱乐公司 Information processing device and information processing method
CN102799490B (en) * 2011-05-27 2014-08-06 北京神州泰岳软件股份有限公司 System and method for realizing one-to-many interprocess communication
WO2012159305A1 (en) * 2011-06-28 2012-11-29 华为技术有限公司 Distributed multi-process communication method and device
CN102662771B (en) * 2012-03-03 2013-12-25 西北工业大学 Data interaction method between real-time process and non real-time process based on message mechanism
CN102662773B (en) * 2012-03-13 2014-05-07 中冶南方工程技术有限公司 Structured document communication system between multiple processes
CN103634707A (en) * 2012-08-23 2014-03-12 上海斐讯数据通信技术有限公司 Communication method
CN103096168B (en) * 2012-12-25 2016-03-02 四川九洲电器集团有限责任公司 A kind of data communication method for parallel processing based on IPTV set top box
CN103164359B (en) * 2013-01-29 2017-04-05 北京雪迪龙科技股份有限公司 A kind of pipeline communication method and apparatus
CN107783845B (en) * 2016-08-25 2021-04-13 阿里巴巴集团控股有限公司 Message transmission system, method and device
CN108733496B (en) * 2017-04-24 2023-07-14 腾讯科技(上海)有限公司 Event processing method and device
CN107678866B (en) * 2017-09-22 2020-02-21 北京东土科技股份有限公司 Partition communication method and device based on embedded operating system
CN107819764B (en) * 2017-11-13 2020-06-02 重庆邮电大学 Evolution method of C-RAN-oriented data distribution mechanism
CN108984321B (en) * 2018-06-29 2021-03-19 Oppo(重庆)智能科技有限公司 Mobile terminal, limiting method for interprocess communication of mobile terminal and storage medium
CN110597640A (en) * 2019-08-29 2019-12-20 深圳市优必选科技股份有限公司 Inter-process data transmission method and device, terminal and computer storage medium
CN110955535B (en) * 2019-11-07 2022-03-22 浪潮(北京)电子信息产业有限公司 Method and related device for calling FPGA (field programmable Gate array) equipment by multi-service request process
CN112148444A (en) * 2020-09-04 2020-12-29 珠海格力电器股份有限公司 Data processing method, device and system and storage medium
CN112416625B (en) * 2020-11-30 2024-04-09 深信服科技股份有限公司 Copy-free interprocess communication system and method
CN115080258A (en) * 2021-03-11 2022-09-20 华为技术有限公司 Data transmission system and related equipment


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1289962A (en) * 1999-09-23 2001-04-04 国际商业机器公司 Establishment of multiple process spanned communication programme in multiple linear equation running environment
CN1859327A (en) * 2006-02-09 2006-11-08 华为技术有限公司 Method, device and system for transfer news

Also Published As

Publication number Publication date
CN101127685A (en) 2008-02-20

Similar Documents

Publication Publication Date Title
CN101127685B (en) An inter-process communication device and inter-process communication method
CN104915151B (en) A kind of memory excess distribution method that active is shared in multi-dummy machine system
CN1128406C (en) Interrupt architecture for non-uniform memory access (NUMA) data processing system
CN102255926B (en) Method for allocating tasks in Map Reduce system, system and device
CN101859279B (en) Memory allocation and release method and device
CN101419561A (en) Resource management method and system in isomerization multicore system
CN106980595B (en) The multiprocessor communication system and its communication means of shared physical memory
CN109933438A (en) High speed shared drive data receiving-transmitting system
CN103218329A (en) Digital signal processing data transfer
CN102855216A (en) Improvent for performance of multiprocessor computer system
CN101853210A (en) Memory management method and device
CN101707565A (en) Method and device for transmitting and receiving zero-copy network message
CN112463400A (en) Real-time data distribution method and device based on shared memory
CN101470636B (en) Message read-write method and apparatus
CN114155026A (en) Resource allocation method, device, server and storage medium
CN102088719A (en) Method, system and device for service scheduling
CN114625533A (en) Distributed task scheduling method and device, electronic equipment and storage medium
CN109614223A (en) Hardware resource dispatching method, device and hardware resource controlling equipment
CN102375790A (en) Shared bus transmission system and method
CN105049372A (en) Method of expanding message middleware throughput and system thereof
CN101189579B (en) Method and device for using semaphores for multi-threaded processing
CN103823712A (en) Data flow processing method and device for multi-CPU virtual machine system
CN103853676A (en) PCIe (Peripheral Component Interface express) bus based channel allocating, releasing, data transmitting method and system
CN101950272B (en) Memory management method and device in embedded system
CN109639599B (en) Network resource scheduling method and system, storage medium and scheduling device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110525

Termination date: 20160920

CF01 Termination of patent right due to non-payment of annual fee