CN112596889B - Method for managing chained memory based on state machine - Google Patents

Method for managing chained memory based on state machine

Info

Publication number
CN112596889B
Authority
CN
China
Prior art keywords
memory
state
queue
state queue
state machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011604317.8A
Other languages
Chinese (zh)
Other versions
CN112596889A (en)
Inventor
薛峰
周钟海
赵严
姚毅
杨艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luster LightTech Co Ltd
Original Assignee
Luster LightTech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luster LightTech Co Ltd filed Critical Luster LightTech Co Ltd
Priority to CN202011604317.8A priority Critical patent/CN112596889B/en
Publication of CN112596889A publication Critical patent/CN112596889A/en
Application granted granted Critical
Publication of CN112596889B publication Critical patent/CN112596889B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources to service a request
    • G06F 9/5011 - Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016 - Allocation of resources to service a request, the resource being the memory
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System (AREA)

Abstract

The application provides a method for managing chained memory based on a state machine. The method creates a state machine variable with four state queues, idle, preprocessing, running and pre-reset, whose adjacent queues are called cyclically in that order. When an applicant applies to call the memory space of the state machine variable under any state queue, the object pointer contained in the first memory object of the first state queue is returned to the applicant, the pointer is migrated to the tail of the second state queue, and the memory object is deleted from the first state queue. Because the state machine variables created in this way cover all states of a memory object during operation, the state management of the memory is clearer and the safety mechanism is stronger; the confusion and safety problems that arise in the prior art when a large amount of memory is accessed by multiple threads are resolved, and the use efficiency of the memory space is improved.

Description

Method for managing chained memory based on state machine
Technical Field
The application relates to the technical field of computer software programming, in particular to a method for managing chained memories based on a state machine.
Background
With the growth of memory-device capacity in computer operating systems, the improved running efficiency of 64-bit software and the enlarged addressing range of system memory, program developers plan memory more loosely during design, and the frequency of memory use has multiplied compared with the past. The more complicated planning and operation bring greater risk and workload, such as out-of-bounds memory access, overlapping accesses and synchronization deadlocks, which have gradually become the main problems troubling developers. Consequently, more attention and creative development are being concentrated on memory operation and management, and more related strategies keep emerging.
In a computer system, the linked list is a common data structure. In general, several threads can read the data of each node of a linked list simultaneously without problems, and such concurrent reads do not corrupt data. However, when multiple threads access a large amount of memory at the same time and add or delete nodes of the linked list, confusion and safety problems occur, which greatly reduces the use efficiency of the memory space.
Disclosure of Invention
The application provides a method for managing chained memory based on a state machine, which groups the memory blocks of a memory space and assigns different operation authorities according to their use states, thereby achieving both safety and high efficiency.
The technical scheme adopted by the application for solving the technical problems is as follows:
a method for managing chained memories based on a state machine, comprising the steps of:
creating state machine variables, wherein the state machine variables comprise four state queues of idle, preprocessing, running and pre-resetting, and the adjacent state queues are sequentially and circularly called according to the sequence of idle, preprocessing, running and pre-resetting;
recording corresponding linked list objects and thread mutex objects of the state machine variables in different states, and setting each linked list object in a clearing state;
creating a specified number of memory objects according to preset parameters, wherein each memory object is a memory information structure body comprising a memory sequence number and a memory block address;
requesting to allocate memory spaces with the same number and size as the memory objects, and recording the first address of each memory block in the memory space into a memory block address, wherein the memory block address is a memory block address corresponding to a memory sequence number in the memory objects;
storing the memory object recording the memory block head address to any state queue of the state machine;
when an applicant applies to call a memory space of the state machine variable in any state queue, an object pointer contained in a first memory object in a first state queue is returned to the applicant, and is migrated to a tail position of a second state queue, and the memory object in the first state queue where the object pointer is located is deleted, wherein the first state is any state of the state machine variable, and the second state is the next state adjacent to the first state.
Optionally, when the applicant applies to call the memory space under the idle state queue, the object pointer contained in the first memory object in the idle state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the preprocessing state queue, and the memory object in the idle state queue where the object pointer is located is deleted.
Optionally, when the applicant applies to call the memory space under the preprocessing state queue, the object pointer contained in the first memory object in the preprocessing state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the running state queue, and the memory object in the preprocessing state queue where the object pointer is located is deleted.
Optionally, when the applicant applies to call the memory space under the running state queue, the object pointer contained in the first memory object in the running state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the pre-reset state queue, and the memory object in the running state queue where the object pointer is located is deleted.
Optionally, when the applicant applies to call the memory space under the pre-reset state queue, the object pointer contained in the first memory object in the pre-reset state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the idle state queue, and the memory object in the pre-reset state queue where the object pointer is located is deleted.
Optionally, the method further comprises:
and in the operation of calling the memory space or resetting the length of the linked list or the size of the memory block, deleting and releasing all the memory objects in the original memory space.
Optionally, the method further comprises:
when the applicant applies to call the memory space in the different states, the mutex object of the corresponding state queue is set to the non-signaled state, and is reset to the signaled state after the operation is finished.
The technical scheme provided by the application has the following beneficial technical effects:
the application provides a method for managing chained memories based on a state machine, which comprises the steps of creating state machine variables of idle, preprocessing, running and pre-resetting four state queues, sequentially and circularly calling adjacent state queues according to the order of the idle, preprocessing, running and pre-resetting, returning an object pointer contained in a first memory object in a first state queue to an applicant when the applicant applies for calling a memory space of the state machine variable under any state queue, simultaneously transferring the object pointer to a queue tail position of a second state queue, and deleting the memory object in the first state queue where the object pointer is located, wherein the first state is any state of the state machine variable, and the second state is the next state adjacent to the first state. According to the method provided by the application, through creating the state machine variables in different states, the state machine variables cover all states of the memory object in operation, so that the state management of the memory is more clear, the safety mechanism is better, the problems of disorder and safety generated when the prior art accesses a large amount of memories in a multithreading manner are solved, and the use efficiency of the memory space is improved.
Drawings
In order to illustrate the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below. It will be obvious to those skilled in the art that other drawings can be derived from these drawings without inventive effort.
FIG. 1 is a structural diagram of the four state-machine queues according to an embodiment of the present application;
FIG. 2 is a structural diagram of the memory information structure object according to an embodiment of the present application;
FIG. 3 is a diagram of the memory queue configuration when the memory objects are stored in the idle state queue;
FIG. 4 is a schematic diagram of a migration from the idle state queue to the preprocessing state queue;
FIG. 5 is a schematic diagram of a migration from the preprocessing state queue to the running state queue;
FIG. 6 is a schematic diagram of a migration from the running state queue to the pre-reset state queue;
FIG. 7 is a schematic diagram of a migration from the pre-reset state queue to the idle state queue.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present application, the technical solutions in the embodiments of the application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
The method for managing chained memories based on a state machine provided by the application is described in detail below by means of specific embodiments.
First, a brief description will be given of a usage scenario of the present solution.
The technical scheme of the application is suitable for managing system memory space. As the capacity of memory devices in computer operating systems grows, memory is used far more frequently than before, and when multiple threads access a large amount of memory at the same time and add or delete linked-list nodes, confusion can result. The technical scheme therefore provides a chained-memory management method based on the state variables of a state machine: the state machine variables cover all states a memory object takes during operation and enable standard management of the memory objects in the memory space under the different states.
The method for managing the chained memory based on the state machine provided by the embodiment of the application comprises the following steps:
s1: and creating state machine variables, wherein the state machine variables comprise four state queues of idle, preprocessing, running and pre-resetting, and calling adjacent state queues in turn in the order of idle, preprocessing, running and pre-resetting.
Fig. 1 shows the structure of the four state-machine queues. The four state queues cover all states of a memory object during operation and are called cyclically in the order of idle, preprocessing, running and pre-reset: from the idle state queue to the preprocessing state, from the preprocessing state to the running state, from the running state to the pre-reset state, and finally from the pre-reset state back to the idle state.
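A minimal C++ sketch of this cycle is given below; the identifiers are illustrative assumptions rather than anything prescribed by the application.

```cpp
// Illustrative sketch of step S1: the four states and their cyclic calling order.
// The names State and NextState are assumptions; the patent does not name them.
enum class State { Idle = 0, Preprocess = 1, Running = 2, PreReset = 3 };

// Adjacent state queues are called cyclically:
// Idle -> Preprocess -> Running -> PreReset -> Idle -> ...
constexpr State NextState(State s) {
    return static_cast<State>((static_cast<int>(s) + 1) % 4);
}
```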
S2: and recording corresponding linked list objects and thread mutex objects of the state machine variables in different states, and setting each linked list object in a clearing state.
Each linked list object is made up of nodes that contain a data field storing a data element and a pointer field storing the address of the next node; the pointer field holds the object pointer of the linked list object. The thread mutex object ensures that the memory objects do not interfere with one another while they are being called and that the threads operate independently of each other.
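One possible shape for this per-state bookkeeping is sketched below, assuming std::list plays the role of the linked-list object and std::mutex the role of the thread mutex object.

```cpp
#include <list>
#include <mutex>

struct MemObject;   // the memory information structure body created in step S3

// Sketch of step S2: each state of the state machine owns a linked-list
// object and a thread mutex object.
struct StateQueue {
    std::list<MemObject*> objects;   // linked-list object for this state
    std::mutex            mtx;       // thread mutex object for this state
};

// One queue per state (idle, preprocessing, running, pre-reset). Default
// construction leaves every list empty, i.e. each linked-list object starts
// in the cleared state required by step S2.
StateQueue queues[4];
```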
S3: and creating a specified number of memory objects according to preset parameters, wherein each memory object is a memory information structure body comprising a memory sequence number and a memory block address.
Fig. 2 shows the structure of the memory information structure object. Besides the memory sequence number and the memory block address, each memory object also includes member variables such as a name, a size, remark information and a mutex object.
It should be noted that the memory objects do not interfere with each other; they may be called between the different state queues in sequence, or a specific memory object may be selected and called between the state queues according to system requirements.
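A hypothetical C++ layout of this memory information structure body could look as follows; only the sequence number and block address are required by the application, and the remaining member names are assumptions drawn from Fig. 2.

```cpp
#include <cstddef>
#include <mutex>
#include <string>

// Sketch of the memory information structure body of step S3 and Fig. 2.
struct MemObject {
    int          seqNo     = 0;        // memory sequence number
    void*        blockAddr = nullptr;  // first (head) address of the memory block
    std::string  name;                 // identification name
    std::size_t  size      = 0;        // memory block size in bytes
    std::string  remark;               // remark information
    std::mutex   mtx;                  // per-object mutex object
};
```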
S4: requesting to allocate memory spaces with the same number and size as the memory objects, setting the content as hexadecimal values, and recording the head address of each memory block in the memory space into a memory block address, wherein the memory block address is the memory block address of the corresponding memory sequence number in the memory objects.
S5: the memory object for recording the first address of the memory block is stored in any state queue of the state machine, for example, as shown in fig. 3, the memory object for recording the first address of the memory block is stored in an idle state queue of the state machine, so as to complete the initialization state. As shown in fig. 3, the memory objects may call the memory objects in different state queues according to the sequence of the memory sequence numbers, or may call the memory objects in specific sequence numbers according to the system requirements.
S6: when an applicant applies to call a memory space of the state machine variable in any state queue, an object pointer contained in a first memory object in a first state queue is returned to the applicant, and is migrated to a tail position of a second state queue, and the memory object in the first state queue where the object pointer is located is deleted, wherein the first state is any state of the state machine variable, and the second state is the next state adjacent to the first state.
It should be noted that an application to call the memory space of the state machine variable under any state queue covers the cases in which a migration instruction is received to migrate a memory object from the idle state queue to the preprocessing state queue, from the preprocessing state queue to the running state queue, from the running state queue to the pre-reset state queue, or from the pre-reset state queue to the idle state queue.
As shown in fig. 4, when the applicant applies to call the memory space in the idle state queue, that is, when the user applies to use the memory in the idle state queue, the object pointer contained in the first memory object in the idle state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the preprocessing state queue, and the memory object in the idle state queue where the object pointer is located is deleted.
As shown in fig. 5, when the applicant applies to call the memory space under the preprocessing state queue, that is, after the memory blocks in the preprocessing state queue are assigned and filled, the object pointer contained in the first memory object in the preprocessing state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the running state queue, and the memory object in the preprocessing state queue where the object pointer is located is deleted.
As shown in fig. 6, when the applicant applies to call the memory space under the running state queue, that is, after the use of the memory block of the running state queue is completed, the object pointer contained in the first memory object in the running state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the pre-reset state queue, and the memory object in the running state queue where the object pointer is located is deleted.
As shown in fig. 7, when the applicant applies to call the memory space in the pre-reset state queue, that is, when the user applies to release the memory, the object pointer contained in the first memory object in the pre-reset state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the idle state queue, and the memory object in the pre-reset state queue where the object pointer is located is deleted.
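All four migrations of Figs. 4 to 7 are instances of one operation: lock the two queues involved, pop the head of the first-state queue, return its pointer, and append it to the tail of the adjacent second-state queue. A self-contained C++ sketch of that operation (names are assumptions) is:

```cpp
#include <array>
#include <list>
#include <mutex>

struct MemObject;                                     // step-S3 structure

enum class State { Idle = 0, Preprocess, Running, PreReset };

struct StateQueue {
    std::list<MemObject*> objects;                    // linked-list object
    std::mutex            mtx;                        // thread mutex object
};

std::array<StateQueue, 4> queues;                     // one queue per state

// Return the pointer held by the first memory object of the first-state
// queue, append it to the tail of the adjacent second-state queue, and
// delete it from the first-state queue (step S6).
MemObject* AcquireAndMigrate(State first) {
    State second = static_cast<State>((static_cast<int>(first) + 1) % 4);
    StateQueue& src = queues[static_cast<int>(first)];
    StateQueue& dst = queues[static_cast<int>(second)];

    std::scoped_lock lock(src.mtx, dst.mtx);          // both queues "no signal"
    if (src.objects.empty()) return nullptr;          // nothing to hand out

    MemObject* obj = src.objects.front();             // first memory object
    src.objects.pop_front();                          // delete from first queue
    dst.objects.push_back(obj);                       // tail of second queue
    return obj;                                       // handed to the applicant
}
```

With this single primitive, the transitions of Figs. 4 to 7 correspond to AcquireAndMigrate(State::Idle), AcquireAndMigrate(State::Preprocess), AcquireAndMigrate(State::Running) and AcquireAndMigrate(State::PreReset), respectively.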
As an embodiment, the method further comprises:
and in the operation of calling the memory space or resetting the length of the linked list or the size of the memory block, deleting and releasing all the memory objects in the original memory space.
In this embodiment, the method further includes:
when the applicant applies to call the memory space in the different states, the mutex object of the corresponding state queue is set to the non-signaled state, and is reset to the signaled state after the operation is finished.
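The "no signal / signal" wording matches Win32 mutex semantics, where an owned mutex is non-signaled and becomes signaled again on release; the sketch below is written under that assumption, and any other mutual-exclusion primitive with lock/unlock semantics would serve equally.

```cpp
#include <windows.h>

// One mutex object per state queue, created elsewhere (e.g. with CreateMutex).
HANDLE gQueueMutex[4];

void WithQueueLocked(int stateIndex, void (*operation)()) {
    WaitForSingleObject(gQueueMutex[stateIndex], INFINITE);  // mutex -> no signal
    operation();                                             // migrate / read / write
    ReleaseMutex(gQueueMutex[stateIndex]);                   // mutex -> signaled again
}
```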
On the other hand, the embodiment of the application also provides a specific implementation mode of the method, which comprises the following steps:
(1) Interface encapsulation: the number of blocks and the block size are taken as input parameters of the management object; an initialization interface function is encapsulated that returns the management object through a pointer return value or a reference parameter; for each state chain, get and return interface functions are encapsulated that pass the memory information structure object pointer as a return value or a reference parameter; and the release procedure of the management object is encapsulated as a release interface function.
(2) Initialization: the initialization function is called to initialize the management object.
(3) Acquiring idle memory: the idle-queue interface function is called to apply for the object pointer of an idle memory information structure; the memory block it contains is partitioned and assigned, and identification information such as the name and remarks is set.
(4) Using the memory: the user calls the return interface of the in-use state queue to transfer the partitioned and assigned memory information structure object to the in-use state queue.
(5) Returning the memory: the user calls the get interface of the in-use state queue to take out the object pointer of a memory information structure that has finished being used, and then calls the return interface function of the pre-reset state queue to migrate the object to the pre-reset state queue; when the memory block needs to be released completely, the return function of the idle state queue is called to transfer the object to the idle state queue.
(6) Releasing the memory management object: the user calls the release interface to delete the management object in use and to delete all memory information structure objects allocated within it, including the memory addresses held in those objects.
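The interface shape described in steps (1) to (6) might be declared as follows; the class and method names are assumptions, the patent describes the interfaces only functionally, and the bodies are intentionally omitted.

```cpp
#include <cstddef>

struct MemObject;   // memory information structure body (step S3)

// Sketch of the interface encapsulation of steps (1) to (6).
class ChainedMemoryManager {
public:
    // (1)/(2) block count and block size as input parameters; initialization.
    bool Init(std::size_t blockCount, std::size_t blockSize);

    // (3) obtain the pointer of an idle memory information structure object.
    MemObject* GetIdle();

    // (4) hand a partitioned and assigned object over to the in-use queue.
    void ReturnToRunning(MemObject* obj);

    // (5) take a used object out of the in-use queue, park it in the
    //     pre-reset queue, and later move it back to the idle queue.
    MemObject* GetUsed();
    void ReturnToPreReset(MemObject* obj);
    void ReturnToIdle(MemObject* obj);

    // (6) delete the management object and every memory information structure
    //     object it allocated, including the memory addresses they hold.
    void Release();
};
```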
An embodiment of the present application also provides a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method for managing chained memory based on a state machine described above.
It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the application to enable those skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It will be understood that the application is not limited to what has been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (7)

1. A method for managing chained memories based on a state machine, comprising the steps of:
creating state machine variables, wherein the state machine variables comprise four state queues of idle, preprocessing, running and pre-resetting, and the adjacent state queues are sequentially and circularly called according to the sequence of idle, preprocessing, running and pre-resetting;
recording corresponding linked list objects and thread mutex objects of the state machine variables in different states, and setting each linked list object in a clearing state;
creating a specified number of memory objects according to preset parameters, wherein each memory object is a memory information structure body comprising a memory sequence number and a memory block address;
requesting to allocate memory spaces with the same number and size as the memory objects, and recording the first address of each memory block in the memory space into a memory block address, wherein the memory block address is a memory block address corresponding to a memory sequence number in the memory objects;
storing the memory object recording the memory block head address to any state queue of the state machine;
when an applicant applies to call a memory space of the state machine variable in any state queue, an object pointer contained in a first memory object in a first state queue is returned to the applicant, and is migrated to a tail position of a second state queue, and the memory object in the first state queue where the object pointer is located is deleted, wherein the first state is any state of the state machine variable, and the second state is the next state adjacent to the first state.
2. The method of claim 1, wherein,
when the applicant applies to call the memory space in the idle state queue, the object pointer contained in the first memory object in the idle state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the preprocessing state queue, and the memory object in the idle state queue where the object pointer is located is deleted.
3. The method of claim 2, wherein,
when an applicant applies to call a memory space under a preprocessing state queue, an object pointer contained in a first memory object in the preprocessing state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the running state queue, and the memory object in the preprocessing state queue where the object pointer is located is deleted.
4. The method for managing chained memory based on a state machine as set forth in claim 3, wherein,
when an applicant applies for calling a memory space under the running state queue, an object pointer contained in a first memory object in the running state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the pre-reset state queue, and the memory object in the running state queue where the object pointer is located is deleted.
5. The method of claim 4, wherein,
when an applicant applies to call a memory space in the pre-reset state queue, an object pointer contained in a first memory object in the pre-reset state queue is returned to the applicant, and meanwhile, the object pointer is migrated to the tail position of the idle state queue, and the memory object in the pre-reset state queue where the object pointer is located is deleted.
6. The method of state machine based management of chained memory according to claim 1, further comprising:
and in the operation of calling the memory space or resetting the length of the linked list or the size of the memory block, deleting and releasing all the memory objects in the original memory space.
7. The method of state machine based management of chained memory according to claim 1, further comprising:
when the applicant applies to call the memory space in the different states, the mutex object of the corresponding state queue is set to the non-signaled state, and is reset to the signaled state after the operation is finished.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011604317.8A CN112596889B (en) 2020-12-29 2020-12-29 Method for managing chained memory based on state machine


Publications (2)

Publication Number Publication Date
CN112596889A (en) 2021-04-02
CN112596889B (en) 2023-09-29

Family

ID=75203972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011604317.8A Active CN112596889B (en) 2020-12-29 2020-12-29 Method for managing chained memory based on state machine

Country Status (1)

Country Link
CN (1) CN112596889B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010037568A (en) * 1999-10-18 2001-05-15 서평원 method for memory management in switching system
WO2014094472A1 (en) * 2012-12-17 2014-06-26 华为技术有限公司 Global memory sharing method and device and communication system
CN110209493A (en) * 2019-04-11 2019-09-06 腾讯科技(深圳)有限公司 EMS memory management process, device, electronic equipment and storage medium
CN111679914A (en) * 2020-06-12 2020-09-18 北京字节跳动网络技术有限公司 Memory management method, system, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8843682B2 (en) * 2010-05-18 2014-09-23 Lsi Corporation Hybrid address mutex mechanism for memory accesses in a network processor


Also Published As

Publication number Publication date
CN112596889A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN105893126B (en) A kind of method for scheduling task and device
JP4750350B2 (en) Task switching device, method and program
US4796178A (en) Special purpose processor for off-loading many operating system functions in a large data processing system
CN112214313B (en) Memory allocation method and related equipment
JPH0570177B2 (en)
JPH07175698A (en) File system
JPH0551942B2 (en)
CN113535363A (en) Task calling method and device, electronic equipment and storage medium
CN104424030B (en) Method and device for sharing memory by multi-process operation
CN112860458B (en) Inter-process communication method and system based on shared memory
CN109308269B (en) Memory management method and device
CN111324427A (en) Task scheduling method and device based on DSP
CN109298888B (en) Queue data access method and device
CN115576716A (en) Memory management method based on multiple processes
CN112596889B (en) Method for managing chained memory based on state machine
CA1299758C (en) Task scheduling mechanism for large data processing systems
CN104572483B (en) Dynamic memory management device and method
WO2020005597A1 (en) Managing global and local execution phases
US20140289739A1 (en) Allocating and sharing a data object among program instances
WO2021227789A1 (en) Storage space allocation method and device, terminal, and computer readable storage medium
CN117112246A (en) Control device of spin lock
CN116661690A (en) Method, device, computer equipment and storage medium for recording memory state
WO2017142525A1 (en) Allocating a zone of a shared memory region
JP7217341B2 (en) How processors and registers are inherited
US20200004577A1 (en) Managing global and local execution phases

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant