CN114063894A - Coroutine execution method and coroutine execution device - Google Patents

Coroutine execution method and coroutine execution device

Info

Publication number
CN114063894A
Authority
CN
China
Prior art keywords
data
memory
pool
storage
coroutine
Prior art date
Legal status
Pending
Application number
CN202011188682.5A
Other languages
Chinese (zh)
Inventor
赵冬梅
崔文林
罗四维
马腾
鲁凤成
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN114063894A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/0647 Migration mechanisms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5018 Thread allocation

Abstract

The application provides a coroutine execution method and device. The method includes: generating an error signal when running a first coroutine, where the first coroutine is used to access first data in a memory pool; in response to the error signal, migrating the first data from a storage pool to the memory pool; and during the process of migrating the first data from the storage pool to the memory pool, switching the first coroutine to a second coroutine as the currently executed coroutine. By this method, the running of the thread is not blocked while the first data is migrated to the memory pool, which alleviates the problem of unstable IO latency and improves the performance and stability of the system.

Description

Coroutine execution method and coroutine execution device
Cross Reference to Related Applications
The present application claims priority to Chinese Patent Application No. 202010786176.X, entitled "a storage system and data storage method", filed with the Chinese Patent Office on 07/08/2020, which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of computer technologies, and in particular, to a coroutine execution method and apparatus.
Background
With the development of IT technology, more and more data needs to be processed by computers, and applications require more and more memory. In the prior art, when memory is insufficient, part of the memory data can be migrated to a hard disk. This temporarily alleviates the shortage of memory, but when a program accesses the migrated data in the memory, an exception is often generated, which interrupts the program.
Disclosure of Invention
The application provides a coroutine execution method and device, which are used for reducing thread interruption.
In a first aspect, a coroutine execution method is provided and applied to a storage system, where the storage system includes a memory pool and a storage pool. The method includes: generating an error signal when running a first coroutine, where the first coroutine is used to access first data in the memory pool; migrating the first data from the storage pool to the memory pool in response to the error signal; and during the process of migrating the first data from the storage pool to the memory pool, switching the first coroutine to a second coroutine as the currently executed coroutine.
In the above method, in response to the error signal, the first coroutine can be switched to the second coroutine and the second coroutine is executed, while the first data is asynchronously migrated to the memory pool. The running of the thread is therefore not blocked, the problem of unstable IO (input/output) latency is reduced, and the performance and stability of the system are improved.
In one possible design, before running the first coroutine, the method further includes: migrating the first data from the memory pool to the storage pool according to the utilization rate of the memory pool and/or the hot and cold information of the first data.
In the above method, the utilization rate of the memory pool and the hot and cold information of the first data are combined: cold data can be migrated out according to the hot and cold information of the first data while hot data is retained. This maintains the resource stability of the memory pool, takes into account the stability of accessing data in the memory pool, and avoids frequently migrating data back and forth.
In one possible design, the hot and cold information of the first data is configured by an application to which the first data corresponds.
In the above method, the hot and cold information of the first data may also be configured, for example, by a process corresponding to the first data, by a physical page corresponding to the first data, or by a structure corresponding to the first data. This provides multiple configuration modes with high flexibility, avoids determining the hotness of data simply by counting access frequency through an algorithm such as LRU, and ensures that data which is accessed at a low frequency but has a high reliability requirement is not easily migrated out.
In one possible design, the storage pool includes a plurality of different types of storage located in one or more computing devices of the storage system.
In the above method, a computing device may use the storage resources of other computing devices in the storage system, which supports migrating data from its memory to other computing devices and achieves the effect of global flow of storage resources.
In one possible design, the different types of storage include: DRAM, SCM, AEP, and hard disk.
In one possible design, the memory pool includes one or more first memories and one or more second memories, where the performance of the first memory is higher than that of the second memory; the hot and cold information of the first data is used to indicate the hotness of the first data; if the hotness of the first data is higher than a first preset value, the first data is stored in the first memory; and if the hotness of the first data is lower than the first preset value, the first data is stored in the second memory.
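As a purely illustrative sketch (not part of the claimed design), the following C++ fragment shows what such threshold-based placement could look like; the tier names, the 0-100 hotness scale, and the function name are assumptions made for the example.

```cpp
#include <cstdint>

// Illustrative only: the two memory tiers of the memory pool in this design.
enum class MemoryTier { FirstMemory /* higher performance */, SecondMemory /* lower performance */ };

// Assumed: the hotness of the first data on a 0..100 scale and the first preset
// value acting as the placement threshold; neither scale is defined by the application.
MemoryTier PlaceFirstData(uint32_t hotness, uint32_t first_preset_value) {
    if (hotness > first_preset_value) {
        return MemoryTier::FirstMemory;    // hotter data stays in the faster first memory
    }
    return MemoryTier::SecondMemory;       // colder data is stored in the second memory
}
```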
In one possible design, the first coroutine is a thread created at the user level for processing a task.
In a second aspect, a computing device is provided, including: an execution unit, configured to generate an error signal when running a first coroutine, where the first coroutine is used to access first data in the memory pool; and a processing unit, configured to migrate the first data from the storage pool to the memory pool in response to the error signal, where the processing unit is further configured to switch the first coroutine to a second coroutine as the currently executed coroutine during the process of migrating the first data from the storage pool to the memory pool.
In one possible design, before the execution unit runs the first coroutine, the processing unit is further configured to: migrate the first data from the memory pool to the storage pool according to the utilization rate of the memory pool and/or the hot and cold information of the first data.
In one possible design, the hot and cold information of the first data is configured by an application to which the first data corresponds.
In one possible design, the storage pool includes a plurality of different types of storage, and the plurality of different types of storage are located in one or more computing devices of the storage system.
In a third aspect, a computing device is provided, where the device includes a processor configured to implement the method described in the first aspect above. The computing device may also include a memory for storing program instructions and data. The memory is coupled to the processor, and the processor can call and execute the program instructions stored in the memory to implement any one of the methods described in the first aspect above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method of any one of the first aspects.
In a fifth aspect, the present application provides a computer program product, which stores a computer program, the computer program comprising program instructions, which, when executed by a computer, cause the computer to perform the method of any one of the first aspect.
In a sixth aspect, the present application provides a chip system, which includes a processor and may further include a memory, and is configured to implement the method of the first aspect. The chip system may consist of a chip, or may include a chip and other discrete devices.
In a seventh aspect, an embodiment of the present application provides a storage system, where the storage system includes a storage pool whose resources are provided by at least two computing devices, the at least two computing devices including a first computing device and a second computing device. The first computing device generates an error signal when running a first coroutine, where the first coroutine is used to access first data in the memory pool; migrates the first data from the storage pool to the memory pool in response to the error signal; and switches the first coroutine to a second coroutine as the currently executed coroutine during the process of migrating the first data from the storage pool to the memory pool, where the location of the first data in the storage pool is on the first computing device or the second computing device.
For the advantageous effects of the second to seventh aspects and their implementations, reference may be made to the description of the advantageous effects of the method of the first aspect and its implementations.
Drawings
Fig. 1 is a schematic diagram of a possible system architecture provided by an embodiment of the present application;
fig. 2 is a schematic internal structural diagram of a computing device according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a relationship between coroutines and threads according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an architecture of a memory pool and a storage pool provided by an embodiment of the present application;
FIG. 5 is a schematic illustration of a hierarchy included in a storage pool provided by an embodiment of the present application;
fig. 6 is a schematic flowchart corresponding to a coroutine execution method according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of water levels in a memory pool provided by an embodiment of the present application;
fig. 8 is a schematic block diagram of a coroutine handover according to an embodiment of the present disclosure;
fig. 9 is a schematic flowchart of a coroutine handover provided in an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a computing device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of another computing device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments of the present invention will be described below with reference to the accompanying drawings.
The network architecture and the service scenario described in the embodiments of the present invention are intended to more clearly illustrate the technical solutions of the embodiments of the present invention, and do not constitute a limitation on the technical solutions provided by the embodiments of the present invention. Those skilled in the art will appreciate that, with the evolution of network architectures and the appearance of new service scenarios, the technical solutions provided by the embodiments of the present invention are also applicable to similar technical problems.
Referring to fig. 1, a schematic diagram of an architecture of a storage system provided in an embodiment of the present application is shown. The storage system includes a plurality of computing devices (three computing devices 10a, 10b, and 10c are illustrated in fig. 1, but the application is not limited to three). A computing device may be, for example, a server, a desktop computer, or a notebook computer, which is not limited in this application.
Taking one computing device (e.g., computing device 10a) of fig. 1 as an example, and with continued reference to fig. 1, the computing device 10a includes at least a processor 101, a memory 102, and a communication interface 103. The processor 101, the memory 102 and the communication interface 103 communicate with each other via a communication bus.
The processor 101 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured as an ASIC.
The storage 102 is used for storing data and may be a memory or a hard disk. The memory is an internal storage that exchanges data directly with the processor; it can be read and written at any time, it is fast, and it serves as temporary data storage for running applications. The memory includes at least two types of memories, such as Random Access Memory (RAM) and Read Only Memory (ROM). For example, the RAM may be a Dynamic Random Access Memory (DRAM) or an AEP. DRAM is a semiconductor memory and, like most RAM, belongs to volatile memory (volatile memory) devices. AEP can provide persistent storage, can provide a larger capacity, and can approach the read and write speed of DRAM.
However, DRAM and AEP are only examples in this embodiment, and the memory may also include other random access memories, such as Storage Class Memory (SCM), which is a hybrid storage technology combining the features of conventional storage devices and memories; it provides a faster read and write speed than a hard disk, but operates more slowly than DRAM and is cheaper than DRAM. As for the ROM, for example, a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), or XL-Flash may be used. It should be noted that the above-mentioned memories are merely examples and do not mean that the computing device does not include other types of memories.
The storage in the present application may also be a hard disk. Unlike a memory, a hard disk reads and writes data more slowly and is generally used for persistently storing data. Taking computing device 10a as an example, one or more hard disks may be disposed in it; alternatively, a hard disk frame (as shown in fig. 2) may be mounted outside computing device 10a, and a plurality of hard disks may be disposed in the hard disk frame. In either arrangement, these hard disks may be regarded as hard disks included in computing device 10a. The hard disk types include at least a Solid State Disk (SSD), a mechanical hard disk (HDD), a QLC disk, or other types of hard disks; for example, XL-FLASH may also be used as the hard disk in practice.
Communication interface 103 is configured to provide efficient network transmission capabilities, for example via a Network Interface Controller (NIC), enabling computing device 10a to communicate with external or internal devices. In the storage system shown in fig. 1, each computing device may communicate with any other computing device. For example, computing device 10a may communicate with computing device 10b through communication interface 103, or with computing device 10c through communication interface 103. Of course, taking the communication system shown in fig. 2 as an example, a computing device may also communicate with other devices through its communication interface; for example, referring to fig. 2, computing device 10a may communicate with the hard disk frame through communication interface 103 to access any hard disk in the hard disk frame, and correspondingly, the hard disk frame communicates with computing device 10a through communication interface 104. Other examples are not listed here.
Taking computing device 10a as an example, computing device 10a may further include a memory pool, which may illustratively consist of some or all of the memory-type storage of computing device 10a. The memory pool is dedicated to computing device 10a, and each computing device may have its own dedicated memory pool. The storage system may also include a storage pool; for example, computing devices in the storage system with low memory requirements or low computation load may contribute part or even all of their memory-type storage to the storage pool. The storage pool is shared by the computing devices within the storage system, that is, it may be common to each computing device. The storage pool is used for persistently storing data, in particular data with a low access frequency, and the memory pool is used for temporarily storing data, in particular data with a high access frequency.
The storage pool provided by the present embodiment is described below. Fig. 4 is a schematic architecture diagram of a memory pool and a storage pool provided in the present application. The memory pool of computing device 10a is composed of all the memory-type storage of computing device 10a, the memory pool of computing device 10b is composed of part of the memory-type storage of computing device 10b, and similarly, the memory pool of computing device 10c is composed of part of its own memory-type storage. For convenience of description, fig. 4 illustrates memories including DRAM and AEP as an example. Of course, the AEP may be replaced by other memories, such as SCM, which is not limited in this application.
The storage pool shown in FIG. 4 is composed of the hard disks of computing device 10a, computing device 10b, and computing device 10c, together with a portion of memory. The storage pool contains a plurality of different types of storage, and each type of storage can be regarded as a hierarchy; the performance of the storage of each hierarchy differs from that of the other hierarchies. The performance of storage in the present application is mainly considered in terms of operation speed and/or access latency. The hierarchies of the storage pool are described below taking computing device 10a as an example; FIG. 5 is a schematic diagram of the hierarchies included in the storage pool provided herein. As shown in fig. 5, the storage pool may be composed of storage in each computing device. For computing device 10a, the local AEP has higher performance than the storage of the other computing devices in the storage system; it has a faster read-write speed and a larger capacity and may serve as the first hierarchy of the storage pool. The memory-type storage of the remote computing devices (including computing device 10b and computing device 10c) may serve as the second hierarchy of the storage pool. Further, the hard disk local to computing device 10a and the hard disks of the remote computing devices have the lowest performance and may serve as the third hierarchy of the storage pool. It should be understood that fig. 5 divides the hierarchy of the storage pool from the perspective of computing device 10a. For computing device 10b, the hierarchy of the same storage pool is: the AEP local to computing device 10b as the first hierarchy, the memory of the remote computing devices as the second hierarchy, and the hard disk local to computing device 10b together with the hard disks of the remote computing devices as the third hierarchy. Similarly, other computing devices may divide the hierarchies of the storage pool according to the performance they measure locally for the respective storage of the storage pool, and the description is not repeated here.
Of course, fig. 5 only illustrates a hierarchical structure of the storage pool by taking some types of storage as an example; the present application does not limit the types of storage included in the storage pool or the number of hierarchies. For example, on the basis of fig. 5, the second hierarchy in fig. 5 may be further subdivided according to the performance of the storage contained in the memory of the remote computing devices, for example with the DRAM of the remote computing devices as one hierarchy and the AEP of the remote computing devices as another; similarly, the third hierarchy in fig. 5 may be further subdivided according to the performance of the hard disks, which is not listed here. In addition, the hierarchy of the storage pool is only an internal division and is not perceived by upper-layer applications. It should be noted that although the hard disk on a computing device and the hard disks on other computing devices belong to the same hierarchy, using the local hard disk gives higher performance than using the hard disks of other computing devices. Therefore, when a computing device applies for storage space of a certain hierarchy, the storage space that locally belongs to that hierarchy is preferentially allocated to it, and only when the local space is insufficient is storage space allocated from the same hierarchy of other computing devices.
In this embodiment, the storage pool may also include only the hard disks of each computing device, which is not limited in the embodiment of the present application. In addition, the storage pools shown in fig. 4 or fig. 5 are merely examples and do not mean that each computing device in the storage system must contribute storage space to the storage pool; the storage pool may cover only a portion of the computing devices in the storage system. The same storage system may also create one or more storage pools, and the computing devices occupied by different storage pools may or may not overlap. In summary, the storage pool in this embodiment is established across at least two computing devices and contains storage space derived from at least two different types of storage.
The memory pool, which may for example store data in "pages", may be created by the computing device itself. As for the creation of the storage pool: each computing device periodically reports the state information of its storage to a management node through a heartbeat channel. One management node may be deployed, or a plurality of management nodes may be deployed. The state information of the storage includes, but is not limited to: the types of the various storage included in the computing device, the health status of the various storage, the total capacity of each type of storage, the available capacity, and so on. The management node creates the storage pool according to the collected information, where creation means that the storage spaces provided by the computing devices are gathered together and managed uniformly as the storage pool. In some scenarios, a computing device may selectively provide storage to the storage pool based on its own circumstances, such as the health status of the storage. In other words, it is possible that some storage in some computing devices is not part of the storage pool. For example, a portion of the DRAM in computing device 10b may be contributed to the pool, while the remaining portion of the DRAM may serve as memory dedicated to computing device 10b.
After the information is collected, the management node needs to perform unified addressing on the storage space included in the storage pool. With unified addressing, each segment of space in the storage pool has a unique global address. The space indicated by a global address is unique in the storage pool, and each computing device knows the meaning of the address. After physical space is allocated for a segment of space in the storage pool, the global address of that space has a corresponding physical address, which indicates on which storage of which computing device the space represented by the global address is actually located, and the offset within that storage, i.e., the location of the physical space. Each segment of space is also referred to as a "page". The size of a page in the memory pool and the size of a page in the storage pool may be the same or different; for convenience of data migration, the following description of the present application assumes that the two page sizes are equal.
The management node may allocate physical space for each global address after the storage pool is created, or may allocate physical space for the global address corresponding to a write data request when the write data request is received. The correspondence between each global address and its physical address is recorded in an index table, which the management node synchronizes to each computing device. Each computing device stores the index table so that it can subsequently query the physical address corresponding to a global address when reading or writing data. Specifically, taking computing device 10a as an example, computing device 10a may access other storage in the storage pool through the communication interface or in a disk pass-through manner, which is not limited in this embodiment of the application.
The index table is mainly used for recording the correspondence between global addresses and physical addresses, and may also be used to record attribute information of the data, for example the hot and cold information of the data whose global address is 000001, whether it is occupied, and the like. Migration of data among the various types of storage can then be implemented according to the attribute information, or attributes can be set, and so on. It should be understood that the attribute information of the data is only an option of the index table and is not necessarily recorded.
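For illustration only, a minimal sketch of such an index table is given below; the field names, the hash-map layout, and the 64-bit global address are assumptions of the sketch, not structures mandated by this embodiment.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// Physical location of a page: which device, which storage, and the offset inside it.
struct PhysicalAddress {
    uint32_t device_id;   // computing device that actually holds the page
    uint32_t storage_id;  // which memory or hard disk on that device
    uint64_t offset;      // byte offset inside that storage
};

// Optional per-page attributes (an option of the index table, not mandatory).
struct PageAttributes {
    uint8_t hotness;      // hot and cold information
    bool    occupied;     // whether the page is currently occupied
};

struct IndexEntry {
    PhysicalAddress phys;
    std::optional<PageAttributes> attrs;
};

// Index table synchronized by the management node to every computing device:
// global address -> physical address (+ attributes).
using IndexTable = std::unordered_map<uint64_t, IndexEntry>;

// Each computing device resolves a global address locally before reading or writing.
std::optional<PhysicalAddress> Resolve(const IndexTable& table, uint64_t global_addr) {
    auto it = table.find(global_addr);
    if (it == table.end()) return std::nullopt;
    return it->second.phys;
}
```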
When a new computing device joins the storage system, the management node collects node update information, brings the new computing device into the storage pool, addresses the storage space contained in the computing device, generates new global addresses, and refreshes the correspondence between physical addresses and global addresses. This capacity expansion also applies to the case where memories or hard disks are added to some computing devices: the management node periodically collects the state information of the storage contained in each computing device, and if new storage is added, it brings the new storage into the storage pool and addresses the new storage space, thereby generating new global addresses and refreshing the correspondence between physical addresses and global addresses. Similarly, the storage pool provided by this embodiment also supports capacity reduction, for which only the correspondence between global addresses and physical addresses needs to be updated.
The storage system and computing devices have been introduced above at the hardware level. At the software level, a computing device runs software programs, and different software programs have different functions. Examples of software programs include an Operating System (OS) and application programs (applications). Computing devices run different applications to implement different functions. Those skilled in the art will appreciate that the functions of an application are implemented by the processor of the computing device running a program in memory.
In actual use, programs and data in the storage pool need to be written into the local memory first, and the processor then reads the data directly from the local memory. That is, data in the storage pool needs to be written into memory before it can be used by the processor. To solve the problem of insufficient memory, one practicable way is for the computing device to control data to flow between the local memory pool and the storage pool. For example, when the memory pool of the computing device is insufficient, part of the space in the memory pool may be released; the released space may come from some program (denoted as program A) that has not run for a long time, and the released data may be temporarily stored in the storage pool. When program A later accesses that data in the memory pool, an error signal is generated because the data in the memory pool has already been released. The error signal may specifically be a page fault signal, because the memory data is missing and the memory pool stores data at page granularity. The kernel of the computing device may then move the data from the storage pool back into the memory in response to the page fault signal.
As can be understood by those skilled in the art, the application program runs in user mode. When responding to the page fault signal, the processor switches from user mode to kernel mode, and the thread running the application program suspends execution; only after the data has been moved back from the hard disk to the memory is execution of the current application program resumed, which causes unstable IO latency and delays.
In view of this, an embodiment of the present application provides a coroutine execution method. For example, an error signal is generated when running a first coroutine, where the first coroutine is used to access first data in the memory pool; the first data is migrated from the storage pool to the memory pool in response to the error signal; and during the process of migrating the first data from the storage pool to the memory pool, the first coroutine is switched to a second coroutine as the currently executed coroutine. By this method, during the process of migrating the data back to the memory pool, execution can be switched to the second coroutine, so that the running of the thread is not blocked and the problem of unstable IO latency is reduced as much as possible.
In order to make the present application easier to understand, some basic concepts related to the embodiments of the present application are explained first below. It should be noted that these explanations are for the convenience of those skilled in the art, and do not limit the scope of protection claimed in the present application.
One, user mode, kernel mode
User mode and kernel mode refer to running states of the operating system. At present, hierarchical protection is generally implemented in computer systems; that is, according to how seriously an operation can affect the computer system, certain operations must be executed by roles with corresponding permissions. For example, operations such as directly accessing hardware or modifying the hardware working mode require the highest permission.
The protection of the computer needs the cooperation of the CPU and the operating system. A modern CPU generally provides multiple operation permission levels, and the operating system is generally divided into multiple running states to match the CPU. The common states of an operating system are user mode and kernel mode. The kernel mode generally has the highest permission, and the CPU allows all instructions and operations to be executed in it. The user mode generally has lower permission; in this state a software program can only execute limited instructions and operations, and high-risk operations, such as configuring the internal control registers of the CPU or accessing memory addresses of the kernel portion, are not allowed by the CPU hardware. When the operating system needs to execute programs under different permissions, it usually switches the permission state of the CPU to the corresponding state and then executes the corresponding program.
Two, process
A process may refer to a running activity of a program with a certain independent function and is the carrier in which an application program runs; a process may also be regarded as a running instance of an application program, a dynamic execution of the application program. For example, when a user runs a Notepad program (Notepad), the user creates a process to accommodate the code that makes up Notepad. A process is an independent unit for resource allocation and scheduling by the system; each process has its own memory space, and different processes are independent of each other.
Three, thread
A process may include at least one thread; the threads share the resources (e.g., memory space) of the same process, and each of the at least one thread concurrently performs a different task. For example, one thread may be used to write a file to disk, while another thread receives the user's key operations and reacts in time, and the threads do not interfere with each other. Concurrency means that the time resource is divided into continuous time slices of equal length, different time slices are allocated to the multiple threads of the same process, each thread can only run in the time slice allocated to it, and the threads execute alternately; because the duration of a time slice is very short, the effect of parallel execution is achieved. Specifically, the thread states at least include: runnable, running, blocked, and so on. The runnable state may refer to a state in which a thread waits for the time slice belonging to it to arrive. The running state may refer to a state in which a thread runs in the time slice belonging to it. Blocked may mean that the thread is interrupted by other operations while running. For example, the application program runs in user mode; when the processor performs interrupt processing, for example when responding to a page fault signal in the present application (which may also be understood as a page fault interrupt), the operating system falls into kernel mode, thereby interrupting the running of the thread. Correspondingly, the current thread of the application program is then in the blocked state, and when the operating system switches back to user mode, the thread can continue to run.
Threads are scheduled by the operating system. For the threads allocated to applications in user space, the operating system needs to perform thread context switching and switch the operation permission level of the CPU hardware when switching threads, and these operations are time-consuming. To reduce the overhead caused by a large amount of thread switching, coroutines are introduced. A coroutine can be understood as a lightweight thread, i.e., a representation of a thread created in user space for processing tasks. Because coroutines run in user space, coroutine switching can be performed in user space, which reduces the overhead caused by switching a large number of threads.
For example, a process may create multiple threads, and similarly a thread may create multiple coroutines. That is, a thread is the carrier of coroutines, and one thread may be shared by multiple coroutines. Fig. 3 is a schematic diagram of the relationship between coroutines and threads. As shown in fig. 3, a coroutine may be regarded as a group of functions: several sub-functions form a relatively independent task, the functions within a coroutine have calling relationships, and coroutines also have jump-calling relationships between them. However, the address of the next coroutine to call is not specified by the current coroutine but is selected and determined by a scheduler. The scheduler may be regarded as a special task that exists all the time and does not exit. When the current coroutine finishes running, execution automatically jumps to the context of the scheduler, and the scheduler selects a suitable coroutine from the task queue (which contains the coroutines currently waiting to be executed) and jumps to the context of that coroutine to run it.
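The scheduler/task-queue relationship described above can be made concrete with a small user-space example. The sketch below uses POSIX ucontext on Linux purely for illustration; the two fixed coroutines, the stack sizes, and the FIFO task queue are assumptions of the example and not part of this application.

```cpp
#include <ucontext.h>
#include <cstdio>
#include <deque>

// Very small cooperative scheduler: the scheduler context picks the next coroutine
// from a task queue and swaps into it; a coroutine yields by swapping back.
static ucontext_t scheduler_ctx;
static ucontext_t coroutine_ctx[2];
static char stacks[2][64 * 1024];
static std::deque<int> task_queue = {0, 1};   // coroutines currently waiting to be executed

static void Yield(int id) {
    task_queue.push_back(id);                 // re-queue ourselves
    swapcontext(&coroutine_ctx[id], &scheduler_ctx);
}

static void CoroutineBody(int id) {
    std::printf("coroutine %d: step 1\n", id);
    Yield(id);                                // give the CPU back to the scheduler
    std::printf("coroutine %d: step 2\n", id);
}                                             // returning ends the coroutine (via uc_link)

int main() {
    for (int id = 0; id < 2; ++id) {
        getcontext(&coroutine_ctx[id]);
        coroutine_ctx[id].uc_stack.ss_sp   = stacks[id];
        coroutine_ctx[id].uc_stack.ss_size = sizeof(stacks[id]);
        coroutine_ctx[id].uc_link          = &scheduler_ctx;  // return here when done
        makecontext(&coroutine_ctx[id], (void (*)())CoroutineBody, 1, id);
    }
    // Scheduler loop: keep picking a runnable coroutine from the task queue.
    while (!task_queue.empty()) {
        int next = task_queue.front();
        task_queue.pop_front();
        swapcontext(&scheduler_ctx, &coroutine_ctx[next]);
    }
    return 0;
}
```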
Four, synchronous and asynchronous
For example, synchronization means that when program 1 calls program 2, program 1 stops executing, and program 1 resumes executing only after program 2 finishes and returns to program 1. In other words, the call from program 1 to program 2 does not return immediately; it returns only after program 2 has returned a response message.
Asynchrony means that when program 1 calls program 2, program 1 continues with its next action and is not affected by program 2. In other words, program 1 returns immediately after calling program 2 without waiting for a response message.
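As a small illustration of the difference, the following C++ snippet calls the same function once synchronously and once asynchronously via std::async; the function name Program2 is a placeholder standing in for "program 2".

```cpp
#include <cstdio>
#include <future>

int Program2() { return 42; }   // stands in for "program 2"

int main() {
    // Synchronous: program 1 stops and only continues after program 2 returns.
    int sync_result = Program2();
    std::printf("sync result: %d\n", sync_result);

    // Asynchronous: program 1 continues immediately; the result is collected later.
    std::future<int> pending = std::async(std::launch::async, Program2);
    std::printf("program 1 keeps doing its next action...\n");
    std::printf("async result: %d\n", pending.get());
    return 0;
}
```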
The coroutine execution method provided by the present embodiment is described in detail below with reference to specific drawings.
Referring to fig. 6, a flowchart of a coroutine execution method provided in this embodiment of the present application is schematically shown, where the method may be applied to any one of the computing devices in the storage system, for example, the computing device 10a, and the method may include the following steps:
Step 601, a first coroutine is run, where the first coroutine is used for accessing first data in the memory pool.
Specifically, the present application maintains a first mapping relationship between memory pages in the memory pool and logical addresses. Assume that the logical address of the first data is a first logical address, and that the first logical address corresponds to a first memory page in the memory pool.
In actual use, the correspondence between memory pages and the data stored in them is recorded by a page table. When execution switches to the first coroutine, the kernel of the operating system stores the data for the first coroutine into the memory space allocated to the first coroutine, and updates the page table according to the memory pages corresponding to that memory space and the data stored in each memory page.
Step 602, migrating the first data from the memory pool to the storage pool.
Specifically, the first data may be migrated to the storage pool according to the utilization rate of the memory pool and/or the hot and cold information of the first data.
In a first implementable manner, data migration is controlled only according to the utilization rate of the memory pool. For example, when the utilization rate of the memory pool is lower than a first preset value, data may not be migrated to the storage pool; when the utilization rate of the memory pool is higher than a second preset value (the second preset value is higher than the first preset value), data that is temporarily not being operated on in the memory pool, such as the first data, may be migrated to the storage pool.
In a second implementable manner, data migration is controlled only according to the hot and cold information of the first data, where the hot and cold information may be used to indicate the hotness of the data. For example, the hotness of the first data within one cycle may be counted according to a cache eviction algorithm such as the LRU algorithm; the data may be divided into cold data, warm data, hot data, and so on according to the hotness, and if the first data is cold data, the first data may be migrated from the memory pool to the storage pool. As another example, the hot and cold information of the first data may be determined according to the hot and cold configuration information of the object to which the data belongs. The object to which the data belongs may be an application, a process, a page, or a structure, and the configuration may be made per application, per process, per page, or per structure. For example, the data of application A is configured as hot data and the data of application B as cold data; or the data of process A is configured as cold data and the data of process B as hot data; or the data of a first page in the memory is configured as hot data, a second page as warm data, and a third page as cold data. Finally, the hotness of data may be configured at the granularity of a structure. Regarding structures: the write operation request issued by the operating system is usually 4 KB; in other words, the operating system usually issues operation requests using a 4 KB logical page, while in practical applications one memory page or storage page is usually 16 KB or 32 KB, so such a logical page may be written into a memory page or storage page as a structure. For example, if one memory page includes 4 structures, namely structure 1 to structure 4, then structure 1 and structure 2 can be set as hot data, structure 3 as warm data, and structure 4 as cold data. When data migration is subsequently controlled, the hotness of the data can be determined according to this configuration. This mode supports configuring the hotness of data by application, by process, or at finer granularity, and avoids the situation where key data that the LRU algorithm judges to have low hotness but that has high reliability requirements is swapped out. It thus achieves a finer-grained swap-out policy, improves the flexibility of swapping out, and improves the reliability of system operation.
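A hedged sketch of such per-object configuration is shown below; the four granularities follow the text, while the lookup order (finest granularity first, falling back to an LRU-style estimate) and all identifiers are assumptions of the example.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

enum class Hotness { Cold, Warm, Hot };

// Hot and cold configuration at several granularities; all names are illustrative.
struct HotColdConfig {
    std::unordered_map<std::string, Hotness> by_application;  // e.g. "application A" -> Hot
    std::unordered_map<uint32_t,    Hotness> by_process;      // process id -> Hotness
    std::unordered_map<uint64_t,    Hotness> by_page;         // memory page number -> Hotness
    std::unordered_map<uint64_t,    Hotness> by_struct;       // structure index within a page -> Hotness
};

// Resolution order is an assumption of this sketch: prefer the finest configured
// granularity and fall back to an LRU-style estimate when nothing is configured.
Hotness ResolveHotness(const HotColdConfig& cfg, const std::string& app, uint32_t pid,
                       uint64_t page, uint64_t struct_idx, Hotness lru_estimate) {
    if (auto it = cfg.by_struct.find(struct_idx); it != cfg.by_struct.end()) return it->second;
    if (auto it = cfg.by_page.find(page); it != cfg.by_page.end()) return it->second;
    if (auto it = cfg.by_process.find(pid); it != cfg.by_process.end()) return it->second;
    if (auto it = cfg.by_application.find(app); it != cfg.by_application.end()) return it->second;
    return lru_estimate;   // no configuration: fall back to the access-frequency estimate
}
```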
It should be understood that the above-mentioned manners of determining the hot and cold information and the division into hotness levels are only examples; for example, the hotness of data can also be expressed by numerical values (e.g., 0-1, or 0-100%, etc.), and the embodiment of the present application is not limited thereto. Hereinafter, for convenience of description, the hotness of data is expressed as cold data, warm data, and hot data.
In addition, the description of migrating cold data to the storage pool is merely an example; in fact, warm data or hot data may also be migrated to the storage pool. As described above, the storage pool in this embodiment may include a plurality of hierarchies, for example a first hierarchy, a second hierarchy, and a third hierarchy; for ease of understanding, the first hierarchy may be referred to as the hot medium, the second hierarchy as the warm medium, and the third hierarchy as the cold medium. In one implementation, if the first data is cold data, it may be migrated to the cold medium in the storage pool; if the first data is warm data, it may be migrated to the warm medium in the storage pool; and if the first data is hot data, it may be migrated to the hot medium in the storage pool.
In a third implementable manner, data migration is controlled according to both the utilization rate of the memory pool and the hot and cold information of the first data. Combining the above, the memory pool may for example be divided into three ranges; fig. 7 is a schematic diagram of the water levels of the memory pool, where for example a first threshold corresponds to a low water level, a second threshold corresponds to a medium water level, and a third threshold corresponds to a high water level. Exemplarily: (1) when the utilization rate of the memory pool is lower than the low water level, data migration may not be started, and when the utilization rate of the memory pool is between the low water level and the medium water level, cold data in the memory pool may be migrated to the cold medium in the storage pool; (2) when the utilization rate of the memory pool is between the medium water level and the high water level, warm data in the memory pool may be migrated to the warm medium in the storage pool; (3) when the utilization rate of the memory pool is higher than the high water level, hot data in the memory pool may be migrated to the hot medium in the storage pool. The hotness of the data in the memory pool may be determined according to the cache eviction algorithm described above, according to the hot and cold configuration information of the object to which the data belongs, or in other manners; please refer to the related descriptions above, and details are not repeated here or below.
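Read literally, the three rules above can be sketched as a single decision function; the concrete water-level values and the behavior in the bands not explicitly covered by the text are assumptions of this example.

```cpp
#include <optional>

enum class Hotness { Cold, Warm, Hot };
enum class StorageTier { ColdMedium, WarmMedium, HotMedium };

// Water levels are fractions of memory-pool capacity; the concrete values
// (0.5 / 0.7 / 0.9) are placeholders, not values given by this application.
struct WaterLevels { double low = 0.5, medium = 0.7, high = 0.9; };

// Decide whether data of a given hotness should be migrated out, and to which tier.
// Returns std::nullopt when no migration is needed under the literal reading of the rules.
std::optional<StorageTier> MigrationTarget(double usage, Hotness h, const WaterLevels& w) {
    if (usage < w.low) return std::nullopt;                       // (1) below the low water level: no migration
    if (usage < w.medium) {                                       // between low and medium water level
        if (h == Hotness::Cold) return StorageTier::ColdMedium;   //     cold data goes to the cold medium
    } else if (usage < w.high) {                                  // (2) between medium and high water level
        if (h == Hotness::Warm) return StorageTier::WarmMedium;   //     warm data goes to the warm medium
    } else {                                                      // (3) above the high water level
        if (h == Hotness::Hot) return StorageTier::HotMedium;     //     even hot data goes out, to the hot medium
    }
    return std::nullopt;
}
```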
Specifically, when migrating the first data to the storage pool, for example, computing device 10a may apply to the management node for storage pool space; the management node allocates free storage space in the storage pool for the computing device and sends the global address corresponding to that storage space to the computing device, and the computing device migrates the first data to the storage space corresponding to the global address. As another example, the first data may be sent to the management node, stored into the storage pool by the management node, and the management node then sends the computing device the global address corresponding to the storage location of the first data in the storage pool. As yet another example, computing device 10a may itself determine the storage location of the first data in the storage pool. It should be understood that the first data may be migrated to a location local to computing device 10a, or to another computing device included in the storage pool other than computing device 10a.
As an optimization, when the first data is stored in the storage pool, a multi-copy or Erasure Code (EC) check mechanism may be adopted to implement data redundancy and ensure the reliability of the data. The EC check mechanism divides the first data into at least two data slices and calculates check slices for the at least two data slices according to a certain check algorithm; when one of the data slices is lost, the first data can be recovered using the other data slices and the check slices. In that case, the global address of the data is a set of finer-grained global addresses, and each fine-grained global address corresponds to the physical address of one data slice or check slice. The multi-copy mechanism means storing at least two identical copies of the data, and the at least two copies are stored at two different physical addresses; when one of the copies is lost, the other copies can be used for recovery. In this way, through the redundancy mechanism, the probability that swapped-out data is lost due to damage to the storage in the storage pool, such as a bad block or a bad track, is reduced, and the reliability of the data is improved.
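To make the recovery idea concrete, the following sketch shows the simplest possible EC flavor: two data slices protected by one XOR parity slice. Real EC schemes use more slices and stronger codes; the multi-copy mechanism, by contrast, simply writes identical copies to two different physical addresses.

```cpp
#include <cstdint>
#include <vector>

// Split the first data into two slices and compute one XOR parity slice;
// losing any single slice can then be tolerated.
struct EcStripe {
    std::vector<uint8_t> slice0, slice1, parity;
};

EcStripe Encode(const std::vector<uint8_t>& data) {
    size_t half = (data.size() + 1) / 2;
    EcStripe s;
    s.slice0.assign(data.begin(), data.begin() + half);
    s.slice1.assign(data.begin() + half, data.end());
    s.slice1.resize(half, 0);                        // pad so both slices have equal length
    s.parity.resize(half);
    for (size_t i = 0; i < half; ++i) s.parity[i] = s.slice0[i] ^ s.slice1[i];
    return s;
}

// Recover slice0 from slice1 and the parity slice when slice0 is lost.
std::vector<uint8_t> RecoverSlice0(const EcStripe& s) {
    std::vector<uint8_t> out(s.parity.size());
    for (size_t i = 0; i < out.size(); ++i) out[i] = s.parity[i] ^ s.slice1[i];
    return out;
}
```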
After data in the memory pool is migrated to the storage pool, on the one hand, the correspondence between the logical address of the migrated data and its global address in the storage pool may be recorded in a second mapping relationship, so that the data can subsequently be moved back from the storage pool to the memory pool according to the second mapping relationship. For example, if the first data is migrated to the location corresponding to a first global address in the storage pool, the correspondence between the first logical address of the first data and the first global address may be added to the second mapping relationship.
On the other hand, after the first data is migrated to the storage pool, the page table needs to be updated, for example by marking the first memory page as unoccupied; the first memory page is then released and may be occupied by other data.
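A minimal sketch of this bookkeeping (recording the logical-to-global mapping and releasing the first memory page) might look as follows; the map types and field names are assumptions of the example.

```cpp
#include <cstdint>
#include <unordered_map>

// Page table entry for a memory page in the memory pool.
struct PageTableEntry {
    bool     occupied = false;
    uint64_t logical_address = 0;   // logical address of the data currently held in the page
};

// Bookkeeping after the first data has been migrated out to the storage pool:
// record "logical address -> global address" in the second mapping relationship,
// then mark the source memory page as unoccupied so that it can be reused.
void OnMigratedOut(std::unordered_map<uint64_t, uint64_t>& second_mapping,   // logical -> global
                   std::unordered_map<uint64_t, PageTableEntry>& page_table, // page id -> entry
                   uint64_t logical_addr, uint64_t global_addr, uint64_t first_memory_page) {
    second_mapping[logical_addr] = global_addr;        // used later to move the data back
    page_table[first_memory_page].occupied = false;    // the first memory page is released
}
```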
Step 603, generating an error signal when the first coroutine is run.
The error signal may be any of a plurality of exception signals; the page fault signal is taken as an example here. When the first coroutine accesses the first logical address, the page table is queried and it is found that the first data is not in the corresponding memory page, or that the data in the memory page does not belong to the first coroutine; a page fault exception occurs and the page fault signal is generated.
Step 604, in response to the error signal, migrating the first data from the storage pool to the memory pool.
In response to the page fault signal, the location of the first data in the storage pool is found by querying the second mapping relationship, and the first data is migrated from that location back to the memory pool. It should be understood that when the first data is migrated back to the memory pool, its location in the memory pool is not limited: if the previous first memory page is unoccupied or has been released again, the first data may be migrated back to the first memory page, or it may be migrated to another memory page, which is not limited here. However, the page table needs to be updated according to the memory page to which the data is migrated back.
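The swap-in path can be sketched as below; CopyPageFromStoragePool and AllocateFreeMemoryPage are hypothetical primitives stubbed out for the example, and the choice to erase the second-mapping entry only after the copy follows the description in the following paragraphs.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

struct PageTableEntry {
    bool     occupied = false;
    uint64_t logical_address = 0;
};

// Hypothetical primitives, stubbed out for the example: a real system would issue
// the actual copy from the storage pool and track free pages in the memory pool.
static void CopyPageFromStoragePool(uint64_t /*global_addr*/, uint64_t /*page_id*/) {}
static uint64_t AllocateFreeMemoryPage() { static uint64_t next_page = 1; return next_page++; }

// Swap-in path triggered by the page fault signal for logical_addr.
std::optional<uint64_t> MigrateBack(std::unordered_map<uint64_t, uint64_t>& second_mapping,   // logical -> global
                                    std::unordered_map<uint64_t, PageTableEntry>& page_table, // page id -> entry
                                    uint64_t logical_addr) {
    auto it = second_mapping.find(logical_addr);
    if (it == second_mapping.end()) return std::nullopt;   // nothing was migrated out for this address
    uint64_t page_id = AllocateFreeMemoryPage();            // need not be the original first memory page
    CopyPageFromStoragePool(it->second, page_id);           // move the data back into the memory pool
    PageTableEntry& entry = page_table[page_id];            // the page table follows the page moved back
    entry.occupied = true;
    entry.logical_address = logical_addr;
    second_mapping.erase(it);                                // delete the entry only after the move-back
    return page_id;
}
```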
In addition, several migration operations are involved in the present application. For ease of understanding, the process of migrating data from the memory pool to the storage pool is referred to as migrating out, and the process of migrating data from the storage pool back to the memory pool is referred to as migrating back. It should be noted that migrating back here may be understood as cutting the first data out of the storage pool and copying it to the memory pool, or as copying the first data in the storage pool to the memory pool; the embodiment of the present application does not limit this.
In this embodiment of the application, the second mapping relationship may be persistently stored, and after the first data is migrated from the storage pool to the memory pool, the mapping between the first logical address (of the first data) and the first global address is deleted from the second mapping relationship. Furthermore, the first data in the storage page corresponding to the first global address can be marked as invalid. When a storage page contains a large amount of invalid data, for example exceeding a preset reclamation threshold, the remaining still-valid data in the storage page can be moved to a new page, which avoids a large amount of fragmented space accumulating in the storage pool after long-term migration. Meanwhile, in order to fully exploit the performance of the different hierarchies of the storage pool, the data on the new page can be marked with a lower hotness in combination with some hotness statistics policies, and the flow of data between the different hierarchies of the storage pool can be controlled according to the hotness.
For example, when the hotness of the data on a new storage page falls below a certain value, the data of that page can be moved from the storage medium of the current hierarchy down to a lower hierarchy. For example, when the new storage page is on the warm medium, the data of the page may be migrated from the warm medium to the cold medium after the hotness of the data of the page falls below the admission threshold of the cold medium. This method makes full use of the tiered storage of the storage pool, so that the medium of each hierarchy stores data of the corresponding hotness as far as possible, and the performance of the individual hierarchies is exploited as far as possible. Of course, to respect the reliability of user-configured hot and cold information, the data that flows between different hierarchies in the storage pool may be data for which no application hot and cold information has been configured, for example data whose hotness was determined according to the LRU algorithm when it was migrated to the storage pool; alternatively, it may also be data for which application hot and cold information has been configured. The above-mentioned manner of counting invalid data at the granularity of a storage page is only an example, and the present application is not limited thereto; for example, the amount of invalid data may also be counted at the granularity of two or more pages or of a structure.
In the above manner, the second mapping relationship is persistently stored, and the related information of a piece of data is deleted from the second mapping relationship only after that data has been migrated back into the memory pool. This avoids the problem in the prior art that, when a process fails, the related information of the process in the page table and in the swap mapping is deleted, so that the data cannot be recovered when the process is restarted.
Step 605, during the process of migrating the first data from the storage pool to the memory pool, the first coroutine is switched to the second coroutine as the current execution coroutine.
Assuming that the first coroutine is a coroutine of the first thread, during the process of migrating the first data from the storage pool to the memory pool, the execution of the first thread is not suspended, for example, the first coroutine may be switched to a second coroutine as a currently executed coroutine, where the second coroutine may be any other coroutine included in the first thread except for the first coroutine.
It should be noted that step 604 and step 605 do not have a strict timing relationship, and step 604 and step 605 may be executed simultaneously. In practice, the two steps may be understood as being performed asynchronously, i.e., when the first coroutine is switched to the second coroutine, the first data is asynchronously migrated from the storage pool to the memory pool.
Specifically, before the first coroutine is switched out in response to the page fault signal, the context of the first thread needs to be saved, such as certain register values and data stack information, so that the context at the point where the first coroutine was interrupted can be restored when the first coroutine is executed again later. When the first coroutine is switched out, it can be put back into the task queue of the first thread; however, because it was switched out due to a page fault exception, the first coroutine can be suspended until the first data has been moved back into the memory pool. The suspension can be understood as marking the first coroutine as a non-executable state; after the first data is moved back to the memory pool, the first coroutine can be changed to an executable state (for example, a waiting state) to indicate that it may be executed.
When the first coroutine is in the executable state, in one implementable manner, the first coroutine can be switched to directly as the current coroutine after the currently executing coroutine finishes. In another implementable manner, after the current coroutine finishes, a coroutine may be selected at random and switched to as the current coroutine for execution, or the next coroutine may be selected according to some other scheduling policy, for example according to the priority of the coroutines or according to their order in the task queue, which is not limited in this embodiment of the application.
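One possible shape of the task queue and the non-executable/executable states discussed above is sketched below; the state names and the in-order selection policy are only one of the possibilities mentioned and are assumptions of this sketch.

    #include <deque>
    #include <memory>

    enum class CoState { Running, Runnable, Suspended };   // Suspended == not executable yet

    struct Task {
        int id;
        int priority;
        CoState state;
    };

    class TaskQueue {
    public:
        // Re-enqueue the coroutine that was switched out on a page fault exception;
        // it stays Suspended until the first data is back in the memory pool.
        void park_on_fault(std::shared_ptr<Task> t) {
            t->state = CoState::Suspended;
            queue_.push_back(std::move(t));
        }
        // Called when the asynchronous swap-in completes.
        void wake(const std::shared_ptr<Task>& t) { t->state = CoState::Runnable; }

        // One possible selection policy: the first Runnable task in queue order.
        // Other policies (priority, random, ...) are equally possible, as noted above.
        std::shared_ptr<Task> pick_next() {
            for (auto it = queue_.begin(); it != queue_.end(); ++it) {
                if ((*it)->state == CoState::Runnable) {
                    std::shared_ptr<Task> t = *it;
                    queue_.erase(it);
                    t->state = CoState::Running;
                    return t;
                }
            }
            return nullptr;   // nothing runnable: the thread may pick up new work instead
        }
    private:
        std::deque<std::shared_ptr<Task>> queue_;
    };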
By the above method, when data migration is performed, execution can be switched from the interrupted coroutine to another coroutine, so that the thread is not blocked, latency is reduced, and the performance and stability of the system are improved.
The coroutine execution method of the present application is illustrated below with reference to specific embodiments.
For ease of describing the implementation of the coroutine execution method provided by the embodiment of the present application, the modules of a computing device involved in the coroutine execution method are described below by way of example. It should be noted that a module may be a hardware module or a software module; a software module corresponds to a software program installed on the processor, and different software programs may be regarded as different modules with different functions. The modules shown below are only an example and are not a limitation on the modules that execute the coroutine execution method of the present application. For example, the functions represented by one or more of the modules in fig. 8 may also be integrated into at least one module, which is not limited in this application.
Fig. 8 is a schematic diagram of the internal modules involved in the coroutine execution method provided in this embodiment. As shown in fig. 8, the computing device includes a migration (swap) module, a page table accelerator (page table accelerator), a back-end storage management module (back store manager), and a page fault signal processing module (page fault signal notification). Illustratively, the page fault signal processing module can be arranged in kernel space, and the remaining modules can be arranged in user space.
Of course, the modules listed above are only those that may be involved in implementing coroutine execution; for example, more or fewer modules may be provided in the user space, and the modules may further include an application program management module and the like, which is not limited in this embodiment of the application.
The migration module is configured to configure the hot/cold degree of data at different granularities; it is further configured to migrate data from the memory pool to the storage pool according to the utilization rate of the memory pool and/or the hot/cold information of the data; and it is further configured to migrate data in the storage pool back to the memory pool.
The page table acceleration module is configured to record the correspondence (the second mapping relationship) between logical addresses and global addresses of the storage pool.
The back-end storage management module is configured to manage and allocate the space of the storage pool. This module is optional; alternatively, the computing device can directly request free space in the storage pool from the management node, and receive commands from the management node to store data migrated from other computing devices.
The application program management module is configured to generate a page fault signal when the memory data accessed by the application is missing, and is further configured to save the thread context and perform coroutine switching.
The page fault signal processing module is configured to receive the page fault signal generated by the application program management module and trigger the application program management module to perform coroutine switching and asynchronous data migration.
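Purely as an illustration, the modules of fig. 8 could be described by C++ interfaces of the following shape; all names and signatures here are assumptions for the sketch and do not correspond to an actual implementation of this application.

    #include <cstddef>
    #include <cstdint>

    struct SwapModule {                        // migration (swap) module
        void set_heat(uint64_t addr, int heat, std::size_t granularity); // configure hot/cold info
        void evict_to_storage_pool();          // based on memory-pool usage and/or heat
        void swap_in(uint64_t logical_addr);   // migrate data back into the memory pool
    };

    struct PageTableAccelerator {              // records the second mapping relationship
        uint64_t lookup_global(uint64_t logical_addr);
        void erase(uint64_t logical_addr);
    };

    struct BackStoreManager {                  // optional: manages/allocates storage-pool space
        uint64_t allocate(std::size_t bytes);
        void free(uint64_t global_addr);
    };

    struct AppManager {                        // user space: raises fault signals, switches coroutines
        void on_missing_page(uint64_t logical_addr);  // generates the page fault signal
        void save_context_and_switch();
    };

    struct PageFaultSignalModule {             // kernel space: receives the signal, triggers AppManager
        void deliver(AppManager& app, uint64_t logical_addr);
    };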
The following describes the coroutine execution process of the present embodiment based on the modules shown in fig. 8.
Please refer to fig. 9, which is a flowchart of a coroutine execution process according to an embodiment of the present application; refer to fig. 8 and fig. 9 together. Illustratively, the application program management module runs a page fault signal hook function (page fault signal handler), the migration module runs a data migration function, and the page fault signal processing module runs a page fault signal hook function (page fault handler). For ease of distinction, the page fault hook function that runs in kernel mode is referred to as the first hook function, and the page fault hook function that runs in user mode is referred to as the second hook function.
Assume that a first thread is executing coroutine 1 (task 1). When the first data accessed by coroutine 1 is not present in the memory pool, the application program management module generates a page fault signal. The first hook function responds to the page fault signal and calls the second hook function, passing the page fault signal to it. The second hook function responds to the page fault signal and saves the current context of the first thread, which is in fact the context of coroutine 1. After the context of the first thread is saved, coroutine 1 is switched to coroutine 2, and the second hook function sends a response message to the first hook function, so that the first hook function switches the current context to the context of coroutine 2 and execution of coroutine 2 begins. In addition, during the saving of the context of the first thread, the data migration function of the migration module is called asynchronously, and the first data is migrated from the storage pool to the memory pool.
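A highly simplified sketch of the user-mode second hook function is shown below, using the POSIX sigaction and swapcontext calls. It only illustrates the order of operations (asynchronous swap-in request, context save, switch to coroutine 2); a production handler would additionally have to respect async-signal-safety, ctx_task2 is assumed to have been prepared elsewhere with getcontext/makecontext, and request_async_swap_in is an assumed stub for the migration module rather than an interface of this application.

    #include <signal.h>
    #include <ucontext.h>

    static ucontext_t ctx_task1, ctx_task2;      // contexts of coroutine 1 and coroutine 2

    // Assumed stub: hands the faulting address to the migration module's data
    // migration function, which performs the swap-in asynchronously.
    static void request_async_swap_in(void* /*addr*/) { /* enqueue swap-in request */ }

    static void second_hook(int /*sig*/, siginfo_t* info, void* /*uctx*/) {
        request_async_swap_in(info->si_addr);    // asynchronous swap-in of the first data
        swapcontext(&ctx_task1, &ctx_task2);     // save coroutine 1, start/resume coroutine 2
    }

    void install_second_hook() {
        struct sigaction sa {};
        sa.sa_sigaction = second_hook;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, nullptr);        // the kernel-mode first hook delivers the signal here
    }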
Those skilled in the art will appreciate that in-kernel swap paths are typically customized and complete data migration synchronously. With the above design, asynchronous data swapping can be realized in user mode in this embodiment of the application, which avoids the cross-mode (user/kernel) switching that the operating system would otherwise need when the kernel performs synchronous data migration, thereby improving the efficiency and performance of data migration.
In the embodiments provided in the present application, in order to implement the functions in the methods provided in the embodiments of the present application, the storage system may include a hardware structure and/or a software module, and the functions are implemented in the form of a hardware structure, a software module, or a hardware structure and a software module. Whether any of the above-described functions is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends upon the particular application and design constraints imposed on the technical solution.
Fig. 10 shows a schematic diagram of a computing device 1000. The computing device 1000 may be the computing device 10a in the embodiment shown in fig. 6, 8, or 9, or may be located in computing device 10a, and may be used to implement the functions of computing device 10a. The computing device 1000 may be a hardware structure or a hardware structure plus software modules.
As shown in fig. 10, the computing device 1000 includes: an execution unit 1001 and a processing unit 1002.
The execution unit 1001 is configured to generate an error signal when a first coroutine is run, where the first coroutine is used to access first data in the memory pool. The processing unit 1002 is configured to migrate the first data from the storage pool to the memory pool in response to the error signal, and is further configured to switch the first coroutine to a second coroutine as the currently executed coroutine during the process of migrating the first data from the storage pool to the memory pool.
In a possible implementation manner, before the execution unit 1001 runs the first coroutine, the processing unit 1002 is further configured to: migrate the first data from the memory pool to the storage pool according to the utilization rate of the memory pool and/or the cold and hot information of the first data.
In a possible implementation, the hot and cold information of the first data is configured by an application corresponding to the first data.
In one possible implementation, the memory pool includes a plurality of different types of memory, the plurality of different types of memory being located in one or more computing devices of the memory system.
In one possible design, the different types of memory include: DRAM, SCM, AEP, hard disk.
In one possible design, the memory pool includes one or more first memories, one or more second memories, the first memories having higher performance than the second memories; the cold and hot information of the first data is used for indicating the cold and hot degree of the first data; if the cold and hot degree of the first data is higher than a first preset value, storing the first data to the first memory; and if the cold and hot degree of the first data is lower than the first preset value, storing the first data to the second memory.
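As an illustration of this design, the placement rule could look like the following sketch, where the tier names, the example media and the strict comparison against the first preset value are assumptions made for the sketch only.

    // Data hotter than the first preset value goes to the higher-performance first
    // memory (e.g. DRAM); otherwise it goes to the second memory (e.g. SCM).
    enum class Memory { First /* e.g. DRAM */, Second /* e.g. SCM */ };

    Memory place(double hot_cold_degree, double first_preset_value) {
        return hot_cold_degree > first_preset_value ? Memory::First : Memory::Second;
    }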
In one possible design, the first coroutine is a thread created at the user level for processing a task.
Similar to the above concept, as shown in fig. 11, the present application provides an apparatus 1100, where the apparatus 1100 may be applied to any one of the computing devices in the system shown in fig. 1, for example, the computing device 10a, and performs the steps performed by the main body in the method shown in fig. 6, 8, or 9.
The apparatus 1100 may include a processor 1101 and a memory 1102. Further, the apparatus may further include a communication interface 1104, which may be a transceiver, or a network card. Further, the apparatus 1100 may also include a bus system 1103.
The processor 1101, the memory 1102 and the communication interface 1104 may be connected via the bus system 1103. The memory 1102 may store instructions, and the processor 1101 may be configured to execute the instructions stored in the memory 1102 to control the communication interface 1104 to receive or send signals, so as to complete the steps performed by the execution body in the method shown in fig. 6, fig. 8, or fig. 9.
The memory 1102 may be integrated in the processor 1101 or may be a different physical entity from the processor 1101.
As an implementation manner, the function of the communication interface 1104 may be realized by a transceiver circuit or a dedicated chip for transceiving. The processor 1101 may be considered to be implemented by a dedicated processing chip, processing circuit, processor, or general purpose chip.
As another implementation manner, a manner of using a computer may be considered to implement the first computing node or the function of the first computing node provided in the embodiment of the present application. That is, program code that implements the functions of the processor 1101 and the communication interface 1104 is stored in the memory 1102, and a general-purpose processor can implement the functions of the processor 1101 and the communication interface 1104 by executing the code in the memory.
For the concepts, explanations, and detailed descriptions related to the technical solutions provided in the present application and other steps related to the apparatus 1100, reference may be made to the descriptions of the foregoing methods or other embodiments, which are not repeated herein.
In an example of the present application, the apparatus 1100 may be used to execute the steps of the execution body in the flow shown in fig. 6, 8 or 9, for example: generating an error signal when a first coroutine is run, where the first coroutine is used to access first data in the memory pool; migrating the first data from the storage pool to the memory pool in response to the error signal; and switching the first coroutine to a second coroutine as the currently executed coroutine during the process of migrating the first data from the storage pool to the memory pool.
For the description of the processor 1101 and the communication interface 1104, reference may be made to the description of the flow shown in fig. 6, fig. 8, or fig. 9, which is not described herein again.
Optionally, the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
Those of ordinary skill in the art will understand that the various ordinal numbers such as "first" and "second" mentioned in this application are used only for ease of description; they neither limit the scope of the embodiments of this application nor indicate an order. "And/or" describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "At least one" means one or more. "At least two" means two or more. "At least one", "any one", or similar expressions refer to any combination of these items, including any combination of single items or plural items. For example, at least one (one piece) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be singular or plural. "Plurality" means two or more, and other quantifiers are to be understood similarly. Furthermore, elements that appear in the singular form "a", "an", and "the" do not mean "one or only one" unless the context clearly dictates otherwise, but rather "one or more than one"; for example, "a device" means one or more such devices.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The various illustrative logical units and circuits described in this application may be implemented or operated upon by design of a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to include such modifications and variations.

Claims (18)

1. A coroutine execution method is applied to a storage system, wherein the storage system comprises a memory pool and a storage pool, and the method comprises the following steps:
generating an error signal when a first coroutine is run, wherein the first coroutine is used for accessing first data in the memory pool;
migrating the first data from the storage pool to the memory pool in response to the error signal;
and in the process of migrating the first data from the storage pool to the memory pool, switching the first coroutine to a second coroutine as the currently executed coroutine.
2. The method of claim 1, wherein prior to the running the first coroutine, the method further comprises:
and migrating the first data from the memory pool to the storage pool according to the utilization rate of the memory pool and/or the cold and hot information of the first data.
3. The method of claim 2, wherein the hot and cold information of the first data is configured by an application to which the first data corresponds.
4. The method of any of claims 1-3, wherein the memory pool comprises a plurality of different types of memory, the plurality of different types of memory located in one or more computing devices of the storage system.
5. The method of claim 4, wherein the different types of memory comprise: DRAM, storage class memory (SCM), Optane memory (AEP), ROM, and hard disk.
6. The method of claim 4, wherein the pool of memory comprises one or more first memories, one or more second memories, the first memories having higher performance than the second memories; the cold and hot information of the first data is used for indicating the cold and hot degree of the first data;
if the cold and hot degree of the first data is higher than a first preset value, storing the first data to the first memory; and if the cold and hot degree of the first data is lower than the first preset value, storing the first data to the second memory.
7. The method of claim 4, wherein the first coroutine is a thread created at a user level for processing a task.
8. A coroutine-executing computing apparatus, comprising:
an execution unit, configured to generate an error signal when a first coroutine is run, wherein the first coroutine is used for accessing first data in the memory pool;
a processing unit, configured to migrate the first data from the storage pool to the memory pool in response to the error signal; and further configured to switch the first coroutine to a second coroutine as the currently executed coroutine in the process of migrating the first data from the storage pool to the memory pool.
9. The apparatus of claim 8, wherein before the execution unit runs the first coroutine, the processing unit is further configured to: migrate the first data from the memory pool to the storage pool according to the utilization rate of the memory pool and/or the cold and hot information of the first data.
10. The apparatus of claim 9, wherein the hot and cold information of the first data is configured by an application to which the first data corresponds.
11. The apparatus of any of claims 8-10, wherein the memory pool comprises a plurality of different types of memory, the plurality of different types of memory being located in one or more computing devices of the storage system.
12. The apparatus of claim 11, wherein the different types of memory comprise: DRAM, SCM, AEP, hard disk.
13. The apparatus of claim 11, wherein the memory pool comprises one or more first memories, one or more second memories, the first memories having higher performance than the second memories; the cold and hot information of the first data is used for indicating the cold and hot degree of the first data;
if the cold and hot degree of the first data is higher than a first preset value, the first data is stored in the first memory; and if the cold and hot degree of the first data is lower than the first preset value, the first data is stored in the second memory.
14. The apparatus of claim 11, wherein the first coroutine is a thread created at a user level for processing a task.
15. A computing device comprising a processor and a memory having stored therein computer-executable instructions for causing the processor to perform the method of any of claims 1-7 when invoked by the processor.
16. A computer storage medium having stored thereon instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-7.
17. A computer program product having stored thereon instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1-7.
18. A storage system, comprising a memory pool and a storage pool, resources of the storage pool being provided by at least two computing devices, the at least two computing devices comprising a first computing device and a second computing device;
generating an error signal when the first computing device runs a first coroutine, wherein the first coroutine is used for accessing first data in the memory pool; migrating the first data from the storage pool to the memory pool in response to the error signal; and in the process of migrating the first data from the storage pool to the memory pool, switching the first coroutine to a second coroutine as the currently executed coroutine, wherein the position of the first data in the storage pool is located in the first computing device or the second computing device.
CN202011188682.5A 2020-08-07 2020-10-30 Coroutine execution method and coroutine execution device Pending CN114063894A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010786176X 2020-08-07
CN202010786176 2020-08-07

Publications (1)

Publication Number Publication Date
CN114063894A true CN114063894A (en) 2022-02-18

Family

ID=80233097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011188682.5A Pending CN114063894A (en) 2020-08-07 2020-10-30 Coroutine execution method and coroutine execution device

Country Status (1)

Country Link
CN (1) CN114063894A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327606A (en) * 2022-03-11 2022-04-12 武汉中科通达高新技术股份有限公司 Configuration management method and device, electronic equipment and computer readable storage medium
CN114327606B (en) * 2022-03-11 2022-07-05 武汉中科通达高新技术股份有限公司 Configuration management method and device, electronic equipment and computer readable storage medium
CN117055820A (en) * 2023-10-09 2023-11-14 苏州元脑智能科技有限公司 Command processing method of solid state disk, solid state disk and storage medium
CN117055820B (en) * 2023-10-09 2024-02-09 苏州元脑智能科技有限公司 Command processing method of solid state disk, solid state disk and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination