CN114048149A - Method, system, storage medium and device for constructing a control page linked list - Google Patents
Method, system, storage medium and device for constructing a control page linked list
- Publication number
- CN114048149A (application CN202111278615.7A)
- Authority
- CN
- China
- Prior art keywords
- control page
- bytes
- control
- page
- byte
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0871—Allocation or management of cache space
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention provides a method for constructing a control page linked list, comprising the following steps: constructing a normal control page such that it comprises a 128-byte control page header area, a control block storage area of several bytes, a 64-byte data cache pointer area, and a 64-byte backup area for the original NVMe admin and IO commands; and constructing a chained control page such that it comprises a 64-byte control page header area and a control block storage area of several bytes. The control page header area of the normal control page comprises an 8-byte address pointer to the next chained control page; the control page header area of the chained control page comprises three 8-byte chain pointers, pointing to the first normal control page, the previous control page, and the next chained control page, respectively; and the normal control pages and chained control pages forming the control page linked list are all of the same size.
Description
Technical Field
The invention relates to the technical field of computers, and in particular to a method, a system, a storage medium, and a device for constructing a control page linked list based on a generic computation acceleration architecture.
Background
With the recent emergence of computational storage (Computational Storage) technology, the computational storage architecture offloads data computation from the host CPU to a data processing acceleration unit located near the storage unit, reducing the corresponding data movement and thereby releasing system performance as far as possible.
Computational storage covers three product forms, the Computational Storage Processor (CSP), the Computational Storage Drive (CSD), and the Computational Storage Array (CSA), and through this architectural redefinition it is expected to:
- reduce CPU occupancy;
- reduce consumption of network and DDR bandwidth;
- reduce system power consumption; and
- support potentially massive parallel computing processing, among other benefits.
The core idea of the universal computing acceleration architecture (UAA) is to achieve an optimized division between the software interface and the hardware interface through micro-architectural innovation in a microcode-driven manner, while retaining the high flexibility and extensibility of software programmability on the premise of high-performance hardware execution. In the architectural definition of UAA, the size of a control page (hereinafter sometimes abbreviated as "CP") may be 512 bytes, 1024 bytes, or 2048 bytes; apart from the CP header, the data buffer, and the original host IO command backup, the remaining space is used to store control blocks (hereinafter sometimes abbreviated as "CB"), as shown in fig. 7. All CPs are placed in a contiguous memory space, forming a CP resource pool. In addition, to simplify CP resource pool management, the CP granularity within a single resource pool must be kept consistent, that is, only one CP size can be selected.
The size of a CB may be 16 bytes, 32 bytes, 64 bytes, or 128 bytes, depending on the type of application engine. The number of CBs that a single CP can carry is therefore determined jointly by the CP size and the CB size. In some complex application scenarios, if the lengths of the CB chains corresponding to different classes of IO differ greatly, choosing the CP size becomes difficult: if the CP granularity is too small, long CB chains cannot be accommodated; conversely, if the CP granularity is too large, CP resources are wasted.
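Purely as an illustration of the pool described above, the sketch below pictures a fixed-granularity CP resource pool as one contiguous buffer carved into equally sized pages; the names and numbers used here (CP_SIZE, POOL_CP_COUNT, cp_at) are assumptions of this sketch, not part of the UAA definition.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative pool parameters -- assumptions for this sketch, not values fixed by UAA. */
#define CP_SIZE        512u    /* a pool uses exactly one CP size: 512, 1024 or 2048 bytes */
#define POOL_CP_COUNT  1024u   /* hypothetical pool depth */

/* All CPs of a pool live in one contiguous region, so CP index i starts at i * CP_SIZE. */
static uint8_t cp_pool[POOL_CP_COUNT * CP_SIZE];

static inline void *cp_at(uint32_t index)
{
    return (index < POOL_CP_COUNT) ? (void *)&cp_pool[(size_t)index * CP_SIZE] : NULL;
}

/*
 * Assuming, purely for illustration, that 256 bytes of a 512-byte CP remain for CBs,
 * that space holds sixteen 16-byte CBs but only two 128-byte CBs -- which is why a
 * single CP size is hard to choose when CB chain lengths differ widely between IO classes.
 */
```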
Disclosure of Invention
In view of the above, an object of the present invention is to provide a method, a system, a storage medium, and a device for constructing a control page linked list based on a generic computation acceleration architecture, so as to overcome the shortcomings of existing methods and systems for constructing control pages.
Based on the above object, the present invention provides a method for constructing a control page linked list based on a generic computation acceleration architecture, comprising the following steps:
constructing a normal control page such that it comprises a 128-byte control page header area, a control block storage area of several bytes, a 64-byte data cache pointer area, and a 64-byte backup area for the original NVMe (Non-Volatile Memory Host Controller Interface Specification) admin and IO commands; and
constructing a chained control page such that it comprises a 64-byte control page header area and a control block storage area of several bytes,
wherein the control page header area of the normal control page comprises an 8-byte address pointer to the next chained control page,
the control page header area of the chained control page comprises three 8-byte chain pointers, pointing to the first normal control page, the previous control page, and the next chained control page, respectively, and
the normal control pages and chained control pages forming the control page linked list are all of the same size.
In some embodiments, the control page header area of the normal control page further comprises: an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list; an 8-byte sequence number of the current control page, assigned when the control page is created; 8 bytes of NVMe queue related information, comprising a 2-byte completion queue ID, a 2-byte submission queue ID, and 4 bytes of queue head information; four 8-byte timestamps, used to record key time points during execution of the normal control page; and a 64-byte firmware reserved space.
In some embodiments, the control page header area of the chained control page further comprises: an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list; and four 8-byte timestamps, used to record key time points during execution of the chained control page.
In some embodiments, the normal control page and the chained control page are each 512 bytes, 1 Kbyte, or 2 Kbytes in size; correspondingly, the control block storage area of the normal control page is 256 bytes, 768 bytes, or 1792 bytes, and the control block storage area of the chained control page is 448 bytes, 960 bytes, or 1984 bytes. The control block is 16 bytes, 32 bytes, 64 bytes, or 128 bytes in size.
In some embodiments, the control block includes an 8-bit cbPosition identifying the position of the control block in a normal control page, a chained control page, or a control page linked list, and the relative position of the control block in the control page linked list is calculated as follows:
cpResolution: the size of a control page, in bytes;
cpOffset = (cbPosition × 16) / cpResolution: the quotient represents the position, within the control page linked list, of the control page in which the control block is located;
cbOffset = (cbPosition × 16) mod cpResolution: the remainder represents the relative position of the control block within that control page (cbPosition is counted in 16-byte units).
In another aspect of the present invention, a system for constructing a control page linked list based on a generic computation acceleration architecture is further provided, comprising:
a normal control page building module, used to construct a normal control page such that it comprises a 128-byte control page header area, a control block storage area of several bytes, a 64-byte data cache pointer area, and a 64-byte backup area for the original NVMe admin and IO commands; and
a chained control page building module, used to construct a chained control page such that it comprises a 64-byte control page header area and a control block storage area of several bytes,
wherein the control page header area of the normal control page comprises an 8-byte address pointer to the next chained control page,
the control page header area of the chained control page comprises three 8-byte chain pointers, pointing to the first normal control page, the previous control page, and the next chained control page, respectively, and
the normal control pages and chained control pages forming the control page linked list are all of the same size.
In some embodiments, the control page header area of the normal control page further comprises: an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list; an 8-byte sequence number of the current control page, assigned when the control page is created; 8 bytes of NVMe queue related information, comprising a 2-byte completion queue ID, a 2-byte submission queue ID, and 4 bytes of queue head information; four 8-byte timestamps, used to record key time points during execution of the normal control page; and a 64-byte firmware reserved space.
In some embodiments, the control page header area of the chained control page further comprises: an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list; and four 8-byte timestamps, used to record key time points during execution of the chained control page.
In some embodiments, the normal control page and the chained control page are each 512 bytes, 1 Kbyte, or 2 Kbytes in size; correspondingly, the control block storage area of the normal control page is 256 bytes, 768 bytes, or 1792 bytes, and the control block storage area of the chained control page is 448 bytes, 960 bytes, or 1984 bytes. The control block is 16 bytes, 32 bytes, 64 bytes, or 128 bytes in size.
In some embodiments, the control block includes an 8-bit cbPosition identifying the position of the control block in a normal control page, a chained control page, or a control page linked list, and the relative position of the control block in the control page linked list is calculated as follows:
cpResolution: the size of a control page, in bytes;
cpOffset = (cbPosition × 16) / cpResolution: the quotient represents the position, within the control page linked list, of the control page in which the control block is located;
cbOffset = (cbPosition × 16) mod cpResolution: the remainder represents the relative position of the control block within that control page (cbPosition is counted in 16-byte units).
In yet another aspect of the present invention, there is also provided a computer readable storage medium storing computer program instructions which, when executed, implement any one of the methods described above.
In yet another aspect of the present invention, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, performs any one of the methods described above.
The invention provides a method and a system for constructing a control page linked list based on a general computation acceleration architecture, which, by introducing the concept of a CP linked list, further improve the extensibility and resource utilization efficiency of the computation acceleration architecture and are well suited to application scenarios with complex, mixed, multi-type task workloads.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a diagram illustrating a method for constructing a linked list of control pages based on a generic computation acceleration architecture according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a system for constructing a control page linked list based on a generic compute acceleration architecture according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an example of a general compute acceleration architecture based control page linked list according to an embodiment of the present invention;
fig. 4(a) to 4(c) show flowcharts of engine retrieval of CP linked lists, where fig. 4(a) shows a flowchart when searching a common data buffer pointer region, fig. 4(b) shows a flowchart when preparing an address pointer of a next CB, and fig. 4(c) shows a flowchart when updating timestamp information.
FIG. 5 is a diagram of a computer-readable storage medium for implementing a method for constructing a control page linked list based on a generic computation acceleration architecture according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a hardware structure of a computer device for executing a method for constructing a control page linked list based on a generic computation acceleration architecture according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an example of a control page in the related art.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two non-identical entities with the same name or two non-identical parameters; it should be understood that "first" and "second" are used only for convenience of expression and should not be construed as limiting the embodiments of the present invention. Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In view of the foregoing, a first aspect of the embodiments of the present invention provides an embodiment of a method for constructing a control page linked list based on a generic computation acceleration architecture. Fig. 1 is a schematic diagram illustrating an embodiment of the method for constructing a control page linked list based on a generic computation acceleration architecture according to the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
step S10, constructing a normal control page such that it comprises a 128-byte control page header area, a control block storage area of several bytes, a 64-byte data cache pointer area, and a 64-byte backup area for the original NVMe admin and IO commands;
step S20, constructing a chained control page such that it comprises a 64-byte control page header area and a control block storage area of several bytes,
wherein the control page header area of the normal control page comprises an 8-byte address pointer to the next chained control page, the control page header area of the chained control page comprises three 8-byte chain pointers pointing to the first normal control page, the previous control page, and the next chained control page, respectively, and the normal control pages and chained control pages forming the control page linked list are all of the same size.
The data cache pointer area is used to point to a common data buffer area. The original NVMe admin and IO command backup area allows the firmware to intervene and recover from errors when an exception occurs.
The present invention uses 8-byte (64-bit wide) addresses, but 4-byte (32-bit wide) addresses may also be used.
As shown in table 1, the control page header area of the normal control page further includes: an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list; an 8-byte sequence number of the current control page, assigned when the control page is created; 8 bytes of NVMe queue related information, comprising a 2-byte completion queue ID, a 2-byte submission queue ID, and 4 bytes of queue head information; four 8-byte timestamps, used to record key time points during execution of the normal control page; and a 64-byte firmware reserved space.
[ Table 1]
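As a reading aid, the 128-byte header laid out above (table 1) can be sketched as a C structure; only the field widths and their order come from the text, while the field names and the exact on-chip encoding are assumptions.

```c
#include <stdint.h>

/* Sketch of the 128-byte header of a normal control page (field names assumed). */
struct normal_cp_header {
    uint64_t next_chained_cp;  /* 8 B: address pointer to the next chained CP               */
    uint64_t cp_attribute;     /* 8 B: CP type, whether it sits in a linked list, and where */
    uint64_t cp_sequence;      /* 8 B: sequence number assigned when the CP is created      */
    struct {
        uint16_t cq_id;        /* 2 B: completion queue ID                                  */
        uint16_t sq_id;        /* 2 B: submission queue ID                                  */
        uint32_t queue_head;   /* 4 B: queue head information                               */
    } nvme_queue;              /* 8 B: NVMe queue related information                       */
    uint64_t timestamp[4];     /* 4 x 8 B: key time points during execution of this CP      */
    uint8_t  fw_reserved[64];  /* 64 B: firmware reserved space                             */
};

/* 8 + 8 + 8 + 8 + 32 + 64 = 128 bytes, matching the header area size above. */
_Static_assert(sizeof(struct normal_cp_header) == 128, "normal CP header must be 128 bytes");
```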
As shown in table 2, the control page header area of the chained control page further includes: an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list; and four 8-byte timestamps, used to record key time points during execution of the chained control page.
[ Table 2]
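The same kind of sketch for the 64-byte chained control page header (table 2), again with assumed field names:

```c
#include <stdint.h>

/* Sketch of the 64-byte header of a chained control page (field names assumed). */
struct chained_cp_header {
    uint64_t first_normal_cp;  /* 8 B: chain pointer to the first (normal) CP of the list */
    uint64_t prev_cp;          /* 8 B: chain pointer to the previous CP                    */
    uint64_t next_chained_cp;  /* 8 B: chain pointer to the next chained CP                */
    uint64_t cp_attribute;     /* 8 B: CP type and position within the linked list         */
    uint64_t timestamp[4];     /* 4 x 8 B: key time points during execution of this CP     */
};

/* 24 + 8 + 32 = 64 bytes; the rest of the chained CP is control block storage. */
_Static_assert(sizeof(struct chained_cp_header) == 64, "chained CP header must be 64 bytes");
```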
In some preferred embodiments, the normal control page and the chained control page are each 512 bytes, 1 Kbyte, or 2 Kbytes in size; correspondingly, the control block storage area of the normal control page is 256 bytes, 768 bytes, or 1792 bytes, and the control block storage area of the chained control page is 448 bytes, 960 bytes, or 1984 bytes. The control block is 16 bytes, 32 bytes, 64 bytes, or 128 bytes in size. However, the length of a single control page is not limited to the 512 B, 1 KB, and 2 KB values mentioned in the present invention and may be set as needed by those skilled in the art.
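The control block storage sizes quoted above follow from subtracting the fixed areas from the page size; a compile-time restatement of that arithmetic, under the layouts sketched earlier:

```c
/* Normal CP: 128 B header + 64 B data cache pointers + 64 B NVMe command backup are fixed. */
_Static_assert(512  - 128 - 64 - 64 == 256,  "512 B normal CP leaves 256 B for CBs");
_Static_assert(1024 - 128 - 64 - 64 == 768,  "1 KB normal CP leaves 768 B for CBs");
_Static_assert(2048 - 128 - 64 - 64 == 1792, "2 KB normal CP leaves 1792 B for CBs");

/* Chained CP: only the 64 B header is fixed. */
_Static_assert(512  - 64 == 448,  "512 B chained CP leaves 448 B for CBs");
_Static_assert(1024 - 64 == 960,  "1 KB chained CP leaves 960 B for CBs");
_Static_assert(2048 - 64 == 1984, "2 KB chained CP leaves 1984 B for CBs");
```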
For an engine, after it has acquired its CB, the following cases require it to traverse the CP linked list:
1. searching a common data cache pointer area;
2. preparing an address pointer of a next CB;
3. updating the timestamp information.
at this time, the engine first needs to locate the relative position of the corresponding CB in the CP/CP linked list. As shown in table 3, for a generic CB header definition, it is a common definition for each CB. Where there is 8 bits of data that is responsible for identifying the location of the CB in the entire CP/CP linked list, either AEM (host interface management engine) or firmware is given when programming the CB. The count value of 0-255 takes 16 bytes as the minimum granularity of counting, and the maximum count value can cover the length of the CP chain table of 4KB, so that under the condition of different CP lengths of 512 bytes, 1K bytes or 2K bytes, the longest length of the CP chain table is 8, 4 or 2.
[ Table 3]
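Table 3 (the generic CB header) is not reproduced here; the one field relevant to linked-list traversal is the 8-bit cbPosition, sketched below with the rest of the common header collapsed into a placeholder:

```c
#include <stdint.h>

/* Minimal sketch of the generic CB header; only cbPosition comes from the text,
 * the remaining bytes are a placeholder for the rest of the common definition. */
struct cb_header {
    uint8_t cb_position;  /* position of this CB in the CP / CP linked list,
                             counted in 16-byte units (0..255 covers 4 KB)    */
    uint8_t other[15];    /* placeholder, assuming a 16-byte minimum CB       */
};

/* Maximum CP linked-list length implied by the 8-bit counter:
 * 256 * 16 B = 4 KB, i.e. 8 CPs of 512 B, 4 CPs of 1 KB, or 2 CPs of 2 KB. */
```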
The relative position of the CB in the entire CP or CP linked list is calculated as follows:
cpResolution: the size of a control page, in bytes;
cpOffset = (cbPosition × 16) / cpResolution: the quotient represents the position, within the control page linked list, of the control page in which the control block is located;
cbOffset = (cbPosition × 16) mod cpResolution: the remainder represents the relative position of the control block within that control page.
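A sketch of that calculation in C, assuming the 16-byte counting granularity stated above; the function and parameter names are illustrative:

```c
#include <stdint.h>

/* Locate a CB within the CP linked list from its 8-bit cbPosition.
 * cp_resolution is the control page size in bytes (512, 1024 or 2048). */
static inline void cb_locate(uint8_t cb_position, uint32_t cp_resolution,
                             uint32_t *cp_offset, uint32_t *cb_offset)
{
    uint32_t byte_pos = (uint32_t)cb_position * 16u;  /* 16-byte counting granularity           */
    *cp_offset = byte_pos / cp_resolution;            /* quotient: index of the CP in the list  */
    *cb_offset = byte_pos % cp_resolution;            /* remainder: offset of the CB in that CP */
}

/* Example: cbPosition = 40 with 512-byte CPs gives byte 640, i.e. CP #1, offset 128. */
```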
Since each control block identifies its own position in the entire control page linked list, the engine can quickly locate the required information during processing.
Fig. 4(a) to 4(c) show flowcharts of engine retrieval of CP linked lists, where fig. 4(a) shows a flowchart when searching a common data buffer pointer region, fig. 4(b) shows a flowchart when preparing an address pointer of a next CB, and fig. 4(c) shows a flowchart when updating timestamp information.
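The flowcharts themselves are not reproduced here; what the three flows share is a walk from the head of the list to the CP selected by cpOffset before the case-specific work is done. A hedged sketch of that common walk, reusing the structures and the cb_locate helper assumed in the earlier sketches:

```c
#include <stdint.h>

/* Walk the CP linked list: hop cp_offset times along the next-CP pointers, starting from
 * the first (normal) CP, then return the address of the CB inside the CP reached.
 * The structure and field names are the assumptions used in the earlier sketches. */
static void *find_cb(struct normal_cp_header *first_cp, uint8_t cb_position,
                     uint32_t cp_resolution)
{
    uint32_t cp_offset, cb_offset;
    cb_locate(cb_position, cp_resolution, &cp_offset, &cb_offset);

    uint8_t *cp = (uint8_t *)first_cp;
    for (uint32_t hop = 0; hop < cp_offset && cp != NULL; hop++) {
        /* Hop 0 leaves the normal CP through its next-chained-CP pointer;
         * later hops follow the next pointer of a chained CP header. */
        uint64_t next = (hop == 0)
            ? ((struct normal_cp_header *)cp)->next_chained_cp
            : ((struct chained_cp_header *)cp)->next_chained_cp;
        cp = (uint8_t *)(uintptr_t)next;
    }
    return cp ? (void *)(cp + cb_offset) : NULL;
}
```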
As shown in fig. 1, the invention provides a method for constructing a control page linked list based on a general computation acceleration architecture; by introducing the concept of a CP linked list, the extensibility and resource utilization efficiency of the computation acceleration architecture are further improved, and the method is well suited to application scenarios in which complex, multi-type tasks are mixed.
In a second aspect of the embodiments of the present invention, a system for constructing a control page linked list based on a generic computation acceleration architecture is also provided. FIG. 2 is a diagram illustrating an embodiment of a system for constructing a control page linked list based on a generic compute acceleration architecture according to the present invention. As shown in fig. 2, the system includes: a normal control page building module 10, used to construct a normal control page such that it comprises a 128-byte control page header area, a control block storage area of several bytes, a 64-byte data cache pointer area, and a 64-byte backup area for the original NVMe admin and IO commands; and a chained control page building module 20, used to construct a chained control page such that it comprises a 64-byte control page header area and a control block storage area of several bytes. The control page header area of the normal control page comprises an 8-byte address pointer to the next chained control page; the control page header area of the chained control page comprises three 8-byte chain pointers, pointing to the first normal control page, the previous control page, and the next chained control page, respectively; and the normal control pages and chained control pages forming the control page linked list are all of the same size.
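A hedged sketch of what the two building modules might do when the control block chain outgrows the normal CP and an extra chained CP is appended; it reuses the header structures assumed earlier, and the attribute value and allocation strategy are placeholders, not the patented encoding:

```c
#include <stdint.h>
#include <string.h>

/* Append a newly allocated chained CP to the list headed by a normal CP.
 * 'tail' is the current last CP of the list (the normal CP itself when the list has
 * only one page); tail_is_head says which header layout the tail uses. */
static void append_chained_cp(struct normal_cp_header *head,
                              void *tail, int tail_is_head,
                              struct chained_cp_header *new_cp)
{
    memset(new_cp, 0, sizeof(*new_cp));
    new_cp->first_normal_cp = (uint64_t)(uintptr_t)head;  /* chain pointer 1: first normal CP   */
    new_cp->prev_cp         = (uint64_t)(uintptr_t)tail;  /* chain pointer 2: previous CP       */
    new_cp->next_chained_cp = 0;                          /* chain pointer 3: no next CP yet    */
    new_cp->cp_attribute    = 0x2;                        /* placeholder "chained CP in a list" */

    /* Patch the next-chained-CP pointer of the current tail. */
    if (tail_is_head)
        ((struct normal_cp_header *)tail)->next_chained_cp = (uint64_t)(uintptr_t)new_cp;
    else
        ((struct chained_cp_header *)tail)->next_chained_cp = (uint64_t)(uintptr_t)new_cp;
}
```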
FIG. 3 is a diagram illustrating an example of a control page linked list based on a generic computation acceleration architecture according to an embodiment of the present invention.
In some preferred embodiments, the control page header area of the normal control page further comprises: an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list; an 8-byte sequence number of the current control page, assigned when the control page is created; 8 bytes of NVMe queue related information, comprising a 2-byte completion queue ID, a 2-byte submission queue ID, and 4 bytes of queue head information; four 8-byte timestamps, used to record key time points during execution of the normal control page; and a 64-byte firmware reserved space. The control page header area of the chained control page further comprises: an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list; and four 8-byte timestamps, used to record key time points during execution of the chained control page.
In some preferred embodiments, the normal control page and the chained control page are each 512 bytes, 1 Kbyte, or 2 Kbytes in size; correspondingly, the control block storage area of the normal control page is 256 bytes, 768 bytes, or 1792 bytes, and the control block storage area of the chained control page is 448 bytes, 960 bytes, or 1984 bytes. The control block is 16 bytes, 32 bytes, 64 bytes, or 128 bytes in size.
In some embodiments, a control block includes an 8-bit cbPosition identifying the position of the control block in a normal control page, a chained control page, or a control page linked list, and the relative position of the control block in the control page linked list is calculated as follows:
cpResolution: the size of a control page, in bytes;
cpOffset = (cbPosition × 16) / cpResolution: the quotient represents the position, within the control page linked list, of the control page in which the control block is located;
cbOffset = (cbPosition × 16) mod cpResolution: the remainder represents the relative position of the control block within that control page (cbPosition is counted in 16-byte units).
The cbPosition definition bit width may be increased to support longer CP linked lists.
The system for constructing a control page linked list based on a generic computation acceleration architecture shown in fig. 2 further improves the extensibility and resource utilization efficiency of the computation acceleration architecture by introducing the concept of a CP linked list, and is well suited to application scenarios with complex, mixed, multi-type task workloads.
In a third aspect of the embodiments of the present invention, a computer-readable storage medium is further provided. Fig. 5 is a schematic diagram of a computer-readable storage medium for implementing the method for constructing a control page linked list based on a generic computation acceleration architecture according to an embodiment of the present invention. As shown in fig. 5, the computer-readable storage medium 3 stores computer program instructions 31 that can be executed by a processor. The computer program instructions 31, when executed, implement the method of any of the embodiments described above.
It should be understood that all of the embodiments, features, and advantages set forth above with respect to the method for constructing a control page linked list based on a generic computation acceleration architecture according to the present invention apply equally, where not in conflict, to the system and storage medium for constructing a control page linked list based on a generic computation acceleration architecture according to the present invention.
In a fourth aspect of the embodiments of the present invention, there is further provided a computer device, including a memory 402 and a processor 401, where the memory stores a computer program, and the computer program, when executed by the processor, implements the method of any one of the above embodiments.
Fig. 6 is a schematic hardware structural diagram of an embodiment of a computer device for executing the method for constructing a control page linked list based on a generic computation acceleration architecture according to the present invention. Taking the computer device shown in fig. 6 as an example, the computer device includes a processor 401 and a memory 402, and may further include an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403, and the output device 404 may be connected by a bus or in other ways; fig. 6 takes a bus connection as an example. The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the system for constructing a control page linked list based on a generic computation acceleration architecture. The output device 404 may include a display device such as a display screen.
The memory 402, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for constructing a control page linked list based on a generic computation acceleration architecture in the embodiments of the present application. The memory 402 may include a program storage area and a data storage area; the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created through use of the method for constructing a control page linked list based on a generic computation acceleration architecture, and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 402 may optionally include memory located remotely from the processor 401, which may be connected to the local module via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 401 executes the various functional applications and data processing of the server by running the non-volatile software programs, instructions, and modules stored in the memory 402, that is, it implements the method for constructing a control page linked list based on a generic computation acceleration architecture of the above method embodiment.
Finally, it should be noted that the computer-readable storage medium (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example, and not limitation, nonvolatile memory can include Read Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which can act as external cache memory. By way of example and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to comprise, without being limited to, these and other suitable types of memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions herein: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items. The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, of embodiments of the invention is limited to these examples; within the idea of an embodiment of the invention, also technical features in the above embodiment or in different embodiments may be combined and there are many other variations of the different aspects of the embodiments of the invention as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.
Claims (10)
1. A method for constructing a control page linked list, comprising the following steps:
constructing a normal control page such that it comprises a 128-byte control page header area, a control block storage area of several bytes, a 64-byte data cache pointer area, and a 64-byte backup area for the original NVMe admin and IO commands; and
constructing a chained control page such that it comprises a 64-byte control page header area and a control block storage area of several bytes,
wherein the control page header area of the normal control page comprises an 8-byte address pointer to the next chained control page,
the control page header area of the chained control page comprises three 8-byte chain pointers, pointing to the first normal control page, the previous control page, and the next chained control page, respectively, and
the normal control pages and chained control pages forming the control page linked list are all of the same size.
2. The method of claim 1, wherein the control page header area of the normal control page further comprises:
an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list;
an 8-byte sequence number of the current control page, assigned when the control page is created;
8 bytes of NVMe queue related information, comprising a 2-byte completion queue ID, a 2-byte submission queue ID, and 4 bytes of queue head information;
four 8-byte timestamps, used to record key time points during execution of the normal control page; and
a 64-byte firmware reserved space.
3. The method of claim 1, wherein the control page header area of the chained control page further comprises:
an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list; and
four 8-byte timestamps, used to record key time points during execution of the chained control page.
4. The method of claim 1, wherein
the normal control page and the chained control page are each 512 bytes, 1 Kbyte, or 2 Kbytes in size,
the control block storage area of the normal control page is correspondingly 256 bytes, 768 bytes, or 1792 bytes in size, and
the control block storage area of the chained control page is correspondingly 448 bytes, 960 bytes, or 1984 bytes in size.
5. The method of claim 1, wherein
the control block includes an 8-bit cbPosition identifying the position of the control block in a normal control page, a chained control page, or a control page linked list, and
the relative position of the control block in the control page linked list is calculated as follows:
cpResolution: the size of a control page, in bytes;
cpOffset = (cbPosition × 16) / cpResolution: the quotient represents the position, within the control page linked list, of the control page in which the control block is located;
cbOffset = (cbPosition × 16) mod cpResolution: the remainder represents the relative position of the control block within that control page.
6. A system for constructing a control page linked list, comprising:
a normal control page building module, used to construct a normal control page such that it comprises a 128-byte control page header area, a control block storage area of several bytes, a 64-byte data cache pointer area, and a 64-byte backup area for the original NVMe admin and IO commands; and
a chained control page building module, used to construct a chained control page such that it comprises a 64-byte control page header area and a control block storage area of several bytes,
wherein the control page header area of the normal control page comprises an 8-byte address pointer to the next chained control page,
the control page header area of the chained control page comprises three 8-byte chain pointers, pointing to the first normal control page, the previous control page, and the next chained control page, respectively, and
the normal control pages and chained control pages forming the control page linked list are all of the same size.
7. The system of claim 6, wherein the control page header area of the normal control page further comprises:
an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list;
an 8-byte sequence number of the current control page, assigned when the control page is created;
8 bytes of NVMe queue related information, comprising a 2-byte completion queue ID, a 2-byte submission queue ID, and 4 bytes of queue head information;
four 8-byte timestamps, used to record key time points during execution of the normal control page; and
a 64-byte firmware reserved space.
8. The system of claim 6, wherein the control page header area of the chained control page further comprises:
an 8-byte control page attribute, used to identify the type of the control page, whether the control page is part of a control page linked list, and its position within the linked list; and
four 8-byte timestamps, used to record key time points during execution of the chained control page; and wherein
the normal control page and the chained control page are each 512 bytes, 1 Kbyte, or 2 Kbytes in size,
the control block storage area of the normal control page is correspondingly 256 bytes, 768 bytes, or 1792 bytes in size,
the control block storage area of the chained control page is correspondingly 448 bytes, 960 bytes, or 1984 bytes in size, and
the control block is 16 bytes, 32 bytes, 64 bytes, or 128 bytes in size.
9. A computer-readable storage medium, characterized in that computer program instructions are stored which, when executed, implement the method according to any one of claims 1-5.
10. A computer device comprising a memory and a processor, characterized in that the memory has stored therein a computer program which, when executed by the processor, performs the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111278615.7A CN114048149A (en) | 2021-10-30 | 2021-10-30 | Method, system, storage medium and equipment for controlling construction of page chain table |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111278615.7A CN114048149A (en) | 2021-10-30 | 2021-10-30 | Method, system, storage medium and equipment for controlling construction of page chain table |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114048149A true CN114048149A (en) | 2022-02-15 |
Family
ID=80206534
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111278615.7A Pending CN114048149A (en) | 2021-10-30 | 2021-10-30 | Method, system, storage medium and equipment for controlling construction of page chain table |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114048149A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024113996A1 (en) * | 2022-11-29 | 2024-06-06 | 苏州元脑智能科技有限公司 | Optimization method and apparatus for host io processing, device, and nonvolatile readable storage medium |
CN117008843A (en) * | 2023-09-26 | 2023-11-07 | 苏州元脑智能科技有限公司 | Control page linked list construction device and electronic equipment |
CN117008843B (en) * | 2023-09-26 | 2024-01-19 | 苏州元脑智能科技有限公司 | Control page linked list construction device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |