WO2020253523A1 - Data access method and device - Google Patents

Data access method and device

Info

Publication number
WO2020253523A1
WO2020253523A1 · PCT/CN2020/094027 · CN2020094027W
Authority
WO
WIPO (PCT)
Prior art keywords
data
memory area
written
read
write
Prior art date
Application number
PCT/CN2020/094027
Other languages
English (en)
French (fr)
Inventor
陈宇
覃中
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP20825480.5A (published as EP3964996A4)
Publication of WO2020253523A1
Priority to US17/554,843 (published as US20220107752A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 Free address space management
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10 Address translation
    • G06F12/1009 Address translation using page tables, e.g. page table structures
    • G06F12/109 Address translation for multiple virtual address spaces, e.g. segmentation
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/78 Protecting specific internal or peripheral components to assure secure storage of data
    • G06F21/79 Protecting specific internal or peripheral components to assure secure storage of data in semiconductor storage media, e.g. directly-addressable memories
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10 Providing a specific technical effect
    • G06F2212/1052 Security improvement
    • G06F2212/20 Employing a main memory using a specific memory technology
    • G06F2212/202 Non-volatile memory
    • G06F2212/65 Details of virtual memory and virtual address translation
    • G06F2212/657 Virtual address space management
    • G06F2212/72 Details relating to flash memory management
    • G06F2212/7201 Logical to physical mapping or translation of blocks or pages
    • G06F2212/7204 Capacity control, e.g. partitioning, end-of-life degradation
    • G06F2212/7208 Multiple device management, e.g. distributing data over multiple flash devices

Definitions

  • This application relates to the storage field, and in particular to a data access method and device.
  • Storage-class memory (SCM) media are a new form of media that the industry is currently paying attention to, usually also called persistent memory media or non-volatile storage media. SCM media are non-volatile and fast to access.
  • When a central processing unit (CPU) accesses an SCM medium, it usually accesses it with in-place updates. That is, for a memory area in the SCM medium that has been allocated and has not been reclaimed, the CPU can update the data stored in the memory area multiple times. In this case, the data in the SCM medium can easily be tampered with illegally, so that the data stored in the SCM medium is destroyed.
  • This application provides a data access method and device.
  • The technical solution is as follows:
  • In a first aspect, a data access method is provided. The method includes: receiving a write request, where the write request includes data to be written and address indication information; obtaining, according to the address indication information, the target address of the memory area corresponding to the data to be written, where the memory area refers to space in storage-class memory (SCM); writing the data to be written into the memory area indicated by the target address in an append-write manner; and, after the data to be written has been written into the memory area, setting the read-write attribute corresponding to the target address so that the data in the memory area cannot be modified.
  • After the write request is received, the target address of the memory area corresponding to the data to be written is obtained according to the address indication information included in the write request, and the data to be written is written into the memory area in an append-write manner. The read-write attribute corresponding to the target address is then set so that the data in the memory area cannot be modified, which effectively prevents illegal tampering with the data stored in the memory area and realizes read-only protection of the data in the SCM medium.
  • Optionally, setting the read-write attribute corresponding to the target address includes: setting the read-write attribute of the page table entry corresponding to the target address to a read-only attribute.
  • Optionally, before the data to be written is written into the memory area, the method further includes: determining that the read-write attribute of the page table entry corresponding to the target address is not a read-only attribute.
  • If the read-write attribute of the page table entry corresponding to the target address is not a read-only attribute, the memory area indicated by the target address is not a read-only area and data writing is allowed. In this case, the data to be written can be written into the memory area.
  • Optionally, the read-write attribute of the page table entry is stored in the memory management unit (MMU).
  • Optionally, the read-write attribute of the page table entry is stored in the input-output memory management unit (IOMMU).
  • Both the MMU and the IOMMU store a page table for converting a logical address into a physical address.
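The write path described in the first aspect can be sketched as a minimal software simulation. All names here are hypothetical: in the actual method the read-only protection is a page-table attribute set via the MMU/IOMMU, not a Python flag.

```python
class SCMRegion:
    """Hypothetical stand-in for an allocated SCM memory area.

    `read_only` simulates the page-table protection bit the patent sets
    after data has been append-written.
    """

    def __init__(self, size):
        self.buf = bytearray(size)
        self.offset = 0          # length of the already-written area
        self.read_only = False   # stands in for the protection bit

    def append_write(self, data):
        # Steps 301-303: the target address is derived from the current
        # offset and the length of the data to be written.
        if self.read_only:
            raise PermissionError("memory area is read-only")
        start, end = self.offset, self.offset + len(data)
        if end > len(self.buf):
            raise ValueError("not enough free space in the memory area")
        self.buf[start:end] = data
        self.offset = end
        return (start, end)

    def freeze(self):
        # Step 304: set the read-write attribute to read-only so the
        # written data cannot be modified.
        self.read_only = True


region = SCMRegion(16)
addrs = region.append_write(b"hello")
region.freeze()
```

After `freeze()`, any further `append_write` on this region raises `PermissionError`, mirroring the read-only protection of the written area.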
  • In a second aspect, a data access device is provided. The data access device has the function of realizing the behavior of the data access method in the first aspect.
  • The data access device includes at least one module, and the at least one module is used to implement the data access method provided in the first aspect.
  • In a third aspect, a data access device is provided, including a processor and a memory. The memory is used to store a program that supports the data access device in executing the data access method provided in the first aspect, and to store the data involved in that method.
  • The processor is configured to execute the program stored in the memory.
  • The data access device may further include a communication bus, which is used to establish a connection between the processor and the memory.
  • A computer-readable storage medium is provided, which stores instructions that, when run on a computer, cause the computer to execute the data access method described in the first aspect.
  • A computer program product containing instructions is provided, which, when run on a computer, causes the computer to execute the data access method described in the first aspect.
  • FIG. 1 is a schematic diagram of a distributed storage system involved in a data access method provided by an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of a storage node provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a data access method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an area where data has been written and an area where data has not been written included in the memory area after data is written in the allocated memory area according to an embodiment of the present application;
  • FIG. 5 is a flowchart of another data access method provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a data access device provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a distributed storage system involved in a data access method provided by an embodiment of the present application. As shown in FIG. 1, the system includes a client node 101, a cluster management node 102, and multiple storage nodes 103.
  • the client node 101 may communicate with the cluster management node 102 and multiple storage nodes 103 respectively, and the cluster management node 102 may also communicate with multiple storage nodes 103.
  • the cluster management node 102 is used to manage multiple storage nodes 103.
  • a distributed hash table is usually used for routing when selecting storage nodes.
  • the hash ring can be evenly divided into several parts, each part is called a partition, and each partition corresponds to one or more storage nodes 103.
  • the cluster management node 102 may maintain partition information of each partition in the system, and the cluster management node 102 may be used to manage the distribution of multiple partitions in the system.
  • The cluster management node 102 may allocate a suitable partition to the client node 101 according to the load of each partition it maintains, and feed the partition information of the allocated partition back to the client node 101 so that the client node 101 can implement data access according to the partition information.
  • the client node 101 may be a client server. When performing data access, the client node 101 may interact with the cluster management node 102 to obtain partition information, and then communicate with the storage node 103 corresponding to the corresponding partition according to the obtained partition information, so as to implement data reading and writing. For example, when writing data, the client node 101 may send a request for applying for partition information of the partition to the cluster management node 102. The cluster management node 102 can feed back the partition information of the partition allocated to the client node 101. After receiving the partition information, the client node 101 can write data into the storage node corresponding to the partition corresponding to the partition information.
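The routing described above can be illustrated with a hedged sketch of partition selection on a hash ring. The patent does not specify the hash function, partition count, or partition-to-node mapping; all of those are assumptions here.

```python
# Illustrative only: the hash ring is evenly divided into NUM_PARTITIONS
# parts, and each partition corresponds to one storage node (the patent
# allows one or more nodes per partition).
import hashlib

NUM_PARTITIONS = 8
# Hypothetical mapping maintained by the cluster management node.
partition_to_node = {p: f"storage-node-{p % 3}" for p in range(NUM_PARTITIONS)}

def partition_of(key: bytes) -> int:
    # Hash the key onto the ring, then take its partition.
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS

def node_for(key: bytes) -> str:
    # The client node writes to the storage node of the key's partition.
    return partition_to_node[partition_of(key)]
```

A client node would call `node_for` with the data's key (after obtaining the partition information from the cluster management node) to decide which storage node 103 receives the write request.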
  • Each storage node 103 can receive the data access request sent by the client node 101, and respond to the data access request of the client node 101 through the data access method provided in the embodiment of the present application.
  • The data access request may be an append-write request. So-called append writing means that the written data is organized according to the time sequence in which it is written. Moreover, after a process writes data in this way, the process will subsequently no longer perform write operations on the area where the data is located, but will only perform read operations.
  • FIG. 2 is a schematic structural diagram of a storage node provided by an embodiment of the present application.
  • the storage node in the distributed storage system in FIG. 1 can be implemented by the storage node shown in FIG. 2.
  • the storage node includes at least one processor 201, a communication bus 202, a memory 203, an input output (Input Output, IO) device 204, and an input output memory management unit 205 (Input Output Memory Management Unit, IOMMU).
  • the processor 201 may be a general-purpose central processing unit (Central Processing Unit, CPU).
  • the processor 201 may include one or more CPU cores (cores) 2011 and one or more memory management units (Memory Management Unit, MMU) 2012.
  • the MMU 2012 may also be independent of the CPU.
  • the communication bus 202 may include a path for transferring information between the aforementioned components.
  • The memory 203 may be an SCM medium such as a phase-change memory (PCM), a resistive random access memory (ReRAM), or a magnetic random access memory (MRAM).
  • the memory 203 may exist independently and is connected to the processor 201 through a communication bus 202.
  • The memory 203 may also be integrated with the processor 201. The memory 203 is used to store persistent data.
  • the CPU core 2011 can access the memory 203 through the MMU 2012.
  • The CPU core 2011 may transmit the logical address to be accessed to the MMU 2012, and the MMU 2012 may convert the logical address into a physical address according to the stored first page table and then access the memory 203 through the physical address.
  • The first page table stores a mapping relationship between logical addresses and physical addresses accessible by the CPU.
  • the input and output device 204 can access the memory 203 through the IOMMU 205 to read data from the memory 203.
  • the IO device 204 may send the logical address to be accessed to the IOMMU 205, and the IOMMU 205 may convert the received logical address into a physical address according to the stored second page table, and then access the memory 203 through the physical address.
  • The second page table stores the mapping relationship between the logical addresses and the physical addresses that the IO device can access.
  • the IO device 204 may be a remote direct memory access (Remote Direct Memory Access, RDMA) network card.
  • In a specific implementation, the storage node may include multiple processors, such as the processor 201 and the processor 206 shown in FIG. 2. Each of these processors can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the processor here may refer to a processing core for processing data (for example, computer program instructions).
  • the aforementioned storage node may be a general-purpose computer device or a special-purpose computer device.
  • The storage node may be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, or an embedded device.
  • The embodiment of the present application does not limit the type of computer equipment.
  • the memory 203 is also used to store program codes for executing the solutions of the present application, and the processor 201 controls the execution.
  • the processor 201 is configured to execute the program code stored in the memory 203.
  • One or more software modules can be included in the program code.
  • The storage node in the distributed storage system shown in FIG. 1 can implement data access through the processor 201 and one or more software modules in the program code stored in the memory 203.
  • Fig. 3 is a flowchart of a data access method provided by an embodiment of the present application. This method can be applied to the distributed storage scenario shown in FIG. 1 and executed by the storage node shown in FIG. 2. Referring to FIG. 3, the method includes the following steps:
  • Step 301: Receive a write request, where the write request includes data to be written and address indication information.
  • the storage node receives the write request sent by the client node.
  • the write request may carry data to be written, and the data to be written is the data requested to be written to the storage node.
  • The write request may be a write request received by the IO device of the storage node, or a write request received without going through the IO device.
  • An append-write request refers to a write request sent in append-only mode.
  • The so-called append mode means that the written data is organized according to the time sequence in which it is written. Moreover, after a process of a program writes data in this way, the process will subsequently no longer perform write operations on the area where the data is located, but will only perform read operations.
  • Step 302: Obtain the target address of the memory area corresponding to the data to be written according to the address indication information, and detect whether the memory area allows data to be written.
  • After receiving the write request, the storage node can determine the target address of the memory area corresponding to the data to be written according to the address indication information included in the write request, and then determine the read-write attribute of the page table entry corresponding to the target address. If the read-write attribute of the page table entry corresponding to the target address is not a read-only attribute, it is determined that the memory area allows data to be written.
  • The address indication information may include an offset and the data length of the data to be written.
  • The offset indicates the length of the area where data has already been written in the memory area allocated for the program corresponding to the data to be written.
  • The storage node can determine the starting logical address of the memory area corresponding to the data to be written according to the offset, and can then determine the end logical address of the memory area according to the starting logical address and the data length of the data to be written. All logical addresses from the start logical address to the end logical address of the memory area are used as the target address of the memory area.
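The address computation in step 302 reduces to simple arithmetic on the offset and data length. A minimal sketch, with hypothetical names (the patent does not define a base address variable; one is assumed here for illustration):

```python
def target_address(base_logical_addr: int, offset: int, data_length: int):
    """Compute the target address range for an append write.

    The offset is the length of data already written in the allocated
    area, so the new data starts at base + offset.
    """
    start = base_logical_addr + offset
    end = start + data_length
    # All logical addresses in [start, end) form the target address.
    return start, end


# Example: area starts at 0x1000, 64 bytes already written, 32 bytes to write.
start, end = target_address(0x1000, 64, 32)
```

For a freshly allocated area, the offset is 0, so the start logical address of the target address coincides with the start of the allocated area, matching the allocation case described later in this section.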
  • the storage node may send the target address of the memory area to the MMU.
  • The MMU can obtain the read-write attribute of the page table entry corresponding to the target address from the stored first page table according to the target address of the memory area. If the read-write attribute of the page table entry corresponding to the target address in the first page table is not a read-only attribute, data writing is allowed in the memory area. Otherwise, it can be determined that the memory area does not allow data to be written.
  • the first page table stores the mapping relationship between the logical address in the address space of the CPU and the physical address of the allocated area in the SCM.
  • When the CPU core in the storage node accesses the SCM medium, the CPU core can transmit the logical address of the memory area to be accessed to the MMU, and the MMU can convert the logical address to be accessed into a corresponding physical address through the first page table.
  • the first page table stored in the MMU may be as shown in Table 1, where the first page table uses the page number as an index, and the page number is determined according to the logical address.
  • the upper four bits of all possible 16-bit logical addresses can be used as the page number in the first page table, and each page number can be used to indicate a logical address range.
  • Each page number corresponds to a page table entry, and the page table entry corresponding to each page number includes a page frame number and a protection bit.
  • the page frame number can be used to indicate a physical address range.
  • The protection bit can be used to indicate the access form of the corresponding physical address.
  • When the protection bit is a first value, it indicates that the read-write attribute of the memory area corresponding to the physical address range indicated by the corresponding page frame number is a read-only attribute.
  • When the protection bit is a second value, it indicates that the read-write attribute of the memory area corresponding to the physical address range indicated by the corresponding page frame number is a non-read-only attribute, that is, data can be read or written.
  • The first value and the second value are different. As shown in Table 1, the first value can be 1, and the second value can be 0.
  • Table 1:

        Page number | Page frame number | Protection bit
        ------------|-------------------|---------------
        0           | 010               | 0
        1           | 001               | 1
        2           | 110               | 0
        3           | 000               | 0
        ...         | ...               | ...
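The lookup through Table 1 can be sketched directly: the upper four bits of a 16-bit logical address select the page number, and the matching entry supplies a page frame number plus the protection bit (1 = read-only in this example, per the first/second values above).

```python
# Software simulation of the first page table of Table 1 (the real table
# lives in the MMU; frame and protection values are taken from Table 1).
PAGE_TABLE = {
    0: {"frame": 0b010, "protect": 0},
    1: {"frame": 0b001, "protect": 1},
    2: {"frame": 0b110, "protect": 0},
    3: {"frame": 0b000, "protect": 0},
}

def translate(logical_addr: int):
    """Convert a 16-bit logical address to (physical address, is_read_only)."""
    page = (logical_addr >> 12) & 0xF          # upper 4 bits = page number
    entry = PAGE_TABLE[page]
    # Page frame number replaces the page number; the 12-bit offset is kept.
    physical = (entry["frame"] << 12) | (logical_addr & 0x0FFF)
    return physical, entry["protect"] == 1


phys, read_only = translate(0x1ABC)  # page 1: frame 001, protection bit 1
```

Before an append write, the storage node would reject the write whenever `translate` reports `read_only` as true for any page of the target address.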
  • If the write request is a write request received through an IO device, the SCM medium can be accessed through the IO device.
  • the storage node may send the target address of the memory area to the IOMMU of the storage node.
  • The IOMMU can obtain the read-write attribute of the page table entry corresponding to the target address from the stored second page table according to the target address. If the read-write attribute of the page table entry corresponding to the target address in the second page table is not a read-only attribute, the memory area allows data to be written. Otherwise, the memory area only allows reading but not writing; that is, the storage node can no longer write data in the memory area, and the memory area does not allow data to be written.
  • The mapping relationship between the logical addresses in the address space of the IO device and the physical addresses of the allocated area in the SCM medium is stored in the second page table.
  • When the IO device in the storage node accesses the SCM medium, the IO device can transmit the logical address of the memory area to be accessed to the IOMMU, and the IOMMU can convert the logical address into a corresponding physical address through the second page table.
  • In some cases, the storage node may not yet have allocated a memory area from the SCM medium for the program corresponding to the data to be written.
  • In this case, the address indication information carried in the write request sent by the client node may only include the amount of data to be written.
  • The storage node can also detect, according to the write request, whether there is a memory area allocated for the data to be written in the SCM medium. If there is no memory area allocated for the data to be written in the SCM medium, the storage node may first allocate a memory area for the data to be written before performing this step.
  • A corresponding memory area may be allocated according to the application corresponding to the data to be written. In this case, the space of the allocated memory area will be larger than the space occupied by the data to be written.
  • The storage node can obtain the length information of the area where data has been written in the allocated memory area, that is, the offset. Since the allocated memory area was just allocated and no data has been written, the length information is 0. According to the length information, the storage node can use the starting logical address of the allocated memory area as the starting logical address of the memory area corresponding to the data to be written.
  • The end logical address of the memory area corresponding to the data to be written is then determined, and all logical addresses between the start logical address and the end logical address are used as the target address of the memory area. Since the memory area is allocated when the write request is received, no data has been written to it before, and its read-write attribute will not be read-only. Therefore, the storage node need not detect whether the read-write attribute of the page table entry corresponding to the obtained target address is a read-only attribute, and can directly write the data to be written into the memory area.
  • Step 303: If the memory area allows data to be written, write the data to be written into the memory area in an append-write manner.
  • The storage node may write the data to be written included in the write request into the memory area in an append manner.
  • If the write request was received through the IO device, the storage node may send the target address of the memory area determined in the foregoing steps to the IOMMU, and the IOMMU may convert the target address of the memory area into the corresponding physical address and then write the data to be written into the memory area in an append-write manner according to the physical address.
  • the IOMMU can look up the physical address corresponding to the target address of the memory area from the second page table.
  • If the write request was not received through the IO device, the storage node may send the target address determined in the foregoing steps to the MMU.
  • The MMU can convert the target address into a corresponding physical address and then write the data to be written into the memory area in an append-write manner according to the physical address.
  • The MMU stores a first page table, and the first page table stores the mapping relationship between the logical addresses and the physical addresses that can be accessed by the CPU core. Based on this, the MMU can look up the physical address corresponding to the target address from the first page table.
  • After the data to be written is written into the memory area, the storage node can persist the written data.
  • The storage node can read the written data from the memory area and perform a cyclic redundancy check (CRC) on the read data. If the verification succeeds, the data persistence is complete.
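The persistence check reads back the written bytes and verifies a CRC. A minimal sketch, assuming `zlib.crc32` as the CRC variant (the patent does not specify which CRC is used):

```python
# Sketch of the read-back verification: the CRC computed at write time is
# compared against a CRC over the bytes read back from the memory area.
import zlib

def persist_check(read_back: bytes, expected_crc: int) -> bool:
    """Return True if the read-back data matches the CRC recorded at write."""
    return zlib.crc32(read_back) == expected_crc


data = b"append-written record"
crc = zlib.crc32(data)       # recorded when the data was written
ok = persist_check(data, crc)
corrupted = persist_check(data + b"x", crc)
```

If the check fails, the data was not durably persisted (or was corrupted), and the storage node would have to handle the write as incomplete.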
  • Since the data to be written is written into the memory area in an append-write manner in the embodiment of the present application, the process that wrote the data will, when it writes data again, write to an area outside this memory area where no data has been written, and will perform no further write operations on this memory area. However, the process can still perform read operations on this memory area; that is, the process can read the data in the memory area. It can be seen that after the data to be written is written into the memory area in this way, the memory area naturally becomes a read-only area for the corresponding process.
  • FIG. 4 is a schematic diagram of the area to which data has been written and the area to which no data has been written in an allocated memory area after data is written into the allocated memory area according to an embodiment of this application.
  • memory areas R1-R7 are areas to which data has been written in the append-write manner. Because data has already been written in this manner, no data will subsequently be written into memory areas R1-R7, which thus form a read-only area.
  • W8-W10 are areas to which no data has been written, and data will continue to be written into these areas subsequently.
  • Step 304: Set the read/write attribute corresponding to the target address of the memory area after the data is written, so that the data in the memory area cannot be modified.
  • after the data to be written is written into the memory area in the append-write manner, the process corresponding to the data to be written will no longer write data into the memory area. However, if the hardware of the storage node fails or a faulty program exists, other processes may write to the memory area. In this case, to prevent the data written into the memory area from being accidentally overwritten by a faulty program, in the embodiments of this application the page table entry corresponding to the target address of the memory area after the data is written may be set to a read-only attribute, so that the data in this memory area cannot be modified.
  • the storage node may set the page table entry corresponding to the target address in the first page table stored by the MMU to a read-only attribute.
  • the storage node may set the read-write attribute of the page table entry corresponding to the target address in the first page table stored by the MMU to the read-only attribute by calling syscall.
  • the storage node may obtain the first page table stored in the MMU through a system call. The storage node can then determine at least one page number according to the target address, look up the page table entry corresponding to each determined page number in the first page table, and set the protection bit, which indicates the read/write attribute of the page table entry, in the corresponding page table entry to the first value. The first value indicates that the memory area corresponding to the physical address range indicated by the page frame number in the corresponding page table entry is read-only.
  • the memory area forms a read-only area for the process corresponding to the data to be written.
  • the storage node may set the page table entry corresponding to the target address in the first page table as a read-only attribute.
  • in the related art, because the CPU accesses the SCM medium through local (in-place) updates, after data is written to an area, the page table entry corresponding to the logical address of this area in the first page table cannot be set to a read-only attribute, so the data in the area cannot be protected.
  • the MMU can detect whether the read/write attribute of the page table entry, in the first page table, corresponding to the logical address to be accessed by the received write request is read-only. If it is read-only, first exception information can be generated. The first exception information indicates that the write request is an exception request.
  • because the page table entry corresponding to the logical address of the memory area after the data is written has been set to a read-only attribute in the first page table, if the storage node again receives a write request for these areas that have been set to read-only and the data needs to be written through the CPU, the storage node can send the logical address carried in the write request to the MMU.
  • the MMU determines, according to the read/write attributes of the page table entries corresponding to the logical addresses of these areas in the first page table, that these areas are read-only areas. In this case, the MMU will generate an exception signal.
  • the storage node may determine, according to the exception signal, that the write request is an exception request.
  • the storage node may refuse to respond to the write request and generate exception information according to the exception signal produced by the MMU, thereby indicating that the write request is an exception request. It can be seen that, by setting the page table entries in the first page table corresponding to the memory areas after data is written to a read-only attribute, the data in these memory areas is prevented from being accidentally overwritten by other programs, thereby implementing read-only protection of the data in these memory areas.
  • the first exception information may include information used to indicate that the write request is an exception request.
  • it may also include process information of the write request.
  • the process information is information indicating the program corresponding to the write request. In this way, which program attempted to rewrite the read-only memory area into which data has been written can be determined according to the first exception information, which makes it convenient for the user to locate the faulty program.
  • the storage node may set the page table entry corresponding to the target address in the second page table stored in the IOMMU to a read-only attribute.
  • the storage node may set the read-write attribute of the page table entry corresponding to the target address in the second page table stored by the IOMMU to a read-only attribute through syscall.
  • the IOMMU can detect whether the read/write attribute of the page table entry, in the second page table, corresponding to the logical address of the area to be accessed by the write request received again is read-only. If it is read-only, second exception information is generated. The second exception information indicates that the write request is an exception request, and includes process information of the write request.
  • the storage node can send the logical addresses of these areas targeted by the write request to the IOMMU, and the IOMMU, according to the second page table, will detect that the read/write attribute of the page table entries corresponding to these logical addresses is read-only. In this case, the IOMMU will generate an exception signal, and the storage node can determine, based on the exception signal, that the write request is an exception request.
  • access through the IO device to a read-only memory area into which data has been written is illegal access.
  • the storage node may refuse to respond to the write request and generate the second exception information according to the exception signal of the IOMMU to indicate that the write request is an exception request. It can be seen that, by setting the page table entries in the second page table corresponding to the memory areas after data is written to a read-only attribute, the data in these memory areas is prevented from being rewritten again by the IO device, thereby implementing read-only protection of the data in these memory areas.
  • compared with the related art, in which the IO device accesses the SCM with physical addresses through direct memory access (DMA), this approach can prevent malicious DMA attacks.
  • the page table entry corresponding to the target address in the first page table may be set to a read-only attribute, or the page table entry corresponding to the target address in the second page table may be set to a read-only attribute, or the page table entries corresponding to the target address in both the first page table and the second page table may be set to a read-only attribute at the same time.
  • a write request is received, the target address of the memory area corresponding to the data to be written is obtained according to the address indication information included in the write request, and the data to be written is written into the memory area in an append-write manner.
  • after the data is written, the read/write attribute corresponding to the target address is set so that the data in the memory area cannot be modified, which effectively prevents illegal tampering with the data stored in the memory area and implements read-only protection of the data in the SCM medium.
  • the foregoing embodiment mainly introduces a specific implementation manner of data access by a storage node. It can be seen from the foregoing embodiment that when the client node writes data, it can access the SCM medium through the CPU of the storage node, or access the SCM medium through the IO device in the storage node. Next, in conjunction with FIG. 5, the implementation process of the client node writing data to the storage node in the distributed storage system shown in FIG. 1 will be introduced.
  • Fig. 5 is a flowchart of a data access method provided by an embodiment of the present application. This method can be applied to the distributed storage system shown in FIG. 1. Referring to Figure 5, the method includes the following steps:
  • Step 501 The client node sends a partition application request to the cluster management node, where the partition application request is used to apply to the cluster management node for a partition for storing multiple replica data.
  • a copy mechanism is usually used to ensure the reliability of stored data.
  • when a client node wants to store data in storage nodes, the client node can generate multiple pieces of replica data, where each replica contains the same data. The client node may then send a partition application request to the cluster management node to apply for a partition. In the embodiments of this application, three pieces of replica data are used as an example for description.
  • Step 502 The cluster management node feeds back partition information to the client node.
  • after the cluster management node receives the partition application request, it can determine a suitable partition according to the load of each partition in the current system, and allocate, from the storage nodes included in the partition, as many storage nodes as the client node has replicas to store the replica data.
  • the determined partition information of the partition is fed back to the client node.
  • the partition information may include the partition identifier of the determined partition and the node identifier of the storage node in the partition allocated to the client node.
  • the cluster management node may directly send the identifiers of all storage nodes included in the partition to the client node.
  • the client node can select the same number of storage nodes as its replica data.
  • the number of storage nodes selected from the determined partition is three.
  • Step 503 The client node determines the routing information of the storage node corresponding to each replica data according to the partition information.
  • the client node may determine the routing information of the storage node corresponding to each copy data according to the partition identifier contained in the partition information and the determined storage node identifier corresponding to each copy data.
  • Step 504 The client node sends a write request to the storage node corresponding to each copy data according to the routing information of the storage node corresponding to each copy data, and the write request includes the corresponding copy data and address indication information.
  • the write request may be an append_replica_data request. For a specific description of the append request, refer to the related content in the foregoing embodiment.
  • the operation of the client node sending the write request to the storage node corresponding to each copy data can be executed concurrently.
  • Step 505 Each storage node obtains the target address of the memory area corresponding to the copy data according to the address indication information included in the received write request, and detects whether the memory area allows data to be written.
  • the storage node can refer to the implementation described in step 302 in the foregoing embodiment to obtain the memory area corresponding to the replica data according to the address indication information, and detect whether the memory area allows data to be written.
  • Step 506: If the memory area on each storage node allows data to be written, the corresponding replica data is written into the memory area in an append-write manner.
  • for the storage node corresponding to each piece of replica data, if the storage node determines that the memory area allows data to be written, the storage node can write the replica data included in the write request it received into the corresponding memory area with reference to step 303 in the foregoing embodiment. Details are not repeated here.
  • Step 507 Each storage node sets the read and write attributes of the target address of the memory area after data is written, so that the data in the memory area cannot be modified.
  • for the storage node corresponding to each piece of replica data, after the replica data is written into the corresponding memory area, the page table entry corresponding to the logical address of the memory area after the data is written can be set to a read-only attribute with reference to the related implementation described in step 304 in the foregoing embodiment.
  • Step 508 Each storage node sends a write success notification message to the client node.
  • for the storage node corresponding to each piece of replica data, after the page table entry corresponding to the logical address of the memory area after the data is written is set to a read-only attribute, the storage node can feed back a write success notification message to the client node to notify the client node that the corresponding replica data has been stored successfully.
  • the client node may send a write request to the storage node.
  • after receiving the write request, the storage node obtains the target address of the memory area corresponding to the data to be written according to the address indication information included in the write request, and writes the data to be written into the memory area in an append-write manner.
  • an embodiment of the present application provides a data access device 600, which includes:
  • the receiving module 601 is configured to perform step 301 in the foregoing embodiment; wherein, the receiving module 601 may be executed by an IO device or a processor in the storage node shown in FIG. 2.
  • the obtaining module 602 is configured to execute step 302 in the foregoing embodiment; wherein, the obtaining module 602 may be executed by the processor in the storage node shown in FIG. 2.
  • the writing module 603 is configured to perform step 303 or step 506 in the foregoing embodiment; the writing module 603 may be implemented by the CPU and MMU shown in FIG. 2, or by the IO device and IOMMU shown in FIG. 2.
  • the setting module 604 is configured to execute step 304 or step 507 in the foregoing embodiment; wherein, the setting module 604 can be executed by the processor shown in FIG. 2.
  • the setting module 604 is specifically configured to: set the read-write attribute of the page table entry corresponding to the target address as a read-only attribute.
  • the device further includes:
  • the determining module is used to determine that the read-write attribute of the page table entry corresponding to the target address is not a read-only attribute before the write module writes the data to be written into the memory area.
  • the determining module may be executed by the processor shown in FIG. 2.
  • the read and write attributes of the page table entries are stored in the memory management unit MMU.
  • the read and write attributes of the page table entries are stored in the input and output memory management unit IOMMU.
  • a write request is received, the target address of the memory area corresponding to the data to be written is obtained according to the address indication information included in the write request, and the data to be written is written into the memory area in an append-write manner.
  • after the data to be written is written into the memory area, the read/write attribute corresponding to the target address is set so that the data in the memory area cannot be modified, which effectively prevents illegal tampering with the data stored in the memory area and implements read-only protection of the data in the SCM medium.
  • when the data access device provided in the foregoing embodiment performs data access, the division into the functional modules above is merely used as an example for description.
  • in practice, the functions above may be allocated to different functional modules as required; that is, the internal structure of the device is divided into different functional modules to complete all or some of the functions described above.
  • the data access device provided in the foregoing embodiment and the data access method embodiment belong to the same concept, and the specific implementation process is detailed in the method embodiment, and will not be repeated here.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), a semiconductor medium (for example, a solid-state drive (SSD)), or the like.


Abstract

This application discloses a data access method and apparatus, belonging to the field of storage. In this application, a write request is received, the target address of the memory area corresponding to the data to be written is obtained according to the address indication information included in the write request, and the data to be written is written, in an append-write manner, into the memory area indicated by the target address. After the data to be written is written into the memory area, the read/write attribute corresponding to the target address is set so that the data in the memory area cannot be modified, which effectively prevents illegal tampering with the data stored in the memory area and implements read-only protection of the data in the SCM medium.

Description

Data Access Method and Apparatus. Technical Field
This application relates to the field of storage, and in particular, to a data access method and apparatus.
Background
A storage class memory (SCM) medium is a new medium form currently receiving attention in the industry, and is also usually referred to as a persistent memory medium or a non-volatile storage medium. SCM media feature non-volatility and fast access.
In the related art, when accessing an SCM medium, a central processing unit (CPU) usually performs access through local (in-place) updates. That is, for a memory area in the SCM medium that has been allocated and not yet reclaimed, the CPU can update the data stored in the memory area multiple times. In this case, the data in the SCM medium can easily be tampered with illegally, so that the data stored in the SCM medium is damaged.
Summary
This application provides a data access method and apparatus. The technical solutions are as follows:
According to a first aspect, a data access method is provided. The method includes: receiving a write request, where the write request includes data to be written and address indication information; obtaining, according to the address indication information, a target address of a memory area corresponding to the data to be written, where the memory area is space in a storage class memory (SCM); writing the data to be written, in an append-write manner, into the memory area indicated by the target address; and after writing the data to be written into the memory area, setting a read/write attribute corresponding to the target address so that the data in the memory area cannot be modified.
In the embodiments of this application, a write request is received, the target address of the memory area corresponding to the data to be written is obtained according to the address indication information included in the write request, and the data to be written is written into the memory area in an append-write manner. After the data to be written is written into the memory area, the read/write attribute corresponding to the target address is set so that the data in the memory area cannot be modified, which effectively prevents illegal tampering with the data stored in the memory area and implements read-only protection of the data in the SCM medium.
Optionally, the setting the read/write attribute corresponding to the target address includes: setting the read/write attribute of the page table entry corresponding to the target address to a read-only attribute.
Optionally, before the writing the data to be written into the memory area, the method further includes: determining that the read/write attribute of the page table entry corresponding to the target address is not a read-only attribute. When the read/write attribute of the page table entry corresponding to the target address is not read-only, the memory area indicated by the target address is not a read-only area and allows data to be written. In this case, the data to be written can be written into the memory area.
Optionally, the read/write attribute of the page table entry is stored in a memory management unit (MMU).
Optionally, the read/write attribute of the page table entry is stored in an input/output memory management unit (IOMMU).
A memory area of the SCM may be accessed through a CPU core and the MMU, or through an IO device and the IOMMU. Both the MMU and the IOMMU store a page table used to translate logical addresses into physical addresses.
According to a second aspect, a data access apparatus is provided. The data access apparatus has a function of implementing the behavior of the data access method in the first aspect. The data access apparatus includes at least one module, and the at least one module is configured to implement the data access method provided in the first aspect.
According to a third aspect, a data access apparatus is provided. A structure of the data access apparatus includes a processor and a memory. The memory is configured to store a program that supports the data access apparatus in performing the data access method provided in the first aspect, and to store data involved in implementing the data access method provided in the first aspect. The processor is configured to execute the program stored in the memory. The apparatus may further include a communication bus, and the communication bus is used to establish a connection between the processor and the memory.
According to a fourth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores instructions that, when run on a computer, cause the computer to perform the data access method according to the first aspect.
According to a fifth aspect, a computer program product including instructions is provided. When the computer program product runs on a computer, the computer is caused to perform the data access method according to the first aspect.
The technical effects obtained in the second, third, fourth, and fifth aspects are similar to those obtained by the corresponding technical means in the first aspect, and details are not described herein again.
The beneficial effects brought by the technical solutions provided in this application include at least the following:
In the embodiments of this application, a write request is received, the target address of the memory area corresponding to the data to be written is obtained according to the address indication information included in the write request, and the data to be written is written into the memory area in an append-write manner. After the data to be written is written into the memory area, the read/write attribute corresponding to the target address is set so that the data in the memory area cannot be modified, which effectively prevents illegal tampering with the data stored in the memory area and implements read-only protection of the data in the SCM medium.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a distributed storage system involved in a data access method according to an embodiment of this application;
FIG. 2 is a schematic structural diagram of a storage node according to an embodiment of this application;
FIG. 3 is a flowchart of a data access method according to an embodiment of this application;
FIG. 4 is a schematic diagram of the area to which data has been written and the area to which no data has been written in an allocated memory area after data is written into the allocated memory area according to an embodiment of this application;
FIG. 5 is a flowchart of another data access method according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of a data access apparatus according to an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the implementations of this application are further described in detail below with reference to the accompanying drawings.
Before the embodiments of this application are explained in detail, the implementation environment involved in the embodiments of this application is introduced.
FIG. 1 is a schematic diagram of a distributed storage system involved in a data access method according to an embodiment of this application. As shown in FIG. 1, the system includes a client node 101, a cluster management node 102, and multiple storage nodes 103.
The client node 101 can communicate with the cluster management node 102 and the multiple storage nodes 103, and the cluster management node 102 can also communicate with the multiple storage nodes 103.
It should be noted that the cluster management node 102 is configured to manage the multiple storage nodes 103. To ensure that data is stored evenly across the multiple storage nodes 103, a distributed hash table is usually used for routing when selecting storage nodes. In the distributed-hash-table manner, a hash ring can be evenly divided into several parts; each part is called a partition, and each partition corresponds to one or more storage nodes 103. The cluster management node 102 may maintain partition information of each partition in the system and may be used to manage the allocation of multiple partitions in the system. For example, after receiving a request, sent by the client node 101, for applying for partition information of a partition, the cluster management node 102 may allocate a suitable partition to the client node 101 according to the maintained load of each partition, and feed back the partition information of the allocated partition to the client node 101, so that the client node 101 can access data according to the partition information.
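As a rough illustration of this partition routing, the sketch below hashes a key onto an evenly divided ring and maps the resulting partition to a set of storage nodes. The FNV-1a hash, the partition count, and the consecutive node layout are illustrative assumptions, not details from this application.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical distributed-hash-table routing: the hash ring is evenly
 * divided into NUM_PARTITIONS parts, and each partition maps to a fixed
 * set of storage nodes. Hash choice and node layout are assumptions. */
#define NUM_PARTITIONS 8
#define REPLICAS 3

static uint32_t fnv1a(const char *key) {
    uint32_t h = 2166136261u;                 /* FNV offset basis */
    while (*key) { h ^= (uint8_t)*key++; h *= 16777619u; }
    return h;
}

/* Map a data key to a partition on the evenly divided hash ring. */
static int partition_of(const char *key) {
    return (int)(fnv1a(key) % NUM_PARTITIONS);
}

/* Pick REPLICAS storage-node ids for a partition (illustrative layout:
 * consecutive positions on the ring). */
static void nodes_of(int partition, int out[REPLICAS]) {
    for (int i = 0; i < REPLICAS; i++)
        out[i] = (partition + i) % NUM_PARTITIONS;
}
```

The same key always routes to the same partition, so the client can later read its data back from the same set of nodes.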
The client node 101 may be a client server. When performing data access, the client node 101 may interact with the cluster management node 102 to obtain partition information, and then communicate, according to the obtained partition information, with the storage nodes 103 corresponding to the partition to read and write data. For example, when writing data, the client node 101 may send, to the cluster management node 102, a request for applying for partition information of a partition. The cluster management node 102 may feed back, to the client node 101, the partition information of the partition allocated to it. After receiving the partition information, the client node 101 may write the data into the storage nodes corresponding to the partition indicated by the partition information.
Each storage node 103 can receive a data access request sent by the client node 101 and respond to the data access request of the client node 101 using the data access method provided in the embodiments of this application. The data access request may be an append write request, where append writing means that written data is organized in the chronological order of writing. In addition, after a process of a program writes data in this manner, the process will subsequently no longer perform write operations on the area where the data is located, and will only perform read operations.
FIG. 2 is a schematic structural diagram of a storage node according to an embodiment of this application. The storage nodes in the distributed storage system in FIG. 1 may be implemented by the storage node shown in FIG. 2. Referring to FIG. 2, the storage node includes at least one processor 201, a communication bus 202, a memory 203, an input/output (IO) device 204, and an input/output memory management unit (IOMMU) 205.
The processor 201 may be a general-purpose central processing unit (CPU). The processor 201 may include one or more CPU cores 2011 and one or more memory management units (MMUs) 2012.
Optionally, in a possible implementation, the MMU 2012 may also be independent of the CPU.
The communication bus 202 may include a path for transferring information between the foregoing components.
The memory 203 may be an SCM medium such as a phase-change memory (PCM), a resistive random access memory (ReRAM), or a magnetic random access memory (MRAM). The memory 203 may exist independently and be connected to the processor 201 through the communication bus 202, or the memory 203 may be integrated with the processor 201. The memory 203 is used to store persistent data.
It should be noted that the CPU core 2011 can access the memory 203 through the MMU 2012. For example, the CPU core 2011 may transmit a logical address to be accessed to the MMU 2012, and the MMU 2012 may translate the logical address into a physical address according to a stored first page table, and then access the memory 203 through the physical address. The first page table stores a mapping between logical addresses accessible to the CPU and physical addresses.
The input/output device 204 can access the memory 203 through the IOMMU 205 to read data from the memory 203.
It should be noted that the IO device 204 may send a logical address to be accessed to the IOMMU 205, and the IOMMU 205 may translate the received logical address into a physical address according to a stored second page table, and then access the memory 203 through the physical address. The second page table stores a mapping between the logical addresses accessible to input and output devices and physical addresses. The IO device 204 may be a remote direct memory access (RDMA) network interface card.
In a specific implementation, as an embodiment, the storage node may include multiple processors, such as the processor 201 and the processor 206 shown in FIG. 2. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to a processing core used to process data (for example, computer program instructions).
The storage node may be a general-purpose computer device or a dedicated computer device. In a specific implementation, the storage node may be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, or an embedded device. The type of the computer device is not limited in the embodiments of the present invention.
The memory 203 is further configured to store program code for executing the solutions of this application, and the execution is controlled by the processor 201. The processor 201 is configured to execute the program code stored in the memory 203. The program code may include one or more software modules. The storage nodes in the distributed storage system shown in FIG. 1 may implement data access through the processor 201 and one or more software modules in the program code in the memory 203.
The data access method provided in the embodiments of this application is described next.
FIG. 3 is a flowchart of a data access method according to an embodiment of this application. The method can be applied to the distributed storage scenario shown in FIG. 1 and is performed by the storage node shown in FIG. 2. Referring to FIG. 3, the method includes the following steps:
Step 301: Receive a write request, where the write request includes data to be written and address indication information.
In this embodiment of this application, the storage node receives a write request sent by the client node. The write request may carry the data to be written, that is, the data requested to be written into the storage node.
It should be noted that, depending on the client node, the write request may be a write request received through the IO device of the storage node, or a write request not received through the IO device.
In addition, an append write request is a write request sent in an append-only manner. Append-only means that written data is organized in the chronological order of writing. Moreover, after a process of a program writes data in this manner, the process will subsequently no longer perform write operations on the area where the data is located, and will only perform read operations.
Step 302: Obtain, according to the address indication information, the target address of the memory area corresponding to the data to be written, and detect whether the memory area allows data to be written.
After receiving the write request, the storage node may determine, according to the address indication information included in the write request, the target address of the memory area corresponding to the data to be written, and determine the read/write attribute of the page table entry corresponding to the target address. If the read/write attribute of the page table entry corresponding to the target address is not a read-only attribute, the storage node determines that the memory area allows data to be written.
In the embodiments of this application, the address indication information may include an offset and the data length of the data to be written. The offset indicates the length of the area to which data has currently been written within the memory area allocated to the program corresponding to the data to be written. The storage node may determine the start logical address of the memory area corresponding to the data to be written according to the offset, and may then determine the end logical address of the memory area according to the start logical address and the data length of the data to be written. All logical addresses from the start logical address to the end logical address of the memory area are used as the target address of the memory area.
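This offset-based target-address computation can be sketched as follows; the structure and field names are assumptions for illustration, not taken from this application.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of deriving the target address range from the address indication
 * information: the offset is the length already written in the allocated
 * area, so the next write starts right after it. Illustrative only. */
struct addr_hint {
    uint64_t offset;   /* bytes already written in the allocated area */
    uint64_t length;   /* length of the data to be written */
};

struct target_range {
    uint64_t start;    /* start logical address of the write */
    uint64_t end;      /* end logical address (exclusive here) */
};

static struct target_range
target_address(uint64_t area_base, const struct addr_hint *h) {
    struct target_range r;
    r.start = area_base + h->offset;   /* append right after written data */
    r.end   = r.start + h->length;
    return r;
}
```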
After the target address of the memory area is determined, if the write request is not a write request received through the IO device, the write request needs to access the SCM medium through a CPU core and the MMU. In this case, the storage node may send the target address of the memory area to the MMU. The MMU may obtain, according to the target address of the memory area, the read/write attribute of the page table entry corresponding to the target address from the stored first page table. If the read/write attribute of the page table entry corresponding to the target address in the first page table is not a read-only attribute, the memory area allows data to be written. Otherwise, it can be determined that the memory area does not allow data to be written.
The first page table stores a mapping between logical addresses in the address space of the CPU and physical addresses of allocated areas in the SCM. When a CPU core in the storage node accesses the SCM medium, the CPU core may transmit the logical address of the memory area to be accessed to the MMU, and the MMU may translate the logical address to be accessed into the corresponding physical address through the first page table.
For example, the first page table stored in the MMU may be as shown in Table 1, where the first page table is indexed by page number, and the page number is determined according to the logical address. For example, for a 16-bit logical address, the upper four bits of all possible 16-bit logical addresses may be used as the page numbers in the first page table, and each page number may indicate a logical address range. Each page number corresponds to one page table entry, and the page table entry corresponding to each page number includes a page frame number and a protection bit. The page frame number may indicate a physical address range. The protection bit may indicate the access mode of the corresponding physical addresses. When the protection bit is a first value, it indicates that the read/write attribute of the memory area corresponding to the physical address range indicated by the corresponding page frame number is read-only; when the protection bit is a second value, it indicates that the read/write attribute of the memory area corresponding to the physical address range indicated by the corresponding page frame number is not read-only, that is, data can be both read and written. The first value and the second value are different; as shown in Table 1, the first value may be 1 and the second value may be 0.
Table 1
Page number | Page frame number | Protection bit
0 | 010 | 0
1 | 001 | 1
2 | 110 | 0
3 | 000 | 0
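The first page table of Table 1 can be modeled as below; reading the page frame numbers in the table as binary values is an assumption made here for illustration. The toy lookup shows how a translation combines the page frame number with the in-page offset and reports the protection bit.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the first page table in Table 1: 16-bit logical addresses,
 * the upper four bits are the page number, and each entry holds a page
 * frame number plus a protection bit (1 = read-only). The frame values
 * mirror Table 1, interpreted as binary numbers (an assumption). */
#define PAGE_BITS 12               /* low 12 bits are the in-page offset */

struct pte { uint16_t frame; uint8_t protect; };

static struct pte first_page_table[16] = {
    [0] = {0x2, 0}, [1] = {0x1, 1}, [2] = {0x6, 0}, [3] = {0x0, 0},
};

/* Translate a logical address; *read_only reports the protection bit. */
static uint32_t translate(uint16_t logical, int *read_only) {
    uint16_t page = logical >> PAGE_BITS;
    struct pte e = first_page_table[page];
    *read_only = e.protect;
    return ((uint32_t)e.frame << PAGE_BITS) | (logical & 0x0FFFu);
}
```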
Optionally, if the write request is a write request received through the IO device, the SCM medium can be accessed through the IO device for this write request. In this case, the storage node may send the target address of the memory area to the IOMMU of the storage node. The IOMMU may obtain, according to the target address, the read/write attribute of the page table entry corresponding to the target address from the stored second page table. If the read/write attribute of the page table entry corresponding to the target address in the second page table is not a read-only attribute, the memory area allows data to be written. Otherwise, the memory area only allows reads and not writes; that is, the storage node can no longer write data into the memory area, and the memory area does not allow data to be written.
It should be noted that the second page table stores a mapping between logical addresses in the address space of the IO device and physical addresses of allocated areas in the SCM medium. When the IO device in the storage node accesses the SCM medium, the IO device may transmit the logical address of the memory area to be accessed to the IOMMU, and the IOMMU may translate the logical address into the corresponding physical address through the second page table.
Optionally, in a possible case, the storage node may not yet have allocated a memory area from the SCM medium for the program corresponding to the data to be written. In this case, the address indication information carried in the write request sent by the client node may include only the data amount of the data to be written. Based on this, after receiving the write request from the client node, the storage node may further detect, according to the write request, whether a memory area allocated for the data to be written exists in the SCM medium. If no memory area allocated for the data to be written exists in the SCM medium, before performing this step, the storage node may first allocate a memory area for the data to be written. When allocating the memory area for the data to be written, a corresponding memory area may be allocated to the application program corresponding to the data to be written; in this case, the space of the allocated memory area will be larger than the space occupied by the data to be written. After allocating the memory area, the storage node may obtain the length information of the area to which data has been written in the allocated memory area, that is, the offset. Because the memory area has just been allocated and no data has been written, the length information is 0. According to the length information, the storage node may use the start logical address of the allocated memory area as the start logical address of the memory area corresponding to the data to be written, and then determine the end logical address of the memory area corresponding to the data to be written according to the start logical address and the data length of the data to be written. All logical addresses between the start logical address and the end logical address are used as the target address of the memory area. Because the memory area was allocated only when the write request was received, no data was previously written into the memory area. In this case, the read/write attribute of the memory area will not be read-only, so the storage node does not need to detect whether the read/write attribute of the page table entry corresponding to the obtained target address is read-only, and can directly write the data to be written into the memory area.
Step 303: If the memory area allows data to be written, write the data to be written into the memory area in an append-write manner.
If it is determined that the memory area corresponding to the data to be written allows data to be written, the storage node may write the data to be written included in the write request into the memory area in an append manner.
For example, if the write request needs to access the SCM medium through the IO device, the storage node may send the target address of the memory area determined in the foregoing steps to the IOMMU, and the IOMMU may translate the target address of the memory area into the corresponding physical address and then, according to the physical address, write the data to be written into the memory area in an append-write manner. It should be noted that, as described above, the IOMMU stores the second page table, which stores the mapping between the logical addresses accessible to input/output devices and physical addresses. Based on this, the IOMMU can look up the physical address corresponding to the target address of the memory area in the second page table.
Optionally, if the write request needs to access the SCM medium through the CPU, the storage node may send the target address determined in the foregoing steps to the MMU. The MMU may translate the target address into the corresponding physical address and then, according to the physical address, write the data to be written into the memory area in an append-write manner. It should be noted that, as described above, the MMU stores the first page table, which stores the mapping between the logical addresses accessible to the CPU core and physical addresses. Based on this, the MMU can look up the physical address corresponding to the target address in the first page table.
After writing the data to be written into the memory area, the storage node may persist the written data. For example, the storage node may read the written data from the memory area and perform a cyclic redundancy check (CRC) on the read data. If the check succeeds, data persistence is completed.
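The read-back-and-verify step can be sketched as follows. The CRC-32 (IEEE) variant used here is an assumption; this application does not specify a particular CRC polynomial.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Bitwise CRC-32 (IEEE polynomial 0xEDB88320, reflected form). */
static uint32_t crc32_ieee(const void *buf, size_t len) {
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= p[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
    }
    return ~crc;
}

/* Returns 1 if the data read back from the memory area matches the checksum
 * computed when the data was written, i.e. persistence completed. */
static int persisted_ok(const void *area, size_t len, uint32_t expected_crc) {
    return crc32_ieee(area, len) == expected_crc;
}
```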
It should be noted that, because the data to be written is written into the memory area in an append-write manner in the embodiments of this application, when the process that wrote the data writes data again, the data will be written into another area, outside this memory area, to which no data has been written, and no further write operations will be performed on this memory area. However, the process can perform read operations on this memory area; that is, the process can read the data in the memory area. It can be seen that, after the data to be written is written into the memory area in this manner, the memory area naturally becomes a read-only area for the corresponding process.
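This append-only behavior can be sketched with a simple offset-tracking buffer; the structure below is an illustrative stand-in, far simpler than an SCM-backed implementation.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Minimal append-only area: each write lands right after the previously
 * written data, so already-written bytes are never overwritten by the
 * writing process. Illustrative only. */
struct append_area {
    char     buf[4096];
    uint64_t written;          /* the "offset": length already written */
};

/* Returns the start offset of the record, or -1 if the area is full. */
static int64_t append_write(struct append_area *a,
                            const void *data, uint64_t len) {
    if (a->written + len > sizeof(a->buf))
        return -1;
    memcpy(a->buf + a->written, data, len);   /* write past existing data */
    int64_t start = (int64_t)a->written;
    a->written += len;                        /* advance, never rewind */
    return start;
}
```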
FIG. 4 is a schematic diagram of the area to which data has been written and the area to which no data has been written in an allocated memory area after data is written into the allocated memory area according to an embodiment of this application. As shown in FIG. 4, memory areas R1-R7 are areas to which data has been written in the append-write manner. Because data has already been written in this manner, no data will subsequently be written into memory areas R1-R7, which thus form a read-only area. W8-W10 are areas to which no data has been written, and data will continue to be written into these areas subsequently.
Step 304: Set the read/write attribute corresponding to the target address of the memory area after the data is written, so that the data in the memory area cannot be modified.
After the data to be written is written into the memory area in the append-write manner, the process corresponding to the data to be written will no longer write data into the memory area. However, if the hardware of the storage node fails or a faulty program exists, other processes may perform write operations on the memory area. In this case, to prevent the data written into the memory area from being accidentally overwritten by a faulty program, in this embodiment of this application the page table entry corresponding to the target address of the memory area after the data is written may be set to a read-only attribute, so that the data in the memory area cannot be modified.
In a possible implementation, if the data to be written was written into the memory area through the CPU core and the MMU, the storage node may set the page table entry corresponding to the target address in the first page table stored by the MMU to a read-only attribute. Specifically, the storage node may set the read/write attribute of the page table entry corresponding to the target address in the first page table stored by the MMU to read-only through the system call syscall.
In this embodiment of this application, based on the first page table shown in Table 1, the storage node may obtain, through a system call, the first page table stored in the MMU. The storage node may then determine at least one page number according to the target address, look up the page table entry corresponding to each determined page number in the first page table, and set the protection bit, which indicates the read/write attribute of the page table entry, in the corresponding page table entry to the first value. The first value indicates that the memory area corresponding to the physical address range indicated by the page frame number in the corresponding page table entry is read-only.
It is worth noting that it is precisely because the embodiments of this application write the data to be written into the memory area in the append-write manner, so that the memory area forms a read-only area for the process corresponding to the data to be written, that the storage node can set the page table entry corresponding to the target address in the first page table to a read-only attribute. In the related art, because the CPU accesses the SCM medium through local (in-place) updates, after data is written into an area, the page table entry corresponding to the logical address of this area in the first page table cannot be set to a read-only attribute, and therefore the data in the area cannot be protected.
After the page table entry corresponding to the target address in the first page table is set to a read-only attribute, when the storage node subsequently receives a write request that accesses the memory area through the CPU, the MMU can detect whether the read/write attribute of the page table entry, in the first page table, corresponding to the logical address to be accessed by the received write request is read-only. If it is read-only, first exception information can be generated. The first exception information indicates that the write request is an exception request.
It should be noted that, for a memory area into which data has been written in the foregoing manner, because the page table entry corresponding to the logical address of the memory area in the first page table has already been set to a read-only attribute, if the storage node subsequently again receives a write request for these areas that have been set to read-only and the data needs to be written through the CPU, the storage node can send the logical address carried in the write request to the MMU. The MMU determines, according to the read/write attributes of the page table entries corresponding to the logical addresses of these areas in the first page table, that these areas are read-only areas. In this case, the MMU will generate an exception signal, and the storage node can determine, according to the exception signal, that the write request is an exception request. That is, access through the CPU to a read-only memory area into which data has been written is illegal access. In this case, the storage node may refuse to respond to the write request and generate exception information according to the exception signal produced by the MMU to indicate that the write request is an exception request. It can be seen that, by setting the page table entries in the first page table corresponding to the memory areas after data is written to a read-only attribute, the data in these memory areas is prevented from being accidentally overwritten by other programs, thereby implementing read-only protection of the data in these memory areas.
In addition, the first exception information may include information indicating that the write request is an exception request, and may further include process information of the write request. The process information is information indicating the program corresponding to the write request. In this way, which program attempted to rewrite the read-only memory area into which data has been written can subsequently be determined according to the first exception information, which makes it convenient for the user to locate the faulty program.
Optionally, in another possible implementation, if the data to be written was written into the memory area through the IO device, the storage node may set the page table entry corresponding to the target address in the second page table stored in the IOMMU to a read-only attribute. Specifically, the storage node may set the read/write attribute of the page table entry corresponding to the target address in the second page table stored by the IOMMU to read-only through syscall.
After the page table entry corresponding to the target address in the second page table is set to a read-only attribute, when the storage node subsequently again receives a write request that accesses the memory area through the IO device, the IOMMU can detect whether the read/write attribute of the page table entry, in the second page table, corresponding to the logical address of the area to be accessed by the write request is read-only. If it is read-only, second exception information is generated. The second exception information indicates that the write request is an exception request, and includes process information of the write request.
It should be noted that, for a memory area into which data has been written in the foregoing manner, because the page table entry corresponding to the logical address of the memory area in the second page table has already been set to a read-only attribute, if a write request for these areas that have been set to read-only is subsequently received again and the data needs to be written through the IO device, the storage node may send the logical addresses of these areas targeted by the write request to the IOMMU. According to the second page table, the IOMMU will detect that the read/write attribute of the page table entries corresponding to these logical addresses is read-only. In this case, the IOMMU will generate an exception signal, and the storage node can determine, according to the exception signal, that the write request is an exception request. That is, access through the IO device to a read-only memory area into which data has been written is illegal access. In this case, the storage node may refuse to respond to the write request and generate the second exception information according to the exception signal of the IOMMU to indicate that the write request is an exception request. It can be seen that, by setting the page table entries in the second page table corresponding to the memory areas after data is written to a read-only attribute, the data in these memory areas is prevented from being rewritten again by the IO device, thereby implementing read-only protection of the data in these memory areas. In this way, even if the IO device fails, the data in these memory areas will not be damaged; moreover, compared with the related art in which the IO device accesses the SCM with physical addresses through direct memory access (DMA), malicious DMA attacks can be avoided.
It is worth noting that, in the embodiments of this application, after writing the data to be written into the memory area, the storage node may set the page table entry corresponding to the target address in the first page table to a read-only attribute, or set the page table entry corresponding to the target address in the second page table to a read-only attribute, or set the page table entries corresponding to the target address in both the first page table and the second page table to a read-only attribute at the same time.
In the embodiments of this application, a write request is received, the target address of the memory area corresponding to the data to be written is obtained according to the address indication information included in the write request, and the data to be written is written into the memory area in an append-write manner. After the data to be written is written into the memory area, the read/write attribute corresponding to the target address is set so that the data in the memory area cannot be modified, which effectively prevents illegal tampering with the data stored in the memory area and implements read-only protection of the data in the SCM medium.
The foregoing embodiment mainly describes a specific implementation of data access by a storage node. It can be seen from the foregoing embodiment that, when writing data, a client node may access the SCM medium through the CPU of a storage node, or through an IO device in the storage node. Next, the implementation process in which a client node writes data to storage nodes in the distributed storage system shown in FIG. 1 is described with reference to FIG. 5.
FIG. 5 is a flowchart of a data access method according to an embodiment of this application. The method can be applied to the distributed storage system shown in FIG. 1. Referring to FIG. 5, the method includes the following steps:
Step 501: The client node sends a partition application request to the cluster management node, where the partition application request is used to apply to the cluster management node for a partition for storing multiple pieces of replica data.
It should be noted that, in a distributed storage system, a replica mechanism is usually used to ensure the reliability of stored data. When a client node wants to store data into storage nodes, the client node can generate multiple pieces of replica data, where each replica contains the same data. The client node may then send a partition application request to the cluster management node to apply for a partition. In the embodiments of this application, three pieces of replica data are used as an example for description.
Step 502: The cluster management node feeds back partition information to the client node.
After receiving the partition application request, the cluster management node can determine a suitable partition according to the load of each partition in the current system, allocate, from the storage nodes included in the partition, as many storage nodes as the client node has replicas to store the replica data, and feed back the partition information of the determined partition to the client node. The partition information may include the partition identifier of the determined partition and the node identifiers of the storage nodes, within the partition, allocated to the client node.
Optionally, in a possible implementation, after determining a suitable partition, the cluster management node may directly send the identifiers of all storage nodes included in the partition to the client node. In this case, the client node can select from them the same number of storage nodes as it has replicas.
For example, when there are three pieces of replica data, three storage nodes are selected from the determined partition.
Step 503: The client node determines, according to the partition information, the routing information of the storage node corresponding to each piece of replica data.
After receiving the partition information, the client node may determine the routing information of the storage node corresponding to each piece of replica data according to the partition identifier contained in the partition information and the determined identifier of the storage node corresponding to each piece of replica data.
Step 504: The client node sends, according to the routing information of the storage node corresponding to each piece of replica data, a write request to the storage node corresponding to each piece of replica data, where the write request includes the corresponding replica data and address indication information.
The write request may be an append_replica_data request. For a specific description of the append request, refer to the related content in the foregoing embodiment.
In addition, in this embodiment of this application, the operations of the client node sending the write requests to the storage nodes corresponding to the pieces of replica data may be performed concurrently.
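The concurrent sending of replica write requests can be sketched with one thread per storage node; the request type and the stand-in send function are assumptions for illustration, not this application's actual message format.

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

/* One write request per replica, sent in parallel. */
#define REPLICAS 3

struct replica_req {
    int  node_id;
    char data[32];
    int  write_ok;            /* set by the "storage node" on success */
};

static void *send_write(void *arg) {
    struct replica_req *r = arg;
    /* Stand-in for routing the request to storage node r->node_id and
     * waiting for its write-success notification message. */
    r->write_ok = 1;
    return NULL;
}

/* Send all replica writes concurrently; returns 1 only if every replica
 * reported success. */
static int write_all_replicas(struct replica_req reqs[REPLICAS]) {
    pthread_t tid[REPLICAS];
    for (int i = 0; i < REPLICAS; i++)
        pthread_create(&tid[i], NULL, send_write, &reqs[i]);
    int ok = 1;
    for (int i = 0; i < REPLICAS; i++) {
        pthread_join(tid[i], NULL);
        ok = ok && reqs[i].write_ok;
    }
    return ok;
}
```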
Step 505: Each storage node obtains, according to the address indication information included in the received write request, the target address of the memory area corresponding to the replica data, and detects whether the memory area allows data to be written.
For the storage node corresponding to each piece of replica data, the storage node can obtain the memory area corresponding to the replica data according to the address indication information, and detect whether the memory area allows data to be written, with reference to the implementation described in step 302 in the foregoing embodiment.
Step 506: If the memory area on each storage node allows data to be written, the corresponding replica data is written into the memory area in an append-write manner.
For the storage node corresponding to each piece of replica data, if the storage node determines that the memory area allows data to be written, the storage node can write the replica data included in the write request it received into the corresponding memory area with reference to step 303 in the foregoing embodiment. Details are not repeated here in this embodiment of this application.
Step 507: Each storage node sets the read/write attribute of the target address of the memory area after the data is written, so that the data in the memory area cannot be modified.
For the storage node corresponding to each piece of replica data, after the replica data is written into the corresponding memory area, the page table entry corresponding to the logical address of the memory area after the data is written can be set to a read-only attribute with reference to the related implementation described in step 304 in the foregoing embodiment.
Step 508: Each storage node sends a write success notification message to the client node.
For the storage node corresponding to each piece of replica data, after the page table entry corresponding to the logical address of the memory area after the data is written is set to a read-only attribute, the storage node can feed back a write success notification message to the client node to notify the client node that the corresponding replica data has been stored successfully.
In this embodiment of this application, the client node can send a write request to a storage node. After receiving the write request, the storage node obtains the target address of the memory area corresponding to the data to be written according to the address indication information included in the write request, and writes the data to be written into the memory area in an append-write manner. After the data is written, the read/write attribute corresponding to the target address can be set so that the data in the memory area cannot be modified, which effectively prevents illegal tampering with the data stored in the memory area and implements read-only protection of the data in the SCM medium.
接下来对本申请实施例提供的数据访问装置进行介绍。
参见图6,本申请实施例提供了一种数据访问装置600,该装置600包括:
接收模块601,用于执行上述实施例中的步骤301;其中,该接收模块601可以由图2所示的存储节点中的IO设备或者是处理器来执行。
获取模块602,用于执行上述实施例中的步骤302;其中,该获取模块602可以由图2 所示的存储节点中的处理器来执行。
写入模块603,用于执行上述实施例中的步骤303或步骤506;其中,该写入模块603可以由图2所示的CPU和MMU来执行,或者是该写入模块603可以由图2所示的IO设备和IOMMU来执行。
设置模块604,用于执行上述实施例中的步骤304或步骤507;其中,该设置模块604可以图2所示的处理器来执行。
Optionally, the setting module 604 is specifically configured to set the read-write attribute of the page table entry corresponding to the target address to a read-only attribute.
Optionally, the apparatus further includes:
a determining module, configured to determine, before the writing module writes the to-be-written data into the memory area, that the read-write attribute of the page table entry corresponding to the target address is not the read-only attribute, where the determining module may be implemented by the processor shown in FIG. 2.
Optionally, the read-write attribute of the page table entry is stored in a memory management unit (MMU).
Optionally, the read-write attribute of the page table entry is stored in an input-output memory management unit (IOMMU).
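As an illustrative model only (names and page size are assumptions, not from the disclosure), the per-page read-write attribute consulted before a write — whether it is kept in the MMU on the CPU path or in the IOMMU on the IO path — can be sketched as a mapping from page number to attribute:

```python
PAGE_SIZE = 4096   # assumed page size
page_table = {}    # page number -> read-write attribute ("rw" or "ro")

def write_allowed(target_address):
    # Before writing, determine that the page table entry for the
    # target address does not carry the read-only attribute.
    return page_table.get(target_address // PAGE_SIZE, "rw") != "ro"

def set_read_only(target_address):
    # After the write completes, set the entry to the read-only
    # attribute so data at that address can no longer be modified.
    page_table[target_address // PAGE_SIZE] = "ro"

assert write_allowed(0x2000)   # a fresh page defaults to writable
set_read_only(0x2000)
```

The same check-then-seal sequence applies regardless of whether the CPU (via the MMU) or an IO device (via the IOMMU) issues the write.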
In summary, in the embodiments of this application, a write request is received; the target address of the memory area corresponding to the to-be-written data is obtained based on the address indication information included in the write request; and the to-be-written data is written into the memory area in an append-write manner. After the to-be-written data is written into the memory area, the read-write attribute corresponding to the target address is set so that the data in the memory area cannot be modified, which effectively prevents illegal tampering with the data stored in the memory area and implements read-only protection for the data in the SCM medium.
It should be noted that when the data access apparatus provided in the foregoing embodiments performs data access, the division into the foregoing functional modules is merely used as an example for description. In actual application, the foregoing functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to complete all or some of the functions described above. In addition, the data access apparatus provided in the foregoing embodiments and the data access method embodiments belong to the same concept. For the specific implementation process, refer to the method embodiments; details are not described herein again.
All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used for implementation, all or some of the embodiments may be implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any available medium accessible to the computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), a semiconductor medium (for example, a solid-state drive (SSD)), or the like.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware or by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are embodiments provided in this application and are not intended to limit this application. Any modification, equivalent replacement, or improvement made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (10)

  1. A data access method, wherein the method comprises:
    receiving a write request, wherein the write request comprises to-be-written data and address indication information;
    obtaining, based on the address indication information, a target address of a memory area corresponding to the to-be-written data, wherein the memory area is space in a storage class memory (SCM);
    writing the to-be-written data into the memory area indicated by the target address in an append-write manner; and
    after the to-be-written data is written into the memory area, setting a read-write attribute corresponding to the target address, so that data in the memory area cannot be modified.
  2. The method according to claim 1, wherein the setting a read-write attribute corresponding to the target address comprises:
    setting a read-write attribute of a page table entry corresponding to the target address to a read-only attribute.
  3. The method according to claim 2, wherein before the writing the to-be-written data into the memory area, the method further comprises:
    determining that the read-write attribute of the page table entry corresponding to the target address is not the read-only attribute.
  4. The method according to claim 2, wherein the read-write attribute of the page table entry is stored in a memory management unit (MMU).
  5. The method according to claim 2, wherein the read-write attribute of the page table entry is stored in an input-output memory management unit (IOMMU).
  6. A data access apparatus, wherein the apparatus comprises:
    a receiving module, configured to receive a write request, wherein the write request comprises to-be-written data and address indication information;
    an obtaining module, configured to obtain, based on the address indication information, a target address of a memory area corresponding to the to-be-written data, wherein the memory area is space in a storage class memory (SCM);
    a writing module, configured to write the to-be-written data into the memory area indicated by the target address in an append-write manner; and
    a setting module, configured to: after the to-be-written data is written into the memory area, set a read-write attribute corresponding to the target address, so that data in the memory area cannot be modified.
  7. The apparatus according to claim 6, wherein the setting module is specifically configured to:
    set a read-write attribute of a page table entry corresponding to the target address to a read-only attribute.
  8. The apparatus according to claim 7, wherein the apparatus further comprises:
    a determining module, configured to: before the writing module writes the to-be-written data into the memory area, determine that the read-write attribute of the page table entry corresponding to the target address is not the read-only attribute.
  9. The apparatus according to claim 7, wherein the read-write attribute of the page table entry is stored in a memory management unit (MMU).
  10. The apparatus according to claim 7, wherein the read-write attribute of the page table entry is stored in an input-output memory management unit (IOMMU).
PCT/CN2020/094027 2019-06-19 2020-06-02 Data access method and apparatus WO2020253523A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20825480.5A EP3964996A4 (en) 2019-06-19 2020-06-02 Database access method and device
US17/554,843 US20220107752A1 (en) 2019-06-19 2021-12-17 Data access method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910533925.5A 2019-06-19 Data access method and apparatus
CN201910533925.5 2019-06-19

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/554,843 Continuation US20220107752A1 (en) 2019-06-19 2021-12-17 Data access method and apparatus

Publications (1)

Publication Number Publication Date
WO2020253523A1 true WO2020253523A1 (zh) 2020-12-24

Family

ID=73795665

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/094027 WO2020253523A1 (zh) 2019-06-19 2020-06-02 数据访问方法及装置

Country Status (4)

Country Link
US (1) US20220107752A1 (zh)
EP (1) EP3964996A4 (zh)
CN (1) CN112115521B (zh)
WO (1) WO2020253523A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112948863B (zh) * 2021-03-15 2022-07-29 Tsinghua University Method and apparatus for reading sensitive data, electronic device, and storage medium
CN115525933B (zh) * 2022-08-26 2023-05-12 Hangzhou Jiefeng Technology Co., Ltd. Data tamper-proofing method and apparatus, electronic device, and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN105404596A (zh) * 2015-10-30 2016-03-16 Huawei Technologies Co., Ltd. Data transmission method, apparatus, and system
US20170103234A1 (en) * 2014-12-31 2017-04-13 Google Inc. Trusted computing
CN107203330A (zh) * 2016-03-17 2017-09-26 Beijing Memblaze Technology Co., Ltd. Flash memory data distribution method oriented to read-write data streams
CN108628542A (zh) * 2017-03-22 2018-10-09 Huawei Technologies Co., Ltd. File merging method and controller

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US7493454B2 (en) * 2004-10-29 2009-02-17 International Business Machines Corporation Method for achieving reliable worm storage using WMRM storage
US8275927B2 (en) * 2007-12-31 2012-09-25 Sandisk 3D Llc Storage sub-system for a computer comprising write-once memory devices and write-many memory devices and related method
US9558351B2 (en) * 2012-05-22 2017-01-31 Xockets, Inc. Processing structured and unstructured data using offload processors
US8938602B2 (en) * 2012-08-02 2015-01-20 Qualcomm Incorporated Multiple sets of attribute fields within a single page table entry
US9502102B1 (en) * 2014-07-25 2016-11-22 Crossbar, Inc. MLC OTP operation with diode behavior in ZnO RRAM devices for 3D memory
US10795679B2 (en) * 2018-06-07 2020-10-06 Red Hat, Inc. Memory access instructions that include permission values for additional protection
US10969980B2 (en) * 2019-03-28 2021-04-06 Intel Corporation Enforcing unique page table permissions with shared page tables
US11016905B1 (en) * 2019-11-13 2021-05-25 Western Digital Technologies, Inc. Storage class memory access


Non-Patent Citations (1)

Title
See also references of EP3964996A4 *

Also Published As

Publication number Publication date
EP3964996A4 (en) 2022-06-29
CN112115521B (zh) 2023-02-07
US20220107752A1 (en) 2022-04-07
EP3964996A1 (en) 2022-03-09
CN112115521A (zh) 2020-12-22

Similar Documents

Publication Publication Date Title
CN110377436B Data storage access method, device, and apparatus for persistent memory
US11074015B2 (en) Memory system and method for controlling nonvolatile memory by a host
US8850158B2 (en) Apparatus for processing remote page fault and method thereof
US8347050B2 (en) Append-based shared persistent storage
TW201915741A Memory system and control method for controlling non-volatile memory
US20220107752A1 (en) Data access method and apparatus
US12045514B2 (en) Method of controlling nonvolatile memory by managing block groups
CN110554911A Memory access and allocation method, storage controller, and system
US10241934B2 (en) Shared memory controller, shared memory module, and memory sharing system
US9158690B2 (en) Performing zero-copy sends in a networked file system with cryptographic signing
JPWO2014188682A1 Storage node, storage node management device, storage node logical capacity setting method, program, recording medium, and distributed data storage system
JP2020123040A Memory system and control method
CN115470156A RDMA-based memory usage method and system, electronic device, and storage medium
WO2020029588A1 Data reading method, apparatus, and system, and distributed system
WO2019140885A1 Directory processing method and apparatus, and storage system
US11734197B2 (en) Methods and systems for resilient encryption of data in memory
US10678453B2 (en) Method and device for checking false sharing in data block deletion using a mapping pointer and weight bits
WO2021238594A1 Storage medium management method, apparatus, and device, and computer-readable storage medium
CN107305582B Metadata processing method and apparatus
WO2024113844A1 Memory access method and related apparatus
US11914865B2 (en) Methods and systems for limiting data traffic while processing computer system operations
WO2024193272A1 Data sharing method, apparatus, and device
CN115113798B Data migration method, system, and device applied to distributed storage
WO2023236629A1 Data access method and apparatus, storage system, and storage medium
US20230333769A1 (en) Shared memory protection method for securing mmio commands

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20825480

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020825480

Country of ref document: EP

Effective date: 20211201

NENP Non-entry into the national phase

Ref country code: DE