WO2018107772A1 - Write Request Processing Method, Apparatus and Device - Google Patents

Write Request Processing Method, Apparatus and Device

Info

Publication number
WO2018107772A1
Authority
WO
WIPO (PCT)
Prior art keywords
storage device
terminal
target service
write request
request
Prior art date
Application number
PCT/CN2017/096052
Other languages
English (en)
French (fr)
Inventor
Zeng Yongqiang (曾永强)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2018107772A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1446 Point-in-time backing up or restoration of persistent data
    • G06F11/1458 Management of the backup or restore process
    • G06F11/1469 Backup restoration techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/062 Securing storage systems
    • G06F3/0622 Securing storage systems in relation to access
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/065 Replication mechanisms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0683 Plurality of storage devices

Definitions

  • the present application relates to the field of data storage, and in particular, to a write request processing method, apparatus, and device.
  • In the related art, an active-active data center processes a write request as follows:
  • The active-active data center provides service processing based on the same set of logical addresses, and a lock server is set in the primary data center.
  • When a terminal in the primary data center or the backup data center initiates a write to a logical address, it first sends a lock request to the lock server to request locking of that logical address. If the logical address is currently unlocked, the lock server locks the logical address and returns a lock notification to the terminal. After receiving the lock notification, the terminal sends a write request to the storage devices of the two data centers, and the storage devices of the two data centers each write the same data at the requested logical address.
  • The primary data center and the backup data center are usually geographically separated. In the above solution, before each write, a terminal in the backup data center must first send a lock request to the lock server and wait for the returned lock notification. Because the lock request and the lock notification are transmitted across data centers, the delay between sending the lock request and receiving the lock notification is high for terminals in the backup data center, which significantly degrades the processing performance of the backup data center.
  • the embodiment of the present application provides a write request processing method, apparatus, and device.
  • a write request processing method is provided.
  • The method is used in a storage system including a first data center and a second data center, where the first data center includes a first storage device and the second data center includes a second storage device; the first data center and/or the second data center further includes at least two terminals. The method is performed by a first terminal of the at least two terminals, and the method comprises:
  • The first terminal sends a write request to the first storage device and the second storage device, where the write request is used to instruct the first storage device and the second storage device to write the same data at the same logical address of their respective storage spaces. The first terminal receives a reject response returned by a storage device, where the storage device is the first storage device or the second storage device, and the reject response indicates that all or part of the addresses in the logical address corresponding to the write request are locked or about to be locked. The first terminal forwards the write request to the second terminal that triggered locking of the all or part of the addresses, so that the second terminal, after the lock on the all or part of the addresses is released, sends the write request to the first storage device and the second storage device respectively.
  • In the method provided by this solution, when the first terminal sends a write request to the first storage device and the second storage device simultaneously, if a second terminal has triggered or is about to trigger locking of the logical address to be written, the first terminal forwards the write request to the second terminal; after the second terminal releases the lock on the logical address corresponding to the write request, the second terminal sends the write request to the first storage device and the second storage device for processing.
  • As a result, a terminal in either data center does not need to request an address lock across data centers before sending a write request.
  • When a terminal in the backup data center sends a write request, there is no delay between sending a lock request and receiving a lock notification, which greatly improves the processing performance of the backup data center while avoiding conflicts between the address of the write request and the addresses of other write requests or data synchronization requests.
  • Optionally, the first terminal forwarding the write request to the second terminal includes: the first terminal acquiring the identifier of the second terminal carried in the reject response, and the first terminal forwarding the write request to the second terminal according to the identifier of the second terminal.
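The first terminal's flow described above (send to both storage devices; on a reject, forward to the locking terminal named in the response) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the response fields (`status`, `locking_terminal_id`) and the `forward` callback are hypothetical names introduced here.

```python
# Hypothetical sketch of the first terminal's logic: send the write request to
# both storage devices; if either returns a reject response, forward the
# request to the terminal that triggered the lock. Field names are invented.

def route_write_request(responses, write_request, forward):
    """responses: the two storage devices' replies to the write request.
    forward(terminal_id, request): forwards the request to another terminal."""
    for resp in responses:
        if resp.get("status") == "rejected":
            # The reject response carries the identifier of the second
            # terminal that locked (or is about to lock) the address.
            forward(resp["locking_terminal_id"], write_request)
            return "forwarded"
    return "written"  # no conflict reported by either storage device
```

In a real system the second terminal would re-send the write request to both storage devices once its lock is released; that step is omitted here.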
  • a write request processing method is provided.
  • The method is used in a storage system including a first data center and a second data center, where the first data center includes a first storage device and the second data center includes a second storage device; the first data center and/or the second data center further includes at least two terminals. The method is performed by either of the first storage device and the second storage device, and the method comprises:
  • The storage device receives a write request sent by a first terminal of the at least two terminals; when all or part of the addresses in the logical address corresponding to the write request are locked or about to be locked, the storage device sends a reject response to the first terminal, causing the first terminal to forward the write request to a second terminal, where the second terminal is the terminal among the at least two terminals that triggered locking of the all or part of the addresses; the storage device receives the write request sent by the second terminal after the lock on the all or part of the addresses is released.
  • Optionally, the storage device sending a reject response to the first terminal when all or part of the addresses in the logical address corresponding to the write request are locked or about to be locked includes: the storage device detecting whether the addresses already locked in its storage space include the all or part of the addresses in the logical address; if the detection result is that the locked addresses include the all or part of the addresses, the storage device sends the reject response to the first terminal.
  • By checking whether the already-locked addresses corresponding to the storage space of the storage device contain the all or part of the addresses in the logical address, this solution establishes that the write request conflicts with a currently locked address, so that the storage device can explicitly identify a conflicting write request and generate a reject response for it.
  • Optionally, the storage device sending a reject response to the first terminal further includes: if the detection result is that the locked addresses do not include the all or part of the addresses, the storage device detects whether a target service request sent by the second terminal has been received, where the logical address corresponding to the target service request and the logical address corresponding to the write request include the same address; if the target service request has been received, the storage device acquires the priorities of the write request and the target service request; when the priority of the target service request is higher than the priority of the write request, the storage device sends the reject response to the first terminal.
  • By detecting whether the logical address corresponding to the target service request and the logical address corresponding to the write request include the same address, and, if so, determining by priority which request to execute first, the backup data center in a distributed active-active storage system can flexibly execute the target service request with relatively higher priority according to different priority determination manners, on the basis of reduced delay, thereby improving the system's ability to preferentially respond to more important tasks.
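The reject decision just described (locked-address check first, then the priority comparison against a pending target service request) can be illustrated with the following Python sketch. The data shapes are assumptions introduced here, not specified by the patent: inclusive `(start, end)` LBA tuples, a dict of locked ranges, and numeric priorities.

```python
# Hypothetical sketch of the storage device's reject decision. Address ranges
# are inclusive (start_lba, end_lba) tuples; all data shapes are invented.

def decide_on_write(write_range, write_priority, locked_ranges, pending_target=None):
    """Return ("reject", locking_terminal_id) on a conflict, else ("accept", None).

    locked_ranges: {(start, end): terminal_id} for addresses already locked.
    pending_target: optional {"range", "terminal_id", "priority"} describing a
    target service request that has been received but has not locked yet."""

    def overlaps(a, b):
        # Two inclusive ranges share at least one address.
        return a[0] <= b[1] and b[0] <= a[1]

    # Case 1: the write request hits an address that is already locked.
    for rng, locker in locked_ranges.items():
        if overlaps(write_range, rng):
            return ("reject", locker)

    # Case 2: an overlapping target service request with higher priority is
    # about to lock the address, so the write is rejected pre-emptively.
    if (pending_target
            and overlaps(write_range, pending_target["range"])
            and pending_target["priority"] > write_priority):
        return ("reject", pending_target["terminal_id"])

    return ("accept", None)
```

The returned terminal identifier is what the reject response would carry back to the first terminal so that it knows where to forward the write request.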
  • Optionally, the target service request is used to request that data be written at the logical address corresponding to the target service request, and the logical address corresponding to the target service request includes the all or part of the addresses; or, the target service request is used to request that the data at the logical address corresponding to the target service request be synchronized from one of the first storage device and the second storage device to the other, and the logical address corresponding to the target service request includes the all or part of the addresses.
  • Optionally, the storage device processes the target service request and, in the process of processing the target service request, locks, in the storage space of the storage device, the logical address corresponding to the target service request.
  • the storage device locks the logical address corresponding to the service request when processing the service request, so that the service request can be correctly executed without being interfered by other operations.
  • Optionally, the storage device sends the target service request and lock indication information to the corresponding backup storage device, where the lock indication information is used to instruct the backup storage device to lock the corresponding logical address in the storage space of the backup storage device while processing the target service request.
  • the storage device sends the target service request and the lock indication information to the corresponding backup storage device, so that when the storage device fails, the backup storage device can accurately inherit the lock on the logical address.
  • Optionally, the storage device sending a reject response to the first terminal includes: the storage device returning, to the first terminal, the reject response carrying the identifier of the second terminal.
  • A terminal is provided, comprising a processor and a communication interface, the communication interface being controlled by the processor; the processor is configured to implement the write request processing method provided by the first aspect or any alternative of the first aspect.
  • A storage device is provided, comprising a processor and a communication interface, the communication interface being controlled by the processor; the processor is configured to implement the write request processing method provided by the second aspect or any alternative of the second aspect.
  • A write request processing apparatus is provided, comprising at least one unit configured to implement the write request processing method provided by the first aspect or any alternative of the first aspect, or configured to implement the write request processing method provided by the second aspect or any alternative of the second aspect.
  • A computer readable storage medium is provided, storing an executable program for implementing the write request processing method provided by the first aspect or any alternative of the first aspect, or storing an executable program for implementing the write request processing method provided by the second aspect or any alternative of the second aspect.
  • FIG. 1 is a block diagram of a storage system according to the present application.
  • FIG. 2 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a storage device according to an exemplary embodiment of the present application;
  • FIG. 4 is a flowchart of a write request processing method provided by an exemplary embodiment of the present application.
  • FIG. 5 is a schematic diagram of logical address conflicts involved in the embodiment shown in FIG. 4;
  • FIG. 6 is a schematic diagram of a write request process according to the embodiment shown in FIG. 4;
  • FIG. 7 is a schematic diagram of another write request processing according to the embodiment shown in FIG. 4;
  • FIG. 8 is a block diagram showing the structure of a write request processing apparatus according to an exemplary embodiment of the present application.
  • FIG. 9 is a block diagram showing the structure of a write request processing apparatus according to an exemplary embodiment of the present application.
  • FIG. 1 is a structural diagram of a storage system according to an embodiment of the present application.
  • the storage system includes a first data center 100 and a second data center 110, and the first data center 100 and the second data center 110 form a dual active data center.
  • the first data center 100 and the second data center 110 in the embodiment of the present application may be distinguished according to actual geographic regions where the data center is located, such as a Beijing data center and a Shanghai data center.
  • The first data center 100 communicates with the second data center 110 through a network interface 106 and a network interface 116. The network interface is a wired network jack, and the connection medium may be an optical fiber, a twisted pair cable, or a coaxial cable.
  • Each data center may include physical servers, which provide the storage, computing, and network resources of the first data center 100 or the second data center 110. The embodiments of the present application are primarily implemented by using the storage functions of these physical servers.
  • A physical server used to provide computing and network resources may be referred to as a computing node. In a data center, several computing nodes may form a computing node cluster.
  • A physical server used to provide storage resources may be referred to as a storage device, and a plurality of storage devices may constitute a storage node cluster.
  • In FIG. 1, the first data center 100 includes a first storage device 102a and a third storage device 102b.
  • The second data center 110 includes a second storage device 112a and a fourth storage device 112b.
  • In each data center, one storage device may be selected as the primary storage device, and one or more other storage devices may be set to save the same data as the primary storage device, that is, to serve as copies of the primary storage device; a storage device serving as a backup copy may be referred to as a backup storage device.
  • For example, the third storage device 102b in FIG. 1 is a copy of the first storage device 102a, and the fourth storage device 112b is a copy of the second storage device 112a.
  • the first data center 100 and the second data center 110 also include at least two terminals.
  • the at least two terminals may be disposed in the first data center 100; or the at least two terminals may be disposed in the second data center 110; At least one terminal is disposed in each of the first data center 100 and the second data center 110.
  • In FIG. 1, the first data center 100 includes at least one terminal, including the terminal 104, and the second data center 110 includes at least one terminal, including the terminal 114.
  • Both the terminal 104 and the terminal 114 can initiate a service request including a write request, a read request, or a data synchronization request to the storage device in the first data center 100 or the second data center 110.
  • The terminal 104 can perform cross-data-center communication with the second data center 110 through the network interface 106, and can also perform intra-data-center communication with the storage devices in the first data center 100 (such as the first storage device 102a) through data cables inside the first data center 100.
  • Similarly, the terminal 114 can perform cross-data-center communication with the first data center 100 through the network interface 116, and can also perform intra-data-center communication with the storage devices in the second data center 110 (such as the second storage device 112a) through data cables inside the second data center 110.
  • The terminal 104, the terminal 114, and the other terminals running in the two data centers may be actual hardware devices, such as general-purpose computer devices, or may be virtual software applications, such as clients running on a computer device; such a client may be a virtual machine, virtual machine software, or database software.
  • In the embodiment of the present application, the first storage device 102a and the second storage device 112a may be configured as a virtual storage unit spanning the two data centers, and this storage unit is referred to as a dual-active volume.
  • When data is written to the dual-active volume, the first storage device 102a and the second storage device 112a each write the same data locally according to the same logical address.
  • Each storage device in FIG. 1 may include storage management software, and the storage management software may manage the physical storage chip on the storage device, for example, whether the corresponding data address in the physical storage chip has been locked.
  • the storage device may be a network attached storage (NAS) device, or may be a storage area network (SAN) device.
  • In the embodiment of the present application, the storage device may be a SAN device, and the storage space of the SAN device may be represented by a logical unit number (LUN), which is a logical disk presented by a storage array; each terminal can run on one or more LUNs of the SAN device.
  • the storage management software of the SAN device can control the read and write operations on the LUN, and can also manage the information of the LUN.
  • In a possible implementation, the first data center 100 and the second data center 110 may be data centers that provide disaster recovery for each other. After one data center fails and the terminals it serves stop working, the other data center continues to work and records the difference information. After the failed data center recovers, the data center that recorded the difference information serves as the disaster recovery data center and synchronizes the difference information to it, so that the information stored in the recovered data center is consistent with the information stored in the disaster recovery data center.
  • FIG. 2 is a schematic structural diagram of a terminal provided by an exemplary embodiment of the present application.
  • The terminal 20 may be a terminal in the data centers shown in FIG. 1 above; or, when the terminal shown in FIG. 1 is a virtual software application, the terminal 20 may be the computer device running that terminal.
  • the terminal 20 can include a processor 21 and a communication interface 24.
  • the processor 21 may include one or more processing units, which may be a central processing unit (CPU) or a network processor (NP).
  • the communication interface 24 can include a local interface for connecting to a storage device or other terminal in the same data center as the terminal, and a network interface for connecting to a storage device or data center in another data center.
  • the terminal 20 may further include a memory 23, and the processor 21 may be connected to the memory 23 and the communication interface 24 via a bus.
  • the memory 23 can be used to store a software program that can be executed by the processor 21.
  • various types of service data or user data can also be stored in the memory 23.
  • the terminal 20 may further include an output device 25 and an input device 27.
  • the output device 25 and the input device 27 are connected to the processor 21.
  • The output device 25 may be a display for displaying information, a loudspeaker device for playing sound, or a printer; the output device 25 may further include an output controller for providing output to the display screen, loudspeaker device, or printer.
  • The input device 27 may be a device such as a mouse, keyboard, electronic stylus, or touch panel used by a user to input information; the input device 27 may also include an input controller for receiving and processing input from devices such as the mouse, keyboard, electronic stylus, or touch panel.
  • FIG. 3 is a schematic structural diagram of a storage device provided by an exemplary embodiment of the present application.
  • the storage device 30 may be any one of the storage devices shown in FIG. 1 described above.
  • the device 30 can include a processor 31 and a communication interface 34.
  • the processor 31 may include one or more processing units, which may be a central processing unit (CPU) or a network processor (NP).
  • the communication interface 34 can include a local interface for connecting to a storage device or other terminal in the same data center as the storage device, and a network interface for connecting to a storage device or data center in another data center.
  • The storage device 30 may also include a memory 33; the processor 31 may be connected to the memory 33 and the communication interface 34 via a bus.
  • the memory 33 can be used to store a software program that can be executed by the processor 31.
  • various types of service data or user data can be stored in the memory 33.
  • FIG. 4 shows a flowchart of a write request processing method provided by an exemplary embodiment of the present application. This method can be used in the storage system shown in FIG. 1. As shown in FIG. 4, the write request processing method may include:
  • Step 401 The first terminal generates a write request.
  • The first terminal may be the terminal 104, the terminal 114, or any other terminal not shown in the storage system of FIG. 1; that is, the first terminal may be a terminal in the first data center or a terminal in the second data center.
  • a write request is generated when the first terminal needs to initiate data writing.
  • This step can be implemented by a processor in the terminal shown in FIG. 2.
  • Step 402 The first terminal sends the write request to the first storage device and the second storage device.
  • The first storage device may be the first storage device 102a in the storage system shown in FIG. 1, and the second storage device may be the second storage device 112a in the storage system shown in FIG. 1.
  • The write request is used to instruct the first storage device and the second storage device to write the same data at the same logical address of their respective storage spaces.
  • the first storage device and the second storage device form a double live volume.
  • After the first terminal generates the write request, it sends the write request to the first storage device and the second storage device respectively.
  • the write request may contain data to be written and a logical address to be written.
  • This step can be implemented by the processor controlling the communication interface in the terminal shown in FIG. 2.
  • Step 403 The storage device receives a write request sent by the first terminal.
  • the storage device in this step may be any one of the first storage device and the second storage device.
  • the first storage device and the second storage device may perform step 403 separately. That is, step 403 may be two parallel steps of the first storage device receiving the write request sent by the first terminal and the second storage device receiving the write request sent by the first terminal.
  • This step can be implemented by the processor controlling the communication interface in the storage device shown in FIG. 3.
  • Step 404 When all or part of the addresses in the logical address corresponding to the write request have been locked or are about to be locked, the storage device sends a reject response to the first terminal.
  • When data is being written at a logical address in the storage device, or when data synchronization is being performed (for example, when the data at the logical address is being synchronized to the same logical address in another storage device), the storage device can lock the logical address; the manner in which the storage device locks the logical address may be referred to as an optimistic lock.
  • The optimistic lock is used to lock the storage space corresponding to a specified logical address; the storage space corresponding to the specified logical address can only be read and written by the terminal holding the optimistic lock, and other terminals are prohibited from writing to the logical address.
  • After receiving the write request, the storage device (FIG. 4 only illustrates this from the perspective of the first storage device; in practice, the second storage device performs the same steps at the same time) may detect whether some or all of the logical addresses corresponding to the write request have been locked or are about to be locked; if so, the storage device rejects the write request and sends a reject response to the first terminal.
  • In a possible implementation, the storage device may further acquire the identifier of the second terminal that triggered locking of the all or part of the addresses, carry the identifier in the reject response, and send it to the first terminal.
  • the second terminal may be a terminal that is in the same data center as the first terminal, or the second terminal may also be a terminal that is in a different data center from the first terminal.
  • This step can be implemented by the processor controlling the communication interface in the storage device shown in FIG. 3.
  • In a possible implementation, step 404 may be implemented as step 404a and step 404b.
  • Step 404a The storage device detects, in the logical address corresponding to the storage space of the storage device, whether the locked address includes all or part of the address of the logical address corresponding to the write request.
  • This step can be implemented by the processor in the storage device shown in FIG. 3.
  • Step 404b If the detection result is that the locked address includes the all or part of the address, the storage device sends a reject response to the first terminal.
  • Before the storage device receives the write request sent by the first terminal, if the storage device has received a target service request sent by the second terminal and has started processing it, the storage device may lock the logical address corresponding to the target service request during the processing of the target service request; that is, the logical address corresponding to the target service request is locked when processing of the target service request starts, and the lock on the logical address corresponding to the target service request is released when processing of the target service request completes.
  • If the storage device receives the write request sent by the first terminal while processing the target service request, and the logical address corresponding to the target service request and the logical address corresponding to the write request include the same address (that is, the all or part of the addresses above), there is a conflict between the logical addresses of the two requests; at this time, the storage device sends a reject response including the identifier of the second terminal to the first terminal.
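The lock-on-start / release-on-completion lifecycle described above maps naturally onto a context manager. The sketch below is illustrative only (single-threaded, with invented names and inclusive `(start, end)` range keys), not the patent's implementation.

```python
from contextlib import contextmanager

class LockTable:
    """Minimal sketch of per-address locking: the logical address range is
    locked when processing of a request starts and released when it completes.
    Single-threaded illustration; range keys are inclusive (start, end) tuples."""

    def __init__(self):
        self.locked = {}  # (start_lba, end_lba) -> terminal_id that triggered the lock

    @contextmanager
    def hold(self, addr_range, terminal_id):
        self.locked[addr_range] = terminal_id   # lock when processing starts
        try:
            yield
        finally:
            del self.locked[addr_range]         # release when processing completes
```

While a range is held, a write request touching any address in it would be answered with a reject response carrying the holding terminal's identifier, as in the step above.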
  • The above logical address conflicts may be classified into three types: the first is that the logical address corresponding to the write request overlaps the logical address corresponding to the target service request; the second is that the logical address corresponding to the write request is completely covered by the logical address corresponding to the target service request; and the third is that the logical address corresponding to the write request crosses the logical address corresponding to the target service request.
  • FIG. 5 is a schematic diagram of logical address conflicts involved in an embodiment of the present application.
  • Overlap means that the logical addresses corresponding to two or more requests are identical. As shown in the overlap diagram 510 in FIG. 5, assume that the LBA range of the logical address 511 corresponding to the write request is 0x1000 to 0x1004 and that the LBA range of the logical address 512 corresponding to the target service request is also 0x1000 to 0x1004; then the logical address 511 corresponding to the write request overlaps the logical address 512 corresponding to the target service request.
  • Full coverage means that the logical address corresponding to one of the two requests is contained in the logical address corresponding to the other request. As shown in the full coverage diagram 520 in FIG. 5, assume that the LBA range of the logical address 522 corresponding to the write request is 0x1000 to 0x1002 and that the LBA range of the logical address 521 corresponding to the target service request is 0x1001 to 0x1004; then the logical address 521 corresponding to the target service request covers the logical address 522 corresponding to the write request.
  • Cross coverage means that some logical addresses belong to the logical addresses corresponding to both service requests, while each request's logical address also contains parts that differ from the other's. As shown in the cross coverage diagram 530 in FIG. 5, assume that the LBA range of the logical address 531 corresponding to the write request is 0x1000 to 0x1004 and that the LBA range of the logical address 532 corresponding to the target service request is 0x1003 to 0x1006; then the logical address 531 corresponding to the write request and the logical address 532 corresponding to the target service request cross, and the LBA range of the intersection is 0x1003 to 0x1004.
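The three conflict types of FIG. 5 can be distinguished mechanically from the two LBA ranges. The sketch below is illustrative only; the function name and the inclusive-range tuples are assumptions, not part of the patent.

```python
# Classify the conflict between the write request's LBA range and the
# target service request's LBA range (both inclusive), per FIG. 5.

def classify_conflict(write_rng, target_rng):
    ws, we = write_rng    # inclusive LBA range of the write request
    ts, te = target_rng   # inclusive LBA range of the target service request
    if we < ts or te < ws:
        return "none"                        # disjoint: no conflict
    if (ws, we) == (ts, te):
        return "overlap"                     # identical ranges
    if ts <= ws and we <= te or ws <= ts and te <= we:
        return "full coverage"               # one range contains the other
    return "cross coverage"                  # partial intersection

print(classify_conflict((0x1000, 0x1004), (0x1000, 0x1004)))  # overlap
print(classify_conflict((0x1000, 0x1004), (0x1003, 0x1006)))  # cross coverage
```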
  • In one case, the target service request may be used to request that data be written to the logical address corresponding to the target service request.
  • In another case, the target service request may be used to request that the data in the logical address corresponding to the target service request in the storage device be synchronized to the other storage device of the first storage device and the second storage device.
  • For example, assume that the storage device performing this step is the first storage device. After the second storage device in the second data center fails and recovers, the second terminal sends a data synchronization request to the first storage device during background synchronization; the data synchronization request is used to request that the difference data written to the first storage device during the failure of the second storage device be synchronized to the second storage device.
  • This step can be implemented by the processor controlling the communication interface in the storage device shown in FIG. 3.
  • When the storage device detects that the locked addresses do not include all or part of the addresses in the logical address corresponding to the write request, the storage device may further check the logical address corresponding to the write request, and steps 404c, 404d, and 404e are performed.
  • Step 404c: The storage device detects whether a target service request sent by the second terminal has been received.
  • In practice, the storage device may receive both the write request sent by the first terminal and the target service request sent by the second terminal before either has started being processed, with the logical addresses of the write request and the target service request in conflict. In this case, the storage device needs to perform step 404c to determine which request to process first.
  • This step can be implemented by a processor in the storage device shown in FIG. 3.
  • Step 404d: If the detection result is that the target service request has been received, the storage device obtains the priorities of the write request and the target service request.
  • The priority of a service request received by a storage device may include at least one of the priority of the service type corresponding to the service request, the priority of the data center where the terminal that sent the service request is located, and the priority of the terminal that sent the service request.
  • In one approach, the write request and the target service request each carry their respective priorities, and the storage device parses the priorities of the write request and the target service request directly from the received requests.
  • Alternatively, the storage device may obtain information such as the service types of the write request and the target service request, the sending terminals, and the data centers where the sending terminals are located, and query the respective priorities of the write request and the target service request according to the obtained information.
  • This step can be implemented by a processor in the storage device shown in FIG. 3.
  • After the storage device completes step 404c, if it detects that no target service request has been received, the storage device may perform the subsequent step 409.
  • Step 404e: When the priority of the target service request is higher than the priority of the write request, the storage device sends a reject response to the first terminal.
  • the reject response includes an identifier of the second terminal.
  • In this case, the method proceeds to step 405.
  • Specifically, the storage device compares the priorities of the write request and the target service request. When the priority of the target service request is higher than the priority of the write request, the storage device preferentially processes the target service request and sends a reject response to the first terminal; when the priority of the write request is higher than the priority of the target service request, the storage device preferentially processes the write request and performs the subsequent step 409.
  • When the priorities each include several kinds of priority, the storage device may compare the priorities of the write request and the target service request one by one in a predetermined order, until it determines which of the write request and the target service request has the higher priority.
  • For example, assume that the priority of a service request includes the priority of the service type corresponding to the service request, the priority of the data center where the terminal that sent the service request is located, and the priority of the terminal that sent the service request. The storage device first compares the priority of the service type of the write request with the priority of the service type of the target service request. If the priority of the service type of the target service request is higher than the priority of the service type of the write request (for example, the target service request is a data synchronization request, and data synchronization has a higher priority than writing), the storage device determines that the priority of the target service request is higher than the priority of the write request; if it is lower, the storage device determines that the priority of the target service request is lower than the priority of the write request. If the service type priorities are the same, the storage device further compares the priority of the data center where the first terminal is located with the priority of the data center where the second terminal is located. If the priority of the data center where the first terminal is located is higher than the priority of the data center where the second terminal is located (for example, the first terminal is in the first data center, the second terminal is in the second data center, and the priority of the first data center is higher than the priority of the second data center), the storage device determines that the priority of the write request is higher than the priority of the target service request; if it is lower, the storage device determines that the priority of the write request is lower than the priority of the target service request. If the data center priorities are also the same, the storage device further compares the priority of the first terminal with the priority of the second terminal: if the priority of the first terminal is higher than the priority of the second terminal, the storage device determines that the priority of the write request is higher than the priority of the target service request; if the priority of the first terminal is lower than the priority of the second terminal, the storage device determines that the priority of the write request is lower than the priority of the target service request.
  • Optionally, different priorities may be set for different terminals in the same data center.
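The predetermined-order comparison described above (service type first, then data center, then terminal) can be sketched as follows. The tuple layout, the larger-is-higher numeric scale, and the tie-breaking rule are assumptions made for this illustration.

```python
# Compare request priorities level by level in a predetermined order:
# service type, then data center, then terminal.

def compare_priority(write_req, target_req):
    """Each request is (service_type_prio, data_center_prio, terminal_prio).
    Returns 'write' if the write request wins, else 'target'."""
    for w, t in zip(write_req, target_req):
        if w != t:                     # the first differing level decides
            return "write" if w > t else "target"
    return "write"                     # full tie: an arbitrary choice here

# A data synchronization request (type priority 2) outranks an ordinary
# write (type priority 1), so the sync request is served first.
winner = compare_priority((1, 1, 1), (2, 0, 0))
# winner == "target"
```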
  • This step can be implemented by the processor controlling the communication interface in the storage device shown in FIG. 3.
  • After sending the reject response, the storage device processes the target service request and, in the process of processing the target service request, locks the logical address corresponding to the target service request in the storage space of the storage device.
  • The step in which the storage device processes the target service request and locks the logical address of the target service request during that processing may be implemented by the processor controlling the communication interface in the storage device shown in FIG. 3.
  • After that, the storage device may further send the second terminal a response indicating that processing of the target service request is complete, to indicate to the second terminal that the target service request has been processed and that the lock on some or all of the addresses in the write request has been released.
  • In addition, the storage device may send the target service request and lock indication information to the corresponding backup storage device, where the lock indication information is used to instruct the backup storage device to lock the corresponding logical address in the process of processing the target service request.
  • After the storage device finishes processing the target service request and releases the lock on the logical address corresponding to the target service request, the storage device sends the second terminal a response indicating that processing of the target service request is complete. Returning this response to the second terminal is intended to enable the sender of the target service request to determine that its own request has been executed, so as to ensure that the processing of the target service request satisfies the distributed strong consistency replication protocol. On receiving the response, the second terminal may consider that the logical address corresponding to the target service request has been unlocked.
  • The distributed strong consistency replication protocol is a protocol introduced to ensure the consistency of each replica of the data in an active-active data center, and the active-active data center involved in the embodiments of the present application supports this protocol. Specifically, the protocol requires the active-active data center to ensure that a specified data update has succeeded in every replica when the specified data update operation is reported as complete. In addition, distributed strong consistency replication requires that, when a replica of the specified data fails, the data in the failed replica be restored to consistency with the data in each of the other replicas of the specified data before the failed replica is returned to the business process.
  • Step 405: The first terminal receives a reject response returned by the storage device.
  • In this embodiment, when the first terminal receives a reject response sent by either the first storage device or the second storage device, the first terminal may consider that both the first storage device and the second storage device have issued a reject response.
  • When receiving the reject response returned by the storage device, the first terminal may obtain the identifier of the second terminal carried in the reject response.
  • This step can be implemented by the processor controlling the communication interface in the terminal shown in FIG. 2.
  • Step 406: The first terminal forwards the write request to the second terminal.
  • the first terminal may obtain the identifier of the second terminal carried in the reject response, and forward the write request to the second terminal according to the identifier of the second terminal.
  • This step can be implemented by the processor controlling the communication interface in the terminal shown in FIG. 2.
  • Step 407: The second terminal receives the write request forwarded by the first terminal.
  • This step can be implemented by the processor controlling the communication interface in the terminal shown in FIG. 2.
  • Step 408: After the lock on all or part of the address is released, the second terminal sends the write request to the first storage device and the second storage device respectively.
  • When the storage system involved in the embodiment of the present application is running, if the second terminal receives the write request forwarded by the first terminal and also receives write requests forwarded by other terminals, the second terminal may determine which write request to send first according to the priorities of the terminals that sent the write requests.
  • This step can be implemented by the processor controlling the communication interface in the terminal shown in FIG. 2.
  • The storage device receives the write request. After receiving the write request, the storage device continues to check whether some or all of the addresses in the logical address of the write request are locked or about to be locked. If some or all of those addresses are locked or about to be locked, a reject response is returned to the second terminal; otherwise, the storage device performs step 409. The action of receiving the write request may be implemented by the processor controlling the communication interface in the storage control device of the first storage device or the second storage device.
  • Step 409: The storage device writes the data indicated by the write request to the logical address corresponding to the write request.
  • Since step 409 is performed by the two storage devices constituting the active-active volume, in the embodiment shown in the present application step 409 may be regarded as the first storage device and the second storage device writing the same data to the same logical address in their respective storage spaces. Step 409 is deemed complete only after both the first storage device and the second storage device have stored the same data.
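The dual-write semantics of step 409 can be sketched as follows: the write is deemed complete only when both storage devices of the active-active volume have stored the same data at the same logical address. The in-memory dictionaries standing in for the two storage arrays are a simplification made for this sketch.

```python
# Model step 409: both devices of the active-active volume must store the
# same data at the same logical address before the step counts as done.

def dual_write(devices, lba, data):
    acks = 0
    for dev in devices:          # the first and the second storage device
        dev[lba] = data          # same data, same logical address
        acks += 1
    return acks == len(devices)  # complete only after both acknowledge

first, second = {}, {}
done = dual_write([first, second], 0x1000, b"payload")
# done is True only once both dictionaries hold b"payload" at 0x1000.
```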
  • This step can be implemented by a processor in the storage device shown in FIG. 3.
  • Step 410: The storage device locks the logical address corresponding to the write request in the storage space of the storage device.
  • This step can be implemented by a processor in the storage device shown in FIG. 3.
  • Step 411: The storage device sends the write request and lock indication information to the corresponding backup storage device, where the lock indication information is used to instruct the backup storage device to lock, in the storage space of the backup storage device, the logical address corresponding to the write request in the process of processing the write request.
  • In this embodiment, when the storage device writes data according to the write request, the storage device sends the write request and the lock indication information to the corresponding backup storage device.
  • Optionally, the storage device may omit the lock indication information; in that case, when the backup storage device receives and processes a service request, it automatically locks the logical address corresponding to that service request. When the backup storage device completes the processing of the service request, the backup storage device returns a success response to the terminal that sent the service request.
  • This step can be implemented by the processor controlling the communication interface in the storage device shown in FIG. 3.
  • FIG. 6 illustrates a write request processing procedure according to an embodiment of the present application. As shown in FIG. 6, the data writing case can be divided into the following steps.
  • the first terminal in the first data center generates a write request 1 upon receiving an instruction to write data.
  • the first terminal acquires the logical address corresponding to the write request 1 according to the storage system view, and sends the write request 1 to the first storage device and the second storage device for processing.
  • After the first storage device receives write request 1, it locks the logical address corresponding to write request 1 and writes the data according to the instruction of write request 1; after the second storage device receives write request 1, it also locks the logical address corresponding to write request 1 and writes the data according to the instruction of write request 1.
  • The third storage device is the backup device of the first storage device, and the fourth storage device is the backup device of the second storage device. The third storage device locks the same logical address as the first storage device and writes the data of the same write request 1; the fourth storage device likewise performs the same operation as the second storage device.
  • Assume that the second terminal generates another write request 2, and that the logical address corresponding to write request 2 is partially or completely the same as the logical address corresponding to write request 1 generated by the first terminal.
  • the second terminal sends the generated write request 2 to the first storage device and the second storage device, respectively.
  • After the first storage device and the second storage device receive write request 2, both storage devices detect whether the locked logical address is the same as the logical address corresponding to write request 2 sent by the second terminal. Since the logical address of write request 2 sent by the second terminal is partially or completely the same as that of write request 1 sent by the first terminal, both the first storage device and the second storage device generate a reject response and feed it back to the second terminal. At the same time, the two storage devices add the identifier of the first terminal to the generated reject response.
  • After receiving the reject response, the second terminal obtains the identifier of the first terminal from the reject response and forwards write request 2 to the first terminal.
  • After receiving write request 2 forwarded by the second terminal, the first terminal places write request 2 into its task queue. After the first terminal receives the responses indicating that the first storage device and the second storage device have finished executing write request 1, the first terminal sends write request 2 to the first storage device and the second storage device respectively.
  • Alternatively, a background data synchronization request may be used as the request sent by the terminal. In that case the first storage device and the second storage device still lock the same logical address; the only difference is that the logical address locked in one of the storage devices is used to read data, while the logical address locked in the other storage device is used to write data.
  • FIG. 7 illustrates another write request processing procedure according to an embodiment of the present application. As shown in FIG. 7, the process is as follows:
  • the first terminal initiates background data synchronization.
  • The first terminal instructs the first storage device to read the data to be synchronized at the granularity of the data cursor, starting from the position in the logical address space of the active-active volume with which the data cursor is currently aligned.
  • The first storage device locks the 1 MB logical address range aligned from the current data cursor position and reads the data within that 1 MB, while the third storage device locks the logical address corresponding to the same 1 MB.
  • the first storage device sends the read 1MB data to the first terminal.
  • The first terminal sends write request 1 to the second storage device in the second data center; write request 1 is used to request that the 1 MB of data be written to the same logical address in the second storage device. After receiving the 1 MB of data, the second storage device locks the logical address corresponding to the 1 MB of data and writes the 1 MB of data to the corresponding logical address in the second storage device, while the fourth storage device locks the logical address corresponding to the same 1 MB and writes the 1 MB of data to the corresponding logical address in the fourth storage device.
  • the second terminal generates a write request 2.
  • the second terminal sends the write request 2 to the first storage device and the second storage device respectively.
  • The first storage device and the second storage device each detect whether the logical address corresponding to write request 2 is the same as the currently locked 1 MB logical address. If the addresses are completely different, write request 2 is executed directly.
  • Otherwise, the first storage device and the second storage device obtain the identifier of the first terminal that triggered the locking of the logical address, and return to the second terminal a reject response carrying the identifier of the first terminal.
  • After receiving the reject response, the second terminal forwards write request 2 to the first terminal.
  • the first terminal sends the write request 2 forwarded by the second terminal to the first storage device and the second storage device, respectively.
  • After receiving the forwarded write request 2, the first storage device and the second storage device process write request 2.
  • It should be noted that, in background data synchronization, the present application uses a data cursor to track the data that needs to be synchronized.
  • That is, the data to be synchronized may be divided into multiple data segments whose length equals that of the cursor, and the logical addresses corresponding to each cursor-length segment are locked and synchronized in turn. After the storage devices on the read side and the write side complete the synchronization of one data segment, the lock on the logical address corresponding to that data segment is released, the logical addresses in the read-side and write-side storage devices corresponding to the next data segment are locked, and the synchronization of that data segment is performed.
  • The data segments that need to be synchronized may be continuous or discontinuous in the active-active volume space.
  • For example, assume that the data cursor is 1 MB long, that the data to be synchronized totals 3 MB, that the cursor currently takes the start of the active-active volume space as its starting point, and that the data to be synchronized is located at the active-active volume space positions of the 3rd MB, the 5th MB, and the 7th MB respectively. The data cursor checks the 1st MB and the 2nd MB in turn and, finding no data that needs to be synchronized there, moves to the position of the 3rd MB. Since the data at that position needs to be synchronized, the data cursor instructs the first storage device and the second storage device to lock the corresponding logical address respectively and to perform the data synchronization work at that position. After the data at the 3rd MB position has been synchronized, the data cursor continues checking the 4th MB, the 5th MB, and so on in turn, until it determines that the preset 3 MB of data to be synchronized has all been synchronized; the synchronization work then ends, and a synchronization success response is returned to the terminal that initiated the synchronization work.
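The cursor-driven background synchronization in the example above can be sketched as follows. The set of dirty segment offsets and the dictionary-backed devices are simplifications; the segment size, names, and return value are assumptions made for illustration.

```python
# A 1 MB cursor walks the volume space, locks only the segments that need
# synchronizing (locking is modeled as implicit here), copies the
# difference data, and skips clean segments.

SEGMENT = 1  # cursor length, in MB

def background_sync(dirty, source, target, total_mb):
    synced = []
    for mb in range(0, total_mb, SEGMENT):
        if mb not in dirty:          # nothing to synchronize at this offset
            continue
        target[mb] = source.get(mb)  # copy the segment's difference data
        synced.append(mb)
    return synced

source = {3: b"a", 5: b"b", 7: b"c"}   # dirty data at the 3rd, 5th, 7th MB
target = {}
done = background_sync({3, 5, 7}, source, target, total_mb=8)
# done lists the segment offsets that were synchronized: [3, 5, 7]
```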
  • In summary, in the write request processing method shown in the embodiments of the present application, the first terminal obtains a data write request and sends the write request to the first storage device and the second storage device respectively according to the storage system view. The first storage device and the second storage device each detect whether all or part of the addresses in the logical address corresponding to the write request have been locked or are about to be locked. When the detection result is negative, each locks the logical address corresponding to the write request and executes the write request, and, in order to back up the data in the first storage device and the second storage device, the third storage device is also instructed to perform exactly the same operation as the first storage device. When the detection result is positive, the write request is forwarded to the second terminal, and after detecting that the logical address corresponding to the write request has been unlocked, the second terminal sends the write request to the first storage device and the second storage device respectively to instruct both storage devices to execute the write request. Since a terminal in either data center does not need to request an address lock across data centers before each write request is sent, when a terminal in the backup data center sends a write request there is no delay between sending a lock request and receiving a lock notification, which greatly improves the processing performance of the backup data center while avoiding conflicts between the address of the write request and the addresses of other write requests or data synchronization requests.
  • In addition, when a write conflict occurs, this embodiment detects the identifier of the terminal that locked the logical address and forwards the current write request to the terminal that locked the logical address, so that requests producing the write conflict are converted into serial processing. On the basis of reducing the delay of the distributed active-active storage system, this ensures that write requests with conflicting logical addresses are executed without error.
  • FIG. 8 is a structural block diagram of a write request processing apparatus according to an embodiment of the present application. The apparatus may be implemented as part or all of a terminal by a hardware circuit or by a combination of software and hardware. The write request processing apparatus may include a write request transmitting unit 801, a reject response receiving unit 802, and a forwarding unit 803.
  • the write request transmitting unit 801 is configured to perform the same or similar steps as the above step 402.
  • the reject response receiving unit 802 is configured to perform the same or similar steps as those in the above step 405.
  • the forwarding unit 803 is configured to perform the same or similar steps as the above step 406.
  • FIG. 9 is a structural block diagram of another write request processing apparatus according to an embodiment of the present application.
  • the write request processing apparatus may be implemented as part or all of a storage device by a combination of hardware circuits or software and hardware.
  • the write request processing means may include a write request receiving unit 901, a reject response transmitting unit 902, a process lock unit 903, and a lock instruction transmitting unit 904.
  • The write request receiving unit 901 is configured to perform the same or similar steps as step 403 above, and to perform the step in which the storage device receives a write request.
  • the reject response sending unit 902 is configured to perform the same or similar steps as the above step 404 (including steps 404a, 404b, 404c, 404d, and 404e).
  • The processing locking unit 903 is configured to perform the same or similar steps as the step, in step 404, of processing the target service request and locking the logical address of the target service request in the process of processing the target service request, or the same or similar steps as steps 409 and 410 above.
  • the lock indication transmitting unit 904 is configured to perform the same or similar steps as the above step 411.
  • It should be noted that the division into the functional modules described above is merely an example for the write request processing apparatus provided by the foregoing embodiments when storing data. In actual applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
  • In addition, the write request processing apparatus provided by the foregoing embodiments and the write request processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not described herein again.
  • All or part of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing related hardware, and the program may be stored in a computer-readable storage medium. The computer-readable storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.

Abstract

A write request processing method, apparatus, and device, belonging to the field of data storage. The method includes: a first terminal sends a write request to a first storage device and a second storage device respectively (S402); when all or part of the addresses in the logical address corresponding to the write request have been locked or are about to be locked, the storage device sends a reject response to the first terminal (S404b), the reject response being used to indicate that all or part of the addresses in the logical address corresponding to the write request have been locked or are about to be locked; the first terminal forwards the write request to a second terminal (S406-S407), so that after the lock on all or part of the addresses is released, the second terminal sends the write request to the first storage device and the second storage device respectively (S408). This avoids conflicts between the address of the write request and the addresses of other write requests or data synchronization requests while greatly improving the processing performance of the backup data center.

Description

Write request processing method, apparatus, and device
This application claims priority to Chinese Patent Application No. 201611155528.1, filed with the Chinese Patent Office on December 14, 2016 and entitled "Write request processing method, apparatus, and device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of data storage, and in particular, to a write request processing method, apparatus, and device.
Background
With the continuous development of information technology, information systems based on data centers carry more and more critical services in various industries; once a data center crashes or malfunctions, important data will be lost. To improve data security, current information systems usually set up a primary data center and a backup data center to store and back up data; meanwhile, to improve hardware utilization, the two data centers may process service requests, such as write requests, read requests, and data synchronization requests, at the same time. Primary and backup data centers that process service requests simultaneously in this way are usually called active-active data centers.
When processing service requests, a data center needs to handle write requests in a specific way to avoid address conflicts between write requests, or between a write request and a data synchronization request. In the related art, an active-active data center processes write requests as follows:
The active-active data center provides service processing based on the same set of logical addresses, and a lock server is deployed in the primary data center. When a terminal in the primary data center or the backup data center initiates a write to a logical address, it first sends a lock request to the lock server to request that the logical address be locked. If the logical address is currently unlocked, the lock server locks the logical address and returns a lock notification to the terminal. After receiving the lock notification, the terminal sends a write request to the storage devices of the two data centers respectively, and the storage devices of the two data centers each write the same data to the requested logical address.
In practical applications, to achieve a good disaster recovery effect, the primary data center and the backup data center are usually geographically far apart. In the above solution, before each data write, a terminal in the backup data center needs to first send a lock request to the lock server and receive the returned lock notification, and both the lock request and the lock notification are transmitted across data centers. As a result, the delay between sending the lock request and receiving the lock notification for a terminal in the backup data center is high, which significantly affects the processing performance of the backup data center.
Summary
To improve the processing performance of the backup data center in an active-active data center, embodiments of this application provide a write request processing method, apparatus, and device.
According to a first aspect, a write request processing method is provided. The method is used in a storage system including a first data center and a second data center, where the first data center includes a first storage device, the second data center includes a second storage device, and the first data center and/or the second data center further include at least two terminals. The method is performed by a first terminal among the at least two terminals, and includes:
The first terminal sends a write request to the first storage device and the second storage device respectively, where the write request is used to instruct the first storage device and the second storage device to write the same data to the same logical address in their respective storage spaces; the first terminal receives a reject response returned by a storage device, where the storage device is the first storage device or the second storage device, and the reject response is used to indicate that all or part of the addresses in the logical address corresponding to the write request have been locked or are about to be locked; the first terminal forwards the write request to a second terminal that triggered the locking of all or part of the addresses, so that the second terminal sends the write request to the first storage device and the second storage device respectively after the lock on all or part of the addresses is released.
In the method provided by this solution, when the first terminal sends the write request to the first storage device and the second storage device at the same time, if the second terminal has triggered or is about to trigger locking, in the first storage device and the second storage device, of the logical address corresponding to the write request, the first terminal forwards the write request to the second terminal, and the second terminal sends the write request to the first storage device and the second storage device for processing after the logical address corresponding to the write request is unlocked. A terminal in either data center does not need to request an address lock across data centers before each write request is sent, so when a terminal in the backup data center sends a write request there is no delay between sending a lock request and receiving a lock notification. This avoids conflicts between the address of the write request and the addresses of other write requests or data synchronization requests while greatly improving the processing performance of the backup data center.
可选的,所述方法包括:第一终端向第二终端转发该写入请求,具体包括:第一终端获取该拒绝响应中携带的第二终端的标识,第一终端根据第二终端的标识,向第二终端转发该写入请求。
第二方面,提供了一种写入请求处理方法,该方法用于包括第一数据中心和第二数据中心的存储系统中,第一数据中心包含第一存储设备,第二数据中心包含第二存储设备,第一数据中心和/或第二数据中心还包含至少两个终端,该方法由第一存储设备和第二存储设备中的任一存储设备执行,该方法包括:
存储设备接收该至少两个终端中的第一终端发送的写入请求;当该写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定时,存储设备向第一终端发送拒绝响应,使得第一终端向第二终端转发该写入请求,第二终端是该至少两个终端中触发锁定该全部或部分地址的终端;存储设备接收第二终端在该全部或部分地址的锁定被解除后发送的该写入请求。
可选的,当该写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定时,存储设备向第一终端发送拒绝响应,具体包括:存储设备检测存储设备的存储空间对应的逻辑地址中,已被锁定的地址是否包含该全部或部分地址;若检测结果为该已被锁定的地址包含该全部或部分地址,则存储设备向第一终端发送所述拒绝响应。
该方案通过检测存储设备的存储空间对应的逻辑地址中,已被锁定的地址是否包含该全部或部分地址,明确了写入请求和当前已锁定的地址产生冲突的条件,使得存储设备明确地确定产生冲突的写入请求,并对该写入请求生成拒绝响应。
可选的,当该写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定时,存储设备向第一终端发送拒绝响应,具体包括:若检测结果为已被锁定的地址不包含所述全部或部分地址,则存储设备检测是否接收到第二终端发送的目标业务请求,目标业务请求对应的逻辑地址与该写入请求对应的逻辑地址中包含相同地址;若检测结果为接收到该目标业务请求,则存储设备获取该写入请求和该目标业务请求的优先级;当该目标业务请求的优先级高于该写入请求的优先级时,存储设备向第一终端发送所述拒绝响应。
该方案中,存储设备通过检测目标业务请求对应的逻辑地址与该写入请求对应的逻辑地址中是否包含相同地址,并在存在相同地址的情况下,通过比较两个请求的优先级来确定优先执行哪一个请求,使得分布式双活存储系统中的备份数据中心在减小时延的基础上,能够根据不同的优先级确定方式灵活执行优先级相对较高的业务请求,提高了系统优先响应较重要的任务的能力。
可选的,该目标业务请求用于请求在目标业务请求对应的逻辑地址中写入数据,且目标业务请求对应的逻辑地址包含该全部或部分地址;或者,该目标业务请求用于请求将目标业务请求对应的逻辑地址中的数据同步至第一存储设备和第二存储设备中的另一存储设备,且目标业务请求对应的逻辑地址包含全部或部分地址。
可选的,当目标业务请求的优先级高于写入请求的优先级时,存储设备处理目标业务请求,并在处理该目标业务请求的过程中,锁定存储设备的存储空间中对应该目标业务请求的逻辑地址。
该方案中,存储设备在处理业务请求时锁定了该业务请求对应的逻辑地址,使得该业务请求能够不被其他操作干扰而正确地执行。
可选的,存储设备向对应的备份存储设备发送该目标业务请求以及锁定指示信息,该锁定指示信息用于指示该备份存储设备在处理该目标业务请求的过程中,锁定该备份存储设备的存储空间中对应该目标业务请求的逻辑地址。
该方案中,存储设备通过向对应的备份存储设备发送该目标业务请求以及锁定指示信息,使得存储设备在发生故障时,备份存储设备能够准确的继承对逻辑地址的锁定。
可选的,该存储设备向第一终端发送拒绝响应,包括:存储设备向第一终端返回包含有第二终端的标识的该拒绝响应。
第三方面,提供了一种终端,该终端包括:处理器和通信接口,所述通信接口被配置为由所述处理器控制;该处理器用于实现上述第一方面及第一方面的可选方案所提供的写入请求处理方法。
第四方面,提供了一种存储设备,该设备包括:处理器和通信接口,该通信接口被配置为由该处理器控制;该设备中的处理器用于实现上述第二方面及第二方面的任意一种可选方案所提供的写入请求处理方法。
第五方面,提供了一种写入请求处理装置,该装置包括至少一个单元,该至少一个单元用于实现上述第一方面及第一方面的可选方案所提供的写入请求处理方法;或者,该至少一个单元用于实现上述第二方面或第二方面的任意一种可选方案所提供的写入请求处理方法。
第六方面,提供了一种计算机可读存储介质,该计算机可读存储介质存储有用于实现上述第一方面及第一方面的可选方案所提供的写入请求处理方法的可执行程序,或者,该计算机可读存储介质存储有用于实现上述第二方面或第二方面的任意一种可选方案所提供的写入请求处理方法的可执行程序。
附图说明
图1是本申请所涉及的存储系统的架构图;
图2是本申请一示例性实施例提供的存储控制设备的结构示意图;
图3是本申请一示例性实施例提供的存储控制设备的结构示意图;
图4是本申请一示例性实施例提供的写入请求处理方法的流程图;
图5是图4所示实施例涉及的逻辑地址冲突的示意图;
图6是图4所示实施例涉及的一种写入请求处理示意图;
图7是图4所示实施例涉及的另一种写入请求处理示意图;
图8是本申请一示例性实施例提供的一种写入请求处理装置的结构方框图;
图9是本申请一示例性实施例提供的一种写入请求处理装置的结构方框图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
图1是本申请实施例所涉及的存储系统的架构图。该存储系统包括第一数据中心100和第二数据中心110,该第一数据中心100和第二数据中心110组成双活数据中心。
可选的,在本申请的实施例中的第一数据中心100和第二数据中心110可以根据数据中心所在的实际地理区域进行区别,例如北京数据中心和上海数据中心。第一数据中心100通过网络接口106和第二数据中心110中的网络接口116进行通信,该网络接口是有线网络的线缆插口,该线缆可以是光纤、双绞线或同轴电缆。
可选的,本申请中的第一数据中心100和第二数据中心110在具体实现时,都可以包括物理服务器,该物理服务器可以用于为第一数据中心100或第二数据中心110提供存储、计算和网络资源。本申请的实施例主要通过使用该物理服务器中的存储功能来实现。对于物理服务器,可以按照上述功能进行分类。比如,用于提供计算和网络资源的物理服务器可以称之为计算节点,在一个数据中心中,若干个计算节点可以组成计算节点集群。用于提供存储资源的物理服务器可以称之为存储设备,若干个存储设备可以组成存储节点集群,例如图1中的第一数据中心100包含有第一存储设备102a和第三存储设备102b,第二数据中心包含有第二存储设备112a和第四存储设备112b。其中,为了提高被存储的数据的安全性,可选定其中一个存储设备作为主存储设备,并且设置另外一个或若干个存储设备与该主存储设备保存同样的数据,即作为该主存储设备的副本,作为备份的副本可称为备份存储设备。例如图1中的第三存储设备102b是第一存储设备102a的副本,第四存储设备112b是第二存储设备112a的副本。
第一数据中心100和第二数据中心110中还至少包括两个终端。本存储系统具体设置上述两个终端时,可以将该至少两个终端均设置在第一数据中心100中;也可以将该至少两个终端均设置在第二数据中心110中;还可以在该第一数据中心100和该第二数据中心110中各设置至少一个终端。
如图1所示,第一数据中心100中包含终端104在内的至少一个终端,第二数据中心110中包含第二终端114在内的至少一个终端。上述终端104和终端114都可以向第一数据中心100或第二数据中心110中的存储设备发起包括写入请求、读取请求或者数据同步请求等业务请求。具体的,终端104能够通过网络接口106与第二数据中心110进行跨数据中心通信,同时也可以通过第一数据中心100内部的数据线缆与第一数据中心100中的存储设备(比如第一存储设备102a)进行数据中心内通信。同理,终端114能够通过网络接口116与第一数据中心100进行跨数据中心通信,同时也能通过第二数据中心110内部的数据线缆与第二数据中心110中的存储设备(比如第二存储设备112a)进行数据中心内通信。
需要说明的是,本存储系统中的终端104和终端114以及其它运行在两个数据中心中的终端可以是实际的硬件设备,比如通用计算机设备,也可以是虚拟的软件应用,比如运行在计算机设备上的客户端,该客户端可以是虚拟机、虚拟机软件或者数据库软件等等。
其中,在图1所示的存储系统中,第一存储设备102a和第二存储设备112a可以被配置为一个跨数据中心的虚拟存储单元,该存储单元称为双活卷。对于使用该存储系统的用户而言,其可将需要存储的数据写入该双活卷中,而无需关注该数据具体存储在哪一个数据中心的物理存储设备中。对于该存储系统而言,当有数据写入该双活卷时,第一存储设备102a和第二存储设备112a按照相同的逻辑地址分别在本地写入相同的数据。
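双活卷向两个存储设备的相同逻辑地址写入相同数据的过程,可以用如下Python代码示意(最简化示例,类名与方法名均为示例性假设,并非本申请或任何产品的实际接口):

```python
# 示意:双活卷对外提供单一写入口,内部将同一写入请求按相同逻辑地址
# 下发到两个存储设备。StorageDevice、DualActiveVolume 均为示例性假设。

class StorageDevice:
    """以字典模拟存储设备的存储空间:逻辑地址 -> 数据。"""
    def __init__(self, name):
        self.name = name
        self.space = {}

    def write(self, lba, data):
        self.space[lba] = data

class DualActiveVolume:
    """双活卷:用户写入一次,两个存储设备在相同逻辑地址各自写入相同数据。"""
    def __init__(self, dev_a, dev_b):
        self.devices = (dev_a, dev_b)

    def write(self, lba, data):
        for dev in self.devices:
            dev.write(lba, data)  # 两个存储设备使用相同的逻辑地址

vol = DualActiveVolume(StorageDevice("第一存储设备"), StorageDevice("第二存储设备"))
vol.write(0x1000, b"data")
```

对用户而言只存在一次写入,数据落在哪个数据中心的物理设备上对其透明,这正是双活卷的意义所在。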
其中,图1中的各个存储设备都可包含存储管理软件,存储管理软件可以对该存储设备上的物理存储芯片进行管理,比如判断物理存储芯片中相应的数据地址是否已被锁定。上述存储设备可以是网络连接式存储(Network Attached Storage,NAS)设备,还可以是存储网络(Storage Area Network,SAN)设备,例如,该存储设备可以是SAN设备,该SAN设备的存储空间具体可以由逻辑单元(Logical Unit Number,LUN)表示,该LUN是以存储阵列形式显示的逻辑磁盘,每台终端可以运行在该SAN设备的一个或者多个LUN上。该SAN设备的存储管理软件可以控制对该LUN的读写操作,也可以对该LUN的信息进行管理。
另外,在本申请的实施例的一种实现方式中,当该存储系统实现为双活数据存储系统时,第一数据中心100和第二数据中心110可以互为容灾数据中心,当其中一个数据中心发生故障导致其服务的终端停止工作后,另一个数据中心继续工作并记录差异信息。在发生故障的数据中心恢复工作后,记录差异信息的数据中心作为容灾数据中心对其进行差量信息同步,使恢复工作后的数据中心中存储的信息和容灾数据中心中存储的信息一致。
请参考图2,其示出了本申请的一个示例性实施例提供的终端的结构示意图。该终端20可以是上述图1所示的数据中心中的终端,或者,当上述图1所示终端是虚拟的软件应用时,该终端20也可以是运行上述图1所示的终端的计算机设备。
该终端20可以包括:处理器21和通信接口24。
处理器21可以包括一个或者一个以上处理单元,该处理单元可以是中央处理单元(英文:central processing unit,CPU)或者网络处理器(英文:network processor,NP)等。
通信接口24可以包括本地接口以及网络接口,其中,本地接口用于连接与该终端处于同一数据中心中的存储设备或者其它终端,网络接口用于连接其他数据中心中的存储设备或者数据中心。
可选地,该终端20还可以包括存储器23,处理器21可以通过总线与存储器23和通信接口24相连。存储器23可用于存储软件程序,该软件程序可以由处理器21执行。此外,该存储器23中还可以存储各类业务数据或者用户数据。
可选地,该终端20还可以包括输出设备25以及输入设备27。输出设备25和输入设备27与处理器21相连。输出设备25可以是用于显示信息的显示器、播放声音的功放设备或者打印机等,输出设备25还可以包括输出控制器,用以提供输出到显示屏、功放设备或者打印机。输入设备27可以是用于用户输入信息的诸如鼠标、键盘、电子触控笔或者触控面板之类的设备,输入设备27还可以包括输入控制器,用于接收和处理来自鼠标、键盘、电子触控笔或者触控面板等设备的输入。
请参考图3,其示出了本申请一个示例性实施例提供的存储设备的结构示意图。该存储设备30可以是上述图1所示的各个存储设备中的任意一个设备。
该设备30可以包括:处理器31和通信接口34。
处理器31可以包括一个或者一个以上处理单元,该处理单元可以是中央处理单元(英文:central processing unit,CPU)或者网络处理器(英文:network processor,NP)等。
通信接口34可以包括本地接口以及网络接口,其中,本地接口用于连接与该存储设备处于同一数据中心中的存储设备或者其它终端,网络接口用于连接其他数据中心中的存储设备或者数据中心。
该存储设备30还可以包括存储器33,处理器31可以通过总线与存储器33和通信接口34相连。存储器33可用于存储软件程序,该软件程序可以由处理器31执行。此外,该存储器33中还可以存储各类业务数据或者用户数据。
请参考图4,其示出了本申请一个示例性实施例提供的写入请求处理方法的流程图。该方法可以用于图1所示的存储系统中。如图4所示,该数据存储方法可以包括:
步骤401,第一终端生成写入请求。
其中,该第一终端可以是图1所示的存储系统中的终端104、终端114或者其它未示出的任一终端,即该第一终端可以是第一数据中心的终端,也可以是第二数据中心中的终端。在本申请实施例中,第一终端在需要发起数据写入时,首先生成一个写入请求。
该步骤可以由图2所示的终端中的处理器来实现。
步骤402,第一终端根据数据存储系统视图,向第一存储设备和第二存储设备分别发送写入请求。
其中,该第一存储设备可以是图1所示的存储系统中的第一存储设备102a,第二存储设备可以是第二数据中心中的第二存储设备112a,该写入请求用于指示第一存储设备和第二存储设备在各自的存储空间的相同逻辑地址中写入相同数据。
在本申请实施例中,第一存储设备和第二存储设备组成一个双活卷。该第一终端生成写入请求后,其会向第一存储设备和第二存储设备分别发送该写入请求。该写入请求中可以包含待写入的数据以及待写入的逻辑地址。
该步骤可以由图2所示的终端中的处理器控制通信接口来实现。
步骤403,存储设备接收第一终端发送的写入请求。
该步骤中的存储设备可以是第一存储设备和第二存储设备中的任意一个存储设备。在本申请的实施例实现的过程中,第一存储设备和第二存储设备可以分别执行步骤403。即步骤403可以是第一存储设备接收该第一终端发送的写入请求以及第二存储设备接收该第一终端发送的写入请求的两个并列步骤。
该步骤可以由图3所示的存储设备中的处理器控制通信接口来实现。
步骤404,当该写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定时,存储设备向该第一终端发送拒绝响应。
在本申请实施例中,当存储设备中某一个逻辑地址正在被写入数据,或者,正在进行数据同步(比如,将该逻辑地址中的数据同步至其它存储设备中同样的逻辑地址中)时,存储设备可以将该逻辑地址锁定,其中,存储设备锁定该逻辑地址的方式可以称为乐观锁。该乐观锁用于锁定指定逻辑地址对应的存储空间,并使得该指定逻辑地址对应的存储空间仅能由拥有该乐观锁的终端读写,并禁止其它终端对该逻辑地址的写操作。
存储设备(图4中仅示出了第一存储设备角度的图示说明,在实际应用中,第二存储设备同时也执行相同的步骤)接收到写入请求后,可以检测该写入请求对应的逻辑地址中的部分或全部地址是否已经被锁定或者即将被锁定,若是,则存储设备拒绝该写入请求,并向第一终端发送拒绝响应。
可选的,在本申请实施例中,当该写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定时,存储设备还可以获取触发锁定该全部或部分地址的第二终端的标识,并将该第二终端的标识携带在拒绝响应中发送给第一终端。
其中,该第二终端可以是与第一终端处于同一数据中心的终端,或者,该第二终端也可以是与第一终端处于不同的数据中心的终端。
该步骤可以由图3所示的存储设备中的处理器控制通信接口来实现。
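存储设备检测锁冲突并返回携带终端标识的拒绝响应的过程,可以用如下Python代码示意(仅为说明性示例,锁表的数据结构与函数名均为示例性假设,并非本申请的实际实现):

```python
# 示意:存储设备收到写入请求后,检测请求逻辑地址的全部或部分是否已被锁定;
# 若存在交集,则返回携带触发锁定的第二终端标识的拒绝响应。

def handle_write(locks, req_start, req_len):
    """locks: {(起始LBA, 长度): 触发锁定的终端标识}。
    返回 ("accepted", None) 或 ("rejected", 锁定方终端标识)。"""
    req_end = req_start + req_len
    for (lk_start, lk_len), owner in locks.items():
        lk_end = lk_start + lk_len
        if req_start < lk_end and lk_start < req_end:  # 地址段存在交集
            return ("rejected", owner)
    return ("accepted", None)

locks = {(0x1000, 4): "终端2"}
print(handle_write(locks, 0x1002, 4))  # 与已锁定段交叉,被拒绝并携带终端标识
print(handle_write(locks, 0x2000, 4))  # 与已锁定段无交集,予以接受
```

半开区间交集判断 `req_start < lk_end and lk_start < req_end` 同时覆盖了重叠、完全覆盖和交叉覆盖三种冲突情形。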
如图4所示,在本申请实施例的一种可能实现的方式中,存储设备在检测写入请求对应的逻辑地址中的全部或部分地址是否已被锁定时,可以执行步骤404a和步骤404b。
步骤404a,存储设备检测该存储设备的存储空间对应的逻辑地址中,已被锁定的地址是否包含该写入请求对应的逻辑地址的全部或部分地址。
该步骤可以由图3所示的存储设备中的处理器来实现。
步骤404b,若检测结果为该已被锁定的地址包含该全部或部分地址,则存储设备向第一终端发送拒绝响应。
在本申请实施例中,存储设备在接收到该第一终端发送的写入请求之前,若已经接收到第二终端发送的目标业务请求并开始处理该目标业务请求,则存储设备可以在处理该目标业务请求的过程中将该目标业务请求对应的逻辑地址锁定,即在开始处理该目标业务请求时锁定该目标业务请求对应的逻辑地址,并在处理完成该目标业务请求时,将对该目标业务请求对应的逻辑地址的锁定解除。存储设备在处理该目标业务请求的过程中,若接收到第一终端发送的写入请求,且目标业务请求对应的逻辑地址与该写入请求对应的逻辑地址中包含相同地址(即上述全部或者部分地址),即意味着这两个请求的逻辑地址之间存在冲突,此时,存储设备向第一终端发送包含第二终端的标识的拒绝响应。
上述逻辑地址冲突的情形可以分为三种:第一种是写入请求对应的逻辑地址与目标业务请求对应的逻辑地址重叠,第二种是写入请求对应的逻辑地址被目标业务请求对应的逻辑地址完全覆盖,第三种是写入请求对应的逻辑地址与目标业务请求对应的逻辑地址交叉覆盖。具体可参见图5,其示出了本申请实施例涉及的逻辑地址冲突的示意图。
重叠指两个或两个以上的请求对应的逻辑地址完全相同。如图5中的重叠示意图510所示,假设写入请求对应的逻辑地址511的LBA值为0x1000至0x1004,且目标业务请求对应的逻辑地址512的LBA值同样是0x1000至0x1004,则写入请求对应的逻辑地址511与目标业务请求对应的逻辑地址512重叠。
覆盖指一个请求对应的逻辑地址包含于另一个请求对应的逻辑地址中,如图5中的完全覆盖示意图520所示,假设写入请求对应的逻辑地址522的LBA值是0x1001至0x1002,而目标业务请求对应的逻辑地址521的LBA值是0x1001至0x1004,则目标业务请求对应的逻辑地址521覆盖写入请求对应的逻辑地址522。
交叉覆盖指存在一段逻辑地址,该逻辑地址分别是两个业务请求对应的逻辑地址的一部分,且这两个业务请求对应的逻辑地址之间还存在不同的部分,如图5中的交叉覆盖示意图530所示,假设写入请求对应的逻辑地址531的LBA值是0x1000至0x1004,且目标业务请求对应的逻辑地址532的LBA值是0x1003至0x1006,则写入请求对应的逻辑地址531和目标业务请求对应的逻辑地址532交叉覆盖,交叉部分的LBA值为0x1003至0x1004。
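上述三种逻辑地址冲突情形的判定可以用如下Python代码示意(仅为说明性示例,区间采用闭区间表示,函数名为示例性假设):

```python
# 示意:判定两段LBA区间属于"重叠"、"完全覆盖"、"交叉覆盖"还是"无冲突"。

def classify_conflict(a, b):
    """a、b 为 (起始LBA, 结束LBA) 闭区间。"""
    a_start, a_end = a
    b_start, b_end = b
    if a_end < b_start or b_end < a_start:
        return "无冲突"
    if a == b:
        return "重叠"      # 两段逻辑地址完全相同
    if (b_start <= a_start and a_end <= b_end) or \
       (a_start <= b_start and b_end <= a_end):
        return "完全覆盖"  # 一段包含于另一段之中
    return "交叉覆盖"      # 仅有部分地址相交

print(classify_conflict((0x1000, 0x1004), (0x1000, 0x1004)))  # 重叠
print(classify_conflict((0x1000, 0x1004), (0x1003, 0x1006)))  # 交叉覆盖
```

示例中的LBA取值与图5各示意图一致,便于对照正文理解三种冲突情形。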
可选的,在本申请实施例中,该目标业务请求可以用于请求在该目标业务请求对应的逻辑地址中写入数据。
或者,该目标业务请求也可以用于请求将该存储设备中,该目标业务请求对应的逻辑地址中的数据同步至第一存储设备和第二存储设备中的另一个存储设备。例如,以执行该步骤的存储设备是第一存储设备为例,在第二数据中心中的第二存储设备发生故障并恢复后,第二终端在后台同步时,向第一存储设备发送数据同步请求,该数据同步请求用于请求将第二存储设备发生故障过程中,第一存储设备中写入的差异数据同步至第二存储设备。
该步骤可以由图3所示的存储设备中的处理器控制通信接口来实现。
可选的,存储设备在执行步骤404a时,若检测出已被锁定的地址不包含该写入请求对应的逻辑地址的全部或部分地址,则存储设备可以进一步检测写入请求对应的逻辑地址中的全部或部分地址是否即将被锁定,即执行步骤404c、步骤404d和步骤404e。
步骤404c,存储设备检测是否接收到第二终端发送的目标业务请求。
在本申请实施例中,存储设备也有可能同时接收到第一终端发送的写入请求和第二终端发送的该目标业务请求,而写入请求和目标业务请求的逻辑地址又存在冲突,无法同时处理,此时,存储设备需要执行步骤404c,以确定先处理哪一个请求。
该步骤可以由图3所示的存储设备中的处理器来实现。
步骤404d,若检测结果为接收到该目标业务请求,则该存储设备获取该写入请求和该目标业务请求的优先级。
在本申请实施例中,一个存储设备接收到的一个业务请求的优先级可以包括该业务请求对应的业务类型的优先级、发送该业务请求的终端所在的数据中心的优先级以及发送该业务请求的终端的优先级中的至少一种。
可选的,该写入请求和目标业务请求中可以携带各自的优先级,存储设备直接从接收到的写入请求和目标业务请求中解析出写入请求和目标业务请求的优先级。
或者,存储设备也可以获取写入请求和目标业务请求的业务类型、发送终端以及发送终端所在的数据中心等信息,并根据获取到的信息查询该写入请求和目标业务请求各自的优先级。
该步骤可以由图3所示的存储设备中的处理器来实现。
存储设备在执行完成步骤404c时,若检测出没有接收到该目标业务请求,则存储设备可以执行后续步骤409。
步骤404e,当目标业务请求的优先级高于写入请求的优先级时,存储设备向所述第一终端发送拒绝响应。可选的,该拒绝响应中包含有该第二终端的标识。此时,本申请实施例将执行步骤405。
获取到写入请求和目标业务请求各自的优先级后,存储设备可以比较写入请求和目标业务请求各自的优先级的高低,当目标业务请求的优先级高于写入请求的优先级时,存储设备将优先处理目标业务请求,并向第一终端发送拒绝响应;当写入请求的优先级高于目标业务请求的优先级时,存储设备将优先处理该写入请求,并执行后续步骤409。
可选的,当业务请求的优先级包含两种或两种以上的优先级时,存储设备可以按照预定的顺序依次比较写入请求和目标业务请求的各种优先级,直至确定出写入请求和目标业务请求的优先级的高低为止。
比如,以业务请求的优先级包含上述业务请求对应的业务类型的优先级、发送该业务请求的终端所在的数据中心的优先级以及发送该业务请求的终端的优先级为例,存储设备获取到写入请求和目标业务请求各自的优先级后,首先比对写入请求的业务类型的优先级和目标业务请求的业务类型的优先级,若目标业务请求的业务类型的优先级高于写入请求的业务类型的优先级(比如,目标业务请求是数据同步请求,且数据同步的优先级高于写入的优先级),则存储设备确定目标业务请求的优先级高于写入请求的优先级;若目标业务请求的业务类型的优先级低于写入请求的业务类型的优先级(比如,目标业务请求是数据同步请求,且数据同步的优先级低于写入的优先级),则存储设备确定目标业务请求的优先级低于写入请求的优先级。若目标业务请求的业务类型的优先级与写入请求的业务类型的优先级相同(比如,目标业务请求也是一个写入请求),则存储设备进一步比较第一终端所在的数据中心的优先级与第二终端所在的数据中心的优先级的高低,若第一终端所在的数据中心的优先级高于第二终端所在的数据中心的优先级(比如,假设第一终端处于第一数据中心,第二终端处于第二数据中心,且第一数据中心的优先级高于第二数据中心的优先级),则确定写入请求的优先级高于目标业务请求的优先级;若第一终端所在的数据中心的优先级低于第二终端所在的数据中心的优先级(比如,假设第一终端处于第一数据中心,第二终端处于第二数据中心,且第一数据中心的优先级低于第二数据中心的优先级),则确定写入请求的优先级低于目标业务请求的优先级;若第一终端所在的数据中心的优先级与第二终端所在的数据中心的优先级相同(比如,假设第一终端和第二终端处于同一个第一数据中心),则存储设备进一步比较第一终端和第二终端的优先级,若第一终端的优先级高于第二终端的优先级,则存储设备确定写入请求的优先级高于目标业务请求的优先级,若第一终端的优先级低于第二终端的优先级,则存储设备确定写入请求的优先级低于目标业务请求的优先级。可选的,在本申请实施例中,处于同一个数据中心中的不同终端之间可以设置不同的优先级。
该步骤可以由图3所示的存储设备中的处理器控制通信接口来实现。
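上述按“业务类型→数据中心→终端”的顺序逐级比较优先级的过程,可以用如下Python代码示意(仅为说明性示例,优先级以数值表示且数值越大优先级越高,字段名均为示例性假设):

```python
# 示意:逐级比较两个业务请求的三级优先级,任一级分出高低即返回结果。

def compare_priority(req_a, req_b):
    """req 为 dict,含 type_prio(业务类型)、dc_prio(数据中心)、
    term_prio(终端)三级优先级。返回 1 表示 req_a 优先,
    -1 表示 req_b 优先,0 表示三级优先级完全相同。"""
    for key in ("type_prio", "dc_prio", "term_prio"):
        if req_a[key] != req_b[key]:
            return 1 if req_a[key] > req_b[key] else -1
    return 0

sync_req  = {"type_prio": 2, "dc_prio": 1, "term_prio": 1}  # 数据同步请求
write_req = {"type_prio": 1, "dc_prio": 2, "term_prio": 3}  # 写入请求
print(compare_priority(sync_req, write_req))  # 业务类型优先级先分出高低
```

该示例中业务类型优先级最先比较,只有在业务类型相同(例如两个都是写入请求)时,才会继续比较数据中心和终端的优先级,与正文描述的逐级判定顺序一致。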
当目标业务请求的优先级高于写入请求的优先级时,存储设备处理目标业务请求,并在处理目标业务请求的过程中,锁定存储设备的存储空间中对应目标业务请求的逻辑地址。其中,存储设备处理目标业务请求,并在处理目标业务请求的过程中,锁定目标业务请求的逻辑地址的步骤,可以由图3所示的存储设备中的处理器控制通信接口来实现。
可选的,存储设备在处理完目标业务请求并解除对目标业务请求对应的逻辑地址的锁定之后,还可以向第二终端发送目标业务请求处理完成的响应,以指示该第二终端该目标业务请求已经处理完成,写入请求中的部分或者全部地址的锁定已经解除。
可选的,该存储设备还可以将该目标业务请求以及锁定指示信息发送到其对应的备份存储设备中,其中,该锁定指示信息用于指示备份存储设备在处理目标业务请求的过程中,锁定备份存储设备的存储空间中对应目标业务请求的逻辑地址。在存储设备处理完成该目标业务请求并解除对该目标业务请求对应的逻辑地址的锁定后,存储设备将向第二终端发送用于指示该目标业务请求处理完成的响应,以指示第二终端该目标业务请求处理已经完成,该返回给第二终端的响应目的在于使目标业务请求的发送方能够确定自身的请求已经被执行完成,以保证目标业务请求的处理满足分布式强一致性复制协议。相应的,第二终端在获知存储设备执行完成该目标业务请求后,可以认为该目标业务请求对应的逻辑地址已解除锁定。
其中,分布式强一致性复制协议是为保证双活数据中心中的各个副本数据的数据一致性而引入的协议。本申请实施例涉及的双活数据中心支持该协议。该协议具体要求双活数据中心在指定数据更新操作完成时,两个保存数据的数据中心均要保证指定数据更新成功。并且,该分布式强一致性复制要求在保存指定数据的副本发生故障时,需要在该发生故障的副本中的数据和其它各个保存该指定数据的副本中的数据恢复到一致后,才允许将该发生过故障的副本重新投入到业务处理中。
步骤405,该第一终端接收存储设备返回的拒绝响应。
在本申请的实施例中,在接收到第一存储设备或者第二存储设备中任意一个存储设备发送的拒绝响应时,第一终端可以视为第一存储设备和第二存储设备都发出了拒绝响应。
其中,在该第一终端接收存储设备返回的拒绝响应时,该第一终端可以获取该拒绝响应中携带的第二终端的标识。
该步骤可以由图2所示的终端中的处理器控制通信接口来实现。
步骤406,第一终端向第二终端转发该写入请求。
可选的,第一终端可以获取拒绝响应中携带的第二终端的标识,并根据第二终端的标识向第二终端转发该写入请求。
该步骤可以由图2所示的终端中的处理器控制通信接口来实现。
步骤407,第二终端接收第一终端转发的该写入请求。
该步骤可以由图2所示的终端中的处理器控制通信接口来实现。
步骤408,第二终端在该全部或部分地址的锁定被解除后,向该第一存储设备和该第二存储设备分别发送该写入请求。
可选的,在本申请实施例所涉及的存储系统运行时,如果第二终端在接收到第一终端转发的写入请求的同时,还接收到了其它终端转发的写入请求,则第二终端可以根据发送写入请求的终端的优先级确定先发送哪一个写入请求。
该步骤可以由图2所示的终端中的处理器控制通信接口来实现。
相应的,存储设备接收该写入请求。存储设备在接收到该写入请求后,继续监测该写入请求的逻辑地址中的部分或者全部地址是否已锁定或即将被锁定,如果该写入请求的逻辑地址中的部分或者全部地址已锁定或即将被锁定,则向第二终端返回拒绝响应,否则该存储设备将执行步骤409。
该存储设备接收写入请求的动作,可以由实现为第一存储设备或第二存储设备的存储控制设备中的处理器控制通信接口来实现。
步骤409,存储设备按照该写入请求对应的逻辑地址写入该写入请求指示的数据。
需要特别说明的是,步骤409由组成双活卷的两个存储设备一同执行,在本申请所示实施例中可以视为第一存储设备和第二存储设备在各自的存储空间的相同逻辑地址中写入相同数据。在第一存储设备和第二存储设备均存储完成该相同的数据后,步骤409视为执行完成。
该步骤可以由图3所示的存储设备中的处理器来实现。
步骤410,存储设备锁定该存储设备的存储空间中对应该写入请求的逻辑地址。
该步骤可以由图3所示的存储设备中的处理器来实现。
步骤411,存储设备向对应的备份存储设备发送该写入请求以及锁定指示信息,该锁定指示信息用于指示该备份存储设备在处理该写入请求的过程中,锁定该备份存储设备的存储空间中对应该写入请求的逻辑地址。
可选的,为了防止存储设备因故障造成该存储设备中存储的数据丢失,在该存储设备按照写入请求写入数据时,该存储设备向对应的备份存储设备发送该写入请求以及锁定指示信息。
可选的,该存储设备向对应的备份存储设备发送该正在处理的业务请求(比如写入请求或者目标业务请求)时,也可以不发送锁定指示信息,由备份存储设备在接收并处理该业务请求时,自动将该业务请求对应的逻辑地址锁定。
可选的,按照分布式强一致性复制要求,备份存储设备在完成对业务请求的处理时,将向发出该业务请求的终端返回处理成功的响应。
该步骤可以由图3所示的存储设备中的处理器控制通信接口来实现。
下面,通过两个具体的数据写入例子介绍本申请能够实现的两个写入请求处理流程。
在本申请能够实现的一个数据写入案例中,以图1所示的存储系统的架构为例,请参考图6,其示出了本申请实施例涉及的一种写入请求处理示意图,如图6所示,本数据写入案例可分为以下步骤来实现。
1)第一数据中心中的第一终端在接收到写入数据的指令时,生成写入请求1。
2)该第一终端根据存储系统视图获取到该写入请求1对应的逻辑地址,将该写入请求1分别发送到第一存储设备和第二存储设备中进行处理。当第一存储设备接收到该写入请求1后,对该写入请求1对应的逻辑地址进行锁定,并按照该写入请求1的指示写入数据;同时,当第二存储设备接收到该写入请求1后,对该写入请求1对应的逻辑地址也进行锁定,并按照该写入请求1的指示写入数据。由于第三存储设备是第一存储设备的备份设备,第四存储设备是第二存储设备的备份设备,所以第三存储设备与第一存储设备锁定了相同的逻辑地址并按照相同的写入请求1写入了数据,同理,第四存储设备也执行了与第二存储设备相同的操作。
3)第二终端生成另一个写入请求2,该第二终端生成的写入请求2和第一终端生成的写入请求1对应的逻辑地址部分或全部相同。
4)第二终端将生成的写入请求2分别发送给第一存储设备和第二存储设备。
5)当第一存储设备和第二存储设备接收到该写入请求2后,两个存储设备都将检测已被锁定的逻辑地址是否包含该写入请求2对应的逻辑地址的全部或部分地址。由于该写入请求2对应的逻辑地址和写入请求1对应的逻辑地址部分或全部相同,所以第一存储设备和第二存储设备都将生成拒绝响应,并将该拒绝响应反馈给第二终端。同时,上述两个存储设备在生成的拒绝响应中添加第一终端的标识。
6)第二终端在接收到拒绝响应后,获取该拒绝响应中的第一终端的标识,将写入请求2转发给该第一终端。
7)第一终端在接收到该第二终端转发来的写入请求2后,将其排入任务队列中;随后,第一终端接收到第一存储设备和第二存储设备执行完写入请求1后发出的完成响应。
8)第一终端向用户反馈该写入请求1已完成的响应。
9)第一终端将写入请求2分别发送给第一存储设备和第二存储设备。
可选的,在本例中还可以以后台数据同步请求作为终端发出的请求,第一存储设备和第二存储设备仍将锁定相同的逻辑地址,差异仅在于其中一个存储设备锁定的逻辑地址中的数据用于读取,而另一个存储设备锁定的逻辑地址用于写入数据。
在本申请能够实现的另一个数据写入案例中,本申请能够通过数据游标的方式来避免数据写入和后台数据同步之间因逻辑地址相同产生的冲突。按照图1所示的存储系统架构,请参考图7,其示出了本申请实施例涉及的另一种写入请求处理示意图。如图7所示,该处理过程具体如下:
1)第一终端发起后台数据同步。
在第一终端发起后台同步时,第一终端指示第一存储设备以数据游标当前与双活卷空间的逻辑地址对齐的位置为起点,以数据游标的粒度大小为基准,读取待同步的数据。例如,当游标粒度为1MB时,第一存储设备锁定从当前数据游标所在位置对齐的1MB的逻辑地址,并读取该1MB内的数据,同时第三存储设备锁定同样的1MB对应的逻辑地址。
2)第一存储设备将读取到的1MB数据发送给第一终端。
3)第一终端向第二数据中心中的第二存储设备发送写入请求1,该写入请求1用于请求该1MB数据写入第二存储设备中的相同逻辑地址,第二存储设备接收到该1MB数据后,将锁定该1MB数据对应的逻辑地址,并将该1MB数据写入到该第二存储设备中的对应的逻辑地址中,同时第四存储设备锁定同样的1MB对应的逻辑地址,并将该1MB数据写入到该第四存储设备中的对应的逻辑地址中。
4)第二终端生成写入请求2。
5)第二终端将该写入请求2分别发送到第一存储设备和第二存储设备中。第一存储设备和第二存储设备分别检测该写入请求2对应的逻辑地址是否和当前1MB锁定的逻辑地址相同,若完全不同,则直接执行该写入请求2。
6)当该写入请求2对应的逻辑地址和当前1MB锁定的逻辑地址部分相同或者完全相同,第一存储设备和第二存储设备获取触发锁定该逻辑地址的第一终端的标识,并将携带有该第一终端的标识的拒绝响应返回给第二终端。
7)第二终端在接收到该拒绝响应后,向该第一终端转发该写入请求2。
8)第二存储设备完成写入该1MB数据后,将解除该1MB对应的逻辑地址的锁定,并向该第一终端返回写入成功的响应。
9)第一终端分别向第一存储设备和第二存储设备发送第二终端转发的写入请求2。
第一存储设备和第二存储设备在接收到上述第二终端转发的写入请求2后,处理该第二终端转发来的写入请求2。
可选的,在上述后台数据同步的案例中,由于一次同步的数据量可能较大,例如10GB数据,若直接将该10GB的数据锁定来进行数据同步,则会对该存储系统中其它需要对该数据段读写的业务造成影响。为了避免上述影响,本申请在进行后台数据同步时,利用数据游标的方式对需要同步的数据进行更新。具体执行时可将需要同步的数据分割为若干段长为游标长度的数据段,依次锁定每一段游标所对应的逻辑地址并进行同步,当同步完成一个数据段后,读写两侧的存储设备都将解除该数据段对应的逻辑地址的锁定,并将紧随该数据段的下一个数据段对应的读写两侧存储设备中的逻辑地址锁定,进行该数据段的同步工作。
需要说明的是,需要同步的数据段在双活卷空间中可以是连续存在的,也可以是不连续存在的。例如,数据游标长为1MB,需要同步的数据为3MB,以游标当前所对齐的双活卷空间位置为起点,该需要同步的数据分别在第3MB、第5MB和第7MB所在的双活卷空间位置。则数据游标会在第1MB和第2MB处分别进行检测,未检测到需要同步的数据后,移动到第3MB的位置,由于该位置的数据需要同步,所以数据游标指示第一存储设备和第二存储设备分别锁定该逻辑地址,并进行该位置的数据同步工作。在同步完该第3MB位置的数据后,数据游标将分别接收到第一存储设备和第二存储设备完成同步的响应,之后数据游标将依次检测第4MB、第5MB……直到其确定预先设定的3MB长度的待同步数据都同步完成后,结束同步工作,并向发起该同步工作的终端返回同步成功的响应。
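上述以数据游标粒度分段“锁定—同步—解锁”的过程可以用如下Python代码示意(仅为说明性示例,数据段以偏移集合表示,函数名均为示例性假设):

```python
# 示意:按游标粒度将待同步数据分段,逐段"锁定 -> 同步 -> 解锁",
# 避免一次性锁定整段大数据而阻塞其它读写业务。

def sync_with_cursor(segments, granularity, do_sync):
    """segments: 需要同步的数据段起始偏移集合(已按 granularity 对齐)。
    do_sync(offset, length): 对读写两侧同步一个数据段的回调。
    返回同步结束后仍残留的锁集合(正常情况下应为空)。"""
    locked = set()
    for offset in sorted(segments):
        locked.add(offset)             # 锁定当前游标对应的逻辑地址段
        do_sync(offset, granularity)   # 读写两侧对该段进行同步
        locked.discard(offset)         # 该段同步完成后立即解除锁定
    return locked

synced = []
leftover = sync_with_cursor({3, 5, 7}, 1, lambda off, ln: synced.append(off))
print(synced, leftover)
```

示例与正文一致:待同步的数据段可以不连续(第3、5、7个粒度位置),游标按偏移顺序逐段处理,任一时刻只锁定一个粒度的逻辑地址。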
综上所述,本申请实施例所示的一种写入请求处理方法,通过第一终端获取数据写入请求,并根据数据存储系统视图向第一存储设备和第二存储设备分别发送写入请求,第一存储设备和第二存储设备分别接收到该写入请求后,将各自检测该写入请求对应的逻辑地址中的全部或部分地址是否已被锁定或即将被锁定;当检测结果为否定时,将各自锁定该写入请求对应的逻辑地址并执行该写入请求,同时为了备份第一存储设备和第二存储设备中的数据,还将指示第三存储设备执行与第一存储设备完全相同的操作,并指示第四存储设备执行与第二存储设备完全相同的操作;当检测结果为肯定时,向发出该写入请求的第一终端返回携带第二终端的标识的拒绝响应,该第二终端是触发锁定或即将锁定该写入请求对应的全部或部分逻辑地址的终端,由该第一终端将该写入请求转发到该第二终端,第二终端检测到该写入请求对应的逻辑地址解除锁定后,将该写入请求分别发送给第一存储设备和第二存储设备,指示这两个存储设备执行该写入请求。在本申请中,由于任意数据中心中的终端都不需要在每次发送写入请求之前都跨数据中心请求地址锁定,备份数据中心中的终端发送写入请求时,也就不存在从发送锁定请求到接收锁定通知之间的时延,从而在避免写入请求的地址与其它写入请求或数据同步请求的地址产生冲突的同时,极大地提高了备份数据中心的处理性能。
另外,由于本申请实施例在存储设备检测到当前写入请求对应的逻辑地址已被部分或者全部锁定时,再行检测锁定该逻辑地址的终端的标识,通过将该当前写入请求转发给锁定该逻辑地址的终端,使得产生写入冲突的请求转变为串行处理,在减小了分布式双活存储系统的时延的基础上,保证了无差错地执行逻辑地址冲突的写入请求。
下述为本申请的装置实施例,可以用于执行本申请的方法实施例。对于本申请的装置实施例中未披露的细节,请参照本申请的方法实施例。
图8是本申请的实施例提供的一种写入请求处理装置的结构方框图,该写入请求处理装置可以通过硬件电路或者软件硬件的结合实现成为写入请求处理设备的部分或者全部。该写入请求处理装置可以包括:写入请求发送单元801、拒绝响应接收单元802和转发单元803。
写入请求发送单元801,用于执行与上述步骤402相同或者相似的步骤。
拒绝响应接收单元802,用于执行与上述步骤405中的部分相同或者相似的步骤。
转发单元803,用于执行与上述步骤406相同或者相似的步骤。
图9是本申请的实施例提供的另一种写入请求处理装置的结构方框图,该写入请求处理装置可以通过硬件电路或者软件硬件的结合实现成为存储设备的部分或者全部。该写入请求处理装置可以包括:写入请求接收单元901、拒绝响应发送单元902、处理锁定单元903和锁定指示发送单元904。
写入请求接收单元901,用于执行与上述步骤403相同或相似的步骤,以及用于执行存储设备接收写入请求的步骤。
拒绝响应发送单元902,用于执行与上述步骤404(包括步骤404a、404b、404c、404d和404e)相同或相似的步骤。
处理锁定单元903,用于执行与上述步骤404下,存储设备处理目标业务请求,并在处理目标业务请求的过程中,锁定目标业务请求的逻辑地址的步骤相同或相似的步骤,或者,用于执行与上述步骤409及410相同或相似的步骤。
锁定指示发送单元904,用于执行与上述步骤411相同或相似的步骤。
需要说明的是:上述实施例提供的写入请求处理装置在存储数据时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的写入请求处理装置与写入请求处理方法的方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
上述本申请的实施例序号仅仅为了描述,不代表实施例的优劣。
本领域普通技术人员可以理解,实现上述实施例中由处理器执行的全部或部分步骤可以通过硬件来完成,也可以通过指令来控制相关的硬件完成,所述的指令可以存储于一种计算机可读存储介质中,上述提到的计算机可读存储介质可以是只读存储器,磁盘或光盘等。
以上所述,仅为本申请能够实现的一种具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应该以权利要求的保护范围为准。

Claims (27)

  1. 一种终端,其特征在于,用于包括第一数据中心和第二数据中心的存储系统中,所述第一数据中心包含第一存储设备,所述第二数据中心包含第二存储设备,所述第一数据中心和/或所述第二数据中心还包含所述终端在内的至少两个终端,所述终端包括处理器和通信接口,所述通信接口被配置为由所述处理器控制,
    所述处理器,用于通过所述通信接口向所述第一存储设备和所述第二存储设备分别发送写入请求,所述写入请求用于指示所述第一存储设备和所述第二存储设备在各自的存储空间的相同逻辑地址中写入相同数据;
    所述处理器,用于通过所述通信接口接收存储设备返回的拒绝响应,所述存储设备是所述第一存储设备或者所述第二存储设备,所述拒绝响应用于指示所述写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定;
    所述处理器,用于通过所述通信接口向第二终端转发所述写入请求,使得所述第二终端在所述全部或部分地址的锁定被解除后,向所述第一存储设备和所述第二存储设备分别发送所述写入请求,所述第二终端是所述至少两个终端中触发锁定所述全部或部分地址的终端。
  2. 根据权利要求1所述的终端,其特征在于,
    所述处理器,还用于获取所述拒绝响应中携带的所述第二终端的标识,根据所述第二终端的标识,通过所述通信接口向所述第二终端转发所述写入请求。
  3. 一种存储设备,其特征在于,用于包括第一数据中心和第二数据中心的存储系统中,所述第一数据中心包含第一存储设备,所述第二数据中心包含第二存储设备,所述第一数据中心和/或所述第二数据中心还包含至少两个终端,所述设备为所述第一存储设备和所述第二存储设备中的任一存储设备,所述设备包括处理器和通信接口,所述通信接口被配置为由所述处理器控制,
    所述处理器,用于通过所述通信接口接收所述至少两个终端中的第一终端发送的写入请求;
    所述处理器,用于当所述写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定时,通过所述通信接口向所述第一终端发送拒绝响应,使得所述第一终端向第二终端转发所述写入请求,所述第二终端是所述至少两个终端中触发锁定所述全部或部分地址的终端;
    所述处理器,用于通过所述通信接口接收所述第二终端在所述全部或部分地址的锁定被解除后发送的所述写入请求。
  4. 根据权利要求3所述的设备,其特征在于,在通过所述通信接口向所述第一终端发送拒绝响应时,所述处理器具体用于,
    检测所述存储设备的存储空间对应的逻辑地址中,已被锁定的地址是否包含所述全部或部分地址;
    若检测结果为所述已被锁定的地址包含所述全部或部分地址,则通过所述通信接口向所述第一终端发送所述拒绝响应。
  5. 根据权利要求4所述的设备,其特征在于,在通过所述通信接口向所述第一终端发送拒绝响应时,所述处理器,还用于
    若所述已被锁定的地址不包含所述全部或部分地址,则检测是否接收到所述第二终端发送的目标业务请求,所述目标业务请求对应的逻辑地址与所述写入请求对应的逻辑地址中包含相同地址;
    若检测结果为接收到所述目标业务请求,则通过所述通信接口获取所述写入请求和所述目标业务请求的优先级;
    当所述目标业务请求的优先级高于所述写入请求的优先级时,通过所述通信接口向所述第一终端发送所述拒绝响应。
  6. 根据权利要求5所述的设备,其特征在于,
    所述目标业务请求用于请求在所述目标业务请求对应的逻辑地址中写入数据,且所述目标业务请求对应的逻辑地址包含所述全部或部分地址;
    或者,
    所述目标业务请求用于请求将所述目标业务请求对应的逻辑地址中的数据同步至所述第一存储设备和所述第二存储设备中的另一存储设备,且所述目标业务请求对应的逻辑地址包含所述全部或部分地址。
  7. 根据权利要求5所述的设备,其特征在于,
    所述处理器,还用于当所述目标业务请求的优先级高于所述写入请求的优先级时,处理所述目标业务请求,并在处理所述目标业务请求的过程中,锁定所述存储设备的存储空间中对应所述目标业务请求的逻辑地址。
  8. 根据权利要求3至7任一所述的设备,其特征在于,
    所述处理器,还用于通过所述通信接口向对应的备份存储设备发送所述目标业务请求以及锁定指示信息,所述锁定指示信息用于指示所述备份存储设备在处理所述目标业务请求的过程中,锁定所述备份存储设备的存储空间中对应所述目标业务请求的逻辑地址。
  9. 根据权利要求3至7任一所述的设备,其特征在于,
    所述处理器,还用于通过所述通信接口向所述第一终端返回包含有所述第二终端的标识的所述拒绝响应。
  10. 一种写入请求处理装置,其特征在于,用于包括第一数据中心和第二数据中心的存储系统中,所述第一数据中心包含第一存储设备,所述第二数据中心包含第二存储设备,所述第一数据中心和/或所述第二数据中心还包含至少两个终端,所述装置应用于所述至少两个终端中的一个终端中,所述装置包括:
    写入请求发送单元,用于向所述第一存储设备和所述第二存储设备分别发送写入请求,所述写入请求用于指示所述第一存储设备和所述第二存储设备在各自的存储空间的相同逻辑地址中写入相同数据;
    拒绝响应接收单元,用于接收存储设备返回的拒绝响应,所述存储设备是所述第一存储设备或者所述第二存储设备,所述拒绝响应用于指示所述写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定;
    转发单元,用于向第二终端转发所述写入请求,使得所述第二终端在所述全部或部分地址的锁定被解除后,向所述第一存储设备和所述第二存储设备分别发送所述写入请求,所述第二终端是所述至少两个终端中触发锁定所述全部或部分地址的终端。
  11. 根据权利要求10所述的装置,其特征在于,
    所述转发单元,还用于获取所述拒绝响应中携带的所述第二终端的标识,根据所述第二终端的标识,向所述第二终端转发所述写入请求。
  12. 一种写入请求处理装置,其特征在于,用于包括第一数据中心和第二数据中心的存储系统中,所述第一数据中心包含第一存储设备,所述第二数据中心包含第二存储设备,所述第一数据中心和/或所述第二数据中心还包含至少两个终端,所述装置应用于所述第一存储设备和所述第二存储设备中的任一存储设备中,所述装置包括:
    写入请求接收单元,用于接收所述至少两个终端中的第一终端发送的写入请求;
    拒绝响应发送单元,用于当所述写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定时,向所述第一终端发送拒绝响应,使得所述第一终端向第二终端转发所述写入请求,所述第二终端是所述至少两个终端中触发锁定所述全部或部分地址的终端;
    所述写入请求接收单元,还用于接收所述第二终端在所述全部或部分地址的锁定被解除后发送的所述写入请求。
  13. 根据权利要求12所述的装置,其特征在于,所述拒绝响应发送单元,具体用于
    检测所述存储设备的存储空间对应的逻辑地址中,已被锁定的地址是否包含所述全部或部分地址;
    若检测结果为所述已被锁定的地址包含所述全部或部分地址,则向所述第一终端发送所述拒绝响应。
  14. 根据权利要求13所述的装置,其特征在于,所述拒绝响应发送单元,还用于
    若检测结果为所述已被锁定的地址不包含所述全部或部分地址,则检测是否接收到所述第二终端发送的目标业务请求,所述目标业务请求对应的逻辑地址与所述写入请求对应的逻辑地址中包含相同地址;
    若检测结果为接收到所述目标业务请求,则获取所述写入请求和所述目标业务请求的优先级;
    当所述目标业务请求的优先级高于所述写入请求的优先级时,向所述第一终端发送所述拒绝响应。
  15. 根据权利要求14所述的装置,其特征在于,
    所述目标业务请求用于请求在所述目标业务请求对应的逻辑地址中写入数据,且所述目标业务请求对应的逻辑地址包含所述全部或部分地址;
    或者,
    所述目标业务请求用于请求将所述目标业务请求对应的逻辑地址中的数据同步至所述第一存储设备和所述第二存储设备中的另一存储设备,且所述目标业务请求对应的逻辑地址包含所述全部或部分地址。
  16. 根据权利要求14所述的装置,其特征在于,所述装置还包括:
    处理锁定单元,用于当所述目标业务请求的优先级高于所述写入请求的优先级时,处理所述目标业务请求,并在处理所述目标业务请求的过程中,锁定所述存储设备的存储空间中对应所述目标业务请求的逻辑地址。
  17. 根据权利要求12至16任一所述的装置,其特征在于,所述装置还包括:
    锁定指示发送单元,用于向对应的备份存储设备发送所述目标业务请求以及锁定指示信息,所述锁定指示信息用于指示所述备份存储设备在处理所述目标业务请求的过程中,锁定所述备份存储设备的存储空间中对应所述目标业务请求的逻辑地址。
  18. 根据权利要求12至16任一所述的装置,其特征在于,
    所述拒绝响应发送单元,用于向所述第一终端返回包含有所述第二终端的标识的所述拒绝响应。
  19. 一种写入请求处理方法,其特征在于,用于包括第一数据中心和第二数据中心的存储系统中,所述第一数据中心包含第一存储设备,所述第二数据中心包含第二存储设备,所述第一数据中心和/或所述第二数据中心还包含至少两个终端,所述方法由所述至少两个终端中的第一终端执行,所述方法包括:
    所述第一终端向所述第一存储设备和所述第二存储设备分别发送写入请求,所述写入请求用于指示所述第一存储设备和所述第二存储设备在各自的存储空间的相同逻辑地址中写入相同数据;
    所述第一终端接收存储设备返回的拒绝响应,所述存储设备是所述第一存储设备或者所述第二存储设备,所述拒绝响应用于指示所述写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定;
    所述第一终端向第二终端转发所述写入请求,使得所述第二终端在所述全部或部分地址的锁定被解除后,向所述第一存储设备和所述第二存储设备分别发送所述写入请求,所述第二终端是所述至少两个终端中触发锁定所述全部或部分地址的终端。
  20. 根据权利要求19所述的方法,其特征在于,所述第一终端向所述第二终端转发所述写入请求,包括:
    所述第一终端获取所述拒绝响应中携带的所述第二终端的标识;
    所述第一终端根据所述第二终端的标识,向所述第二终端转发所述写入请求。
  21. 一种写入请求处理方法,其特征在于,用于包括第一数据中心和第二数据中心的存储系统中,所述第一数据中心包含第一存储设备,所述第二数据中心包含第二存储设备,所述第一数据中心和/或所述第二数据中心还包含至少两个终端,所述方法由所述第一存储设备和所述第二存储设备中的任一存储设备执行,所述方法包括:
    所述存储设备接收所述至少两个终端中的第一终端发送的写入请求;
    当所述写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定时,所述存储设备向所述第一终端发送拒绝响应,使得所述第一终端向第二终端转发所述写入请求,所述第二终端是所述至少两个终端中触发锁定所述全部或部分地址的终端;
    所述存储设备接收所述第二终端在所述全部或部分地址的锁定被解除后发送的所述写入请求。
  22. 根据权利要求21所述的方法,其特征在于,当所述写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定时,所述存储设备向所述第一终端发送拒绝响应,包括:
    所述存储设备检测所述存储设备的存储空间对应的逻辑地址中,已被锁定的地址是否包含所述全部或部分地址;
    若检测结果为所述已被锁定的地址包含所述全部或部分地址,则所述存储设备向所述第一终端发送所述拒绝响应。
  23. 根据权利要求22所述的方法,其特征在于,当所述写入请求对应的逻辑地址中的全部或部分地址已被锁定或即将被锁定时,所述存储设备向所述第一终端发送拒绝响应,还包括:
    若检测结果为所述已被锁定的地址不包含所述全部或部分地址,则所述存储设备检测是否接收到所述第二终端发送的目标业务请求,所述目标业务请求对应的逻辑地址与所述写入请求对应的逻辑地址中包含相同地址;
    若检测结果为接收到所述目标业务请求,则所述存储设备获取所述写入请求和所述目标业务请求的优先级;
    当所述目标业务请求的优先级高于所述写入请求的优先级时,所述存储设备向所述第一终端发送所述拒绝响应。
  24. 根据权利要求23所述的方法,其特征在于,
    所述目标业务请求用于请求在所述目标业务请求对应的逻辑地址中写入数据,且所述目标业务请求对应的逻辑地址包含所述全部或部分地址;
    或者,
    所述目标业务请求用于请求将所述目标业务请求对应的逻辑地址中的数据同步至所述第一存储设备和所述第二存储设备中的另一存储设备,且所述目标业务请求对应的逻辑地址包含所述全部或部分地址。
  25. 根据权利要求23所述的方法,其特征在于,所述方法还包括:
    当所述目标业务请求的优先级高于所述写入请求的优先级时,所述存储设备处理所述目标业务请求,并在处理所述目标业务请求的过程中,锁定所述存储设备的存储空间中对应所述目标业务请求的逻辑地址。
  26. 根据权利要求21至25任一所述的方法,其特征在于,所述方法还包括:
    所述存储设备向对应的备份存储设备发送所述目标业务请求以及锁定指示信息,所述锁定指示信息用于指示所述备份存储设备在处理所述目标业务请求的过程中,锁定所述备份存储设备的存储空间中对应所述目标业务请求的逻辑地址。
  27. 根据权利要求21至25任一所述的方法,其特征在于,所述存储设备向所述第一终端发送拒绝响应,包括:
    所述存储设备向所述第一终端返回包含有所述第二终端的标识的所述拒绝响应。
PCT/CN2017/096052 2016-12-14 2017-08-04 写入请求处理方法、装置及设备 WO2018107772A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611155528.1A CN106843749B (zh) 2016-12-14 2016-12-14 写入请求处理方法、装置及设备
CN201611155528.1 2016-12-14

Publications (1)

Publication Number Publication Date
WO2018107772A1 true WO2018107772A1 (zh) 2018-06-21

Family

ID=59139496

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/096052 WO2018107772A1 (zh) 2016-12-14 2017-08-04 写入请求处理方法、装置及设备

Country Status (2)

Country Link
CN (1) CN106843749B (zh)
WO (1) WO2018107772A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112084200A (zh) * 2020-08-24 2020-12-15 中国银联股份有限公司 数据读写处理方法、数据中心、容灾系统及存储介质

Families Citing this family (13)

Publication number Priority date Publication date Assignee Title
CN106843749B (zh) * 2016-12-14 2020-01-21 华为技术有限公司 写入请求处理方法、装置及设备
CN107329701A (zh) * 2017-06-29 2017-11-07 郑州云海信息技术有限公司 一种存储系统中双活卷的创建方法、装置及系统
CN107329698B (zh) * 2017-06-29 2020-08-11 杭州宏杉科技股份有限公司 一种数据保护方法及存储设备
CN107643961A (zh) * 2017-09-26 2018-01-30 郑州云海信息技术有限公司 普通卷转镜像卷的数据同步方法、系统、装置及存储介质
CN110209641A (zh) * 2018-02-12 2019-09-06 杭州宏杉科技股份有限公司 一种应用于多控存储系统中的集群业务处理方法及装置
CN110519312A (zh) * 2018-05-21 2019-11-29 视联动力信息技术股份有限公司 一种基于视联网的资源同步方法和装置
CN110519313A (zh) * 2018-05-21 2019-11-29 视联动力信息技术股份有限公司 一种基于视联网的资源同步方法和装置
CN109240601B (zh) * 2018-07-24 2021-08-27 中国建设银行股份有限公司 云存储的存储空间处理方法、设备和存储介质
CN109842685A (zh) * 2019-02-14 2019-06-04 视联动力信息技术股份有限公司 一种数据同步方法和装置
CN109918208A (zh) * 2019-02-28 2019-06-21 新华三技术有限公司成都分公司 一种io操作处理方法及装置
CN109976672B (zh) * 2019-03-22 2022-02-22 深信服科技股份有限公司 一种读写冲突优化方法、装置、电子设备及可读存储介质
CN113360081A (zh) * 2020-03-06 2021-09-07 华为技术有限公司 数据处理方法及其设备
CN113268483A (zh) * 2021-05-24 2021-08-17 北京金山云网络技术有限公司 请求处理方法和装置、电子设备和存储介质

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101562761A (zh) * 2009-05-18 2009-10-21 杭州华三通信技术有限公司 一种光网络中的备份存储方法和系统
US20130007368A1 (en) * 2011-06-29 2013-01-03 Lsi Corporation Methods and systems for improved miorroring of data between storage controllers using bidirectional communications
CN103827843A (zh) * 2013-11-28 2014-05-28 华为技术有限公司 一种写数据方法、装置和系统
CN104516793A (zh) * 2013-09-27 2015-04-15 三星电子株式会社 数据镜像控制设备和方法
CN106843749A (zh) * 2016-12-14 2017-06-13 华为技术有限公司 写入请求处理方法、装置及设备

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP2004171411A (ja) * 2002-11-21 2004-06-17 Hitachi Global Storage Technologies Netherlands Bv データ記憶装置及びバッファメモリの管理方法
US8433695B2 (en) * 2010-07-02 2013-04-30 Futurewei Technologies, Inc. System architecture for integrated hierarchical query processing for key/value stores
CN105068771A (zh) * 2015-09-17 2015-11-18 浪潮(北京)电子信息产业有限公司 一种统一存储方法及系统


Also Published As

Publication number Publication date
CN106843749A (zh) 2017-06-13
CN106843749B (zh) 2020-01-21

Similar Documents

Publication Publication Date Title
WO2018107772A1 (zh) 写入请求处理方法、装置及设备
CN109951331B (zh) 用于发送信息的方法、装置和计算集群
WO2016070375A1 (zh) 一种分布式存储复制系统和方法
US8707085B2 (en) High availability data storage systems and methods
US20120023066A1 (en) Initialization protocol for a peer-to-peer replication environment
JP5686034B2 (ja) クラスタシステム、同期制御方法、サーバ装置および同期制御プログラム
US20140059315A1 (en) Computer system, data management method and data management program
US10185636B2 (en) Method and apparatus to virtualize remote copy pair in three data center configuration
JP2010186472A (ja) スプリットブレイン状況におけるメジャーグループを決定するための方法、システム、及びコンピュータ読み取り可能な記録媒体
US10048978B2 (en) Apparatus and method for identifying a virtual machine having changeable settings
US11409711B2 (en) Barriers for dependent operations among sharded data stores
US11953997B2 (en) Systems and methods for cross-regional back up of distributed databases on a cloud service
US20210165768A1 (en) Replication Barriers for Dependent Data Transfers between Data Stores
JP2023541298A (ja) トランザクション処理方法、システム、装置、機器、及びプログラム
CN110633046A (zh) 一种分布式系统的存储方法、装置、存储设备及存储介质
WO2018157605A1 (zh) 一种集群文件系统中消息传输的方法及装置
US20210176315A1 (en) Cross Storage Protocol Access Response for Object Data Stores
EP3896571B1 (en) Data backup method, apparatus and system
US10169441B2 (en) Synchronous data replication in a content management system
WO2015196692A1 (zh) 一种云计算系统以及云计算系统的处理方法和装置
CN111737063B (zh) 双控脑裂的磁盘锁仲裁方法、装置、设备及介质
CN105205160A (zh) 一种数据写入方法及装置
CN110659303A (zh) 一种数据库节点的读写控制方法及装置
US20150135004A1 (en) Data allocation method and information processing system
KR20210044281A (ko) 클라우드 저하 모드에서 지속적인 디바이스 동작 안정성을 보장하기 위한 방법 및 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17880798

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17880798

Country of ref document: EP

Kind code of ref document: A1