CN106843749B - Write request processing method, device and equipment - Google Patents


Info

Publication number
CN106843749B
CN106843749B
Authority
CN
China
Prior art keywords
storage device
terminal
target service
service request
write request
Prior art date
Legal status
Active
Application number
CN201611155528.1A
Other languages
Chinese (zh)
Other versions
CN106843749A (en)
Inventor
曾永强
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201611155528.1A
Publication of CN106843749A
Priority to PCT/CN2017/096052
Application granted
Publication of CN106843749B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 Error detection or correction of the data by redundancy in operation
    • G06F 11/1402 Saving, restoring, recovering or retrying
    • G06F 11/1446 Point-in-time backing up or restoration of persistent data
    • G06F 11/1458 Management of the backup or restore process
    • G06F 11/1469 Backup restoration techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/062 Securing storage systems
    • G06F 3/0622 Securing storage systems in relation to access
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/065 Replication mechanisms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a write request processing method, apparatus, and device, and belongs to the field of data storage. In the method, a first terminal sends a write request to a first storage device and a second storage device respectively. When all or part of the addresses in the logical address corresponding to the write request are locked or are about to be locked, the storage device returns a rejection response to the first terminal, the rejection response indicating that all or part of the addresses in the logical address corresponding to the write request are locked or are about to be locked. The first terminal then forwards the write request to a second terminal, and the second terminal sends the write request to the first storage device and the second storage device respectively after the lock on all or part of the addresses is released. This prevents the address of the write request from conflicting with the addresses of other write requests or data synchronization requests, and at the same time greatly improves the processing performance of the backup data center.

Description

Write request processing method, device and equipment
Technical Field
The present application relates to the field of data storage, and in particular, to a method, an apparatus, and a device for processing a write request.
Background
With the continuous development of information technology, information systems built on data centers carry an increasing number of key services in various industries, and once a data center crashes or encounters errors, important data may be lost. To improve data security, an information system today generally deploys a primary data center and a backup data center to store and back up data, and to improve hardware utilization, the two data centers may process service requests, such as write requests, read requests, and data synchronization requests, at the same time. A pair of primary and backup data centers that process service requests simultaneously is generally called a dual active data center.
When a data center processes service requests, it needs to handle write requests specially to avoid address conflicts between one write request and another, or between a write request and a data synchronization request. In the related art, a dual active data center processes write requests in the following manner:
the dual active data center provides service processing based on the same group of logical addresses, and a lock server is deployed in the primary data center. When a terminal in the primary data center or the backup data center initiates a write to a certain logical address, it sends a lock request to the lock server to request that the logical address be locked; if the logical address is currently unlocked, the lock server locks it and returns a lock notification to the terminal. After receiving the lock notification, the terminal sends write requests to the storage devices of the two data centers respectively, and the storage devices of the two data centers each write the same data at the requested logical address.
In practice, to achieve a better disaster recovery effect, the primary data center and the backup data center are usually geographically far apart. In the above scheme, before a terminal in the backup data center writes data, it must send a lock request to the lock server and wait for the returned lock notification, and both messages are transmitted across data centers. This results in a high time delay between sending the lock request and receiving the lock notification for terminals in the backup data center, and greatly affects the processing performance of the backup data center.
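For illustration only, the related-art flow described above can be sketched in Python pseudocode; the class and method names (RelatedArtTerminal, the lock/unlock calls on a lock server object) are assumptions introduced here, not part of the patent. The point is that the lock round trip precedes every write and, for a terminal in the backup data center, crosses data centers.

```python
class RelatedArtTerminal:
    """Terminal behaviour in the related art: lock first, then write to both data centers."""

    def __init__(self, lock_server, storage_in_dc1, storage_in_dc2):
        self.lock_server = lock_server        # lock server located in the primary data center
        self.storage_in_dc1 = storage_in_dc1
        self.storage_in_dc2 = storage_in_dc2

    def write(self, logical_address, data):
        # For a terminal in the backup data center this exchange crosses data
        # centers, which is the latency the present application aims to remove.
        self.lock_server.lock(logical_address)       # send lock request, wait for lock notification
        try:
            self.storage_in_dc1.write(logical_address, data)
            self.storage_in_dc2.write(logical_address, data)
        finally:
            self.lock_server.unlock(logical_address)
```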
Disclosure of Invention
In order to improve the processing performance of a backup data center in a dual-active data center, embodiments of the present application provide a method, an apparatus, and a device for processing a write request.
In a first aspect, a write request processing method is provided, where the method is used in a storage system including a first data center and a second data center, the first data center includes a first storage device, the second data center includes a second storage device, and the first data center and/or the second data center further includes at least two terminals, and the method is performed by a first terminal of the at least two terminals, and the method includes:
the first terminal sends a write request to the first storage device and the second storage device respectively, wherein the write request is used for indicating the first storage device and the second storage device to write the same data in the same logical address of the respective storage space; the first terminal receives a rejection response returned by the storage device, wherein the storage device is the first storage device or the second storage device, and the rejection response is used for indicating that all or part of the logical addresses corresponding to the write request are locked or are about to be locked; and the first terminal forwards the write request to a second terminal which triggers the locking of all or part of the addresses, so that the second terminal respectively sends the write request to the first storage device and the second storage device after the locking of all or part of the addresses is released.
When the first terminal sends a write request to the first storage device and the second storage device simultaneously, if the second terminal has triggered or is about to trigger the locking of the logical address corresponding to the write request on those storage devices, the first terminal forwards the write request to the second terminal, and the second terminal sends the write request to the first storage device and the second storage device for processing after the logical address corresponding to the write request is unlocked. A terminal in either data center therefore no longer needs to lock the address across data centers before each write request, and a terminal in the backup data center no longer suffers the delay between sending a lock request and receiving a lock notification. Address conflicts between the write request and other write requests or data synchronization requests are thus avoided, while the processing performance of the backup data center is greatly improved.
Optionally, the forwarding, by the first terminal, of the write request to the second terminal specifically includes: the first terminal acquires the identifier of the second terminal carried in the rejection response, and forwards the write request to the second terminal according to the identifier of the second terminal.
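A minimal sketch of the terminal-side behaviour of the first aspect follows, assuming hypothetical helper types (WriteRequest, Rejection) and a handle_write/take_over interface that the patent does not define:

```python
from dataclasses import dataclass

@dataclass
class WriteRequest:
    logical_address: range   # logical block addresses to be written
    data: bytes

@dataclass
class Rejection:
    second_terminal_id: str  # identifier of the terminal that triggered the lock

class FirstTerminal:
    def __init__(self, first_storage, second_storage, peers):
        self.first_storage = first_storage
        self.second_storage = second_storage
        self.peers = peers                      # terminal identifier -> peer terminal

    def submit(self, request: WriteRequest):
        # Send the same write request to both storage devices.
        responses = [self.first_storage.handle_write(request),
                     self.second_storage.handle_write(request)]
        rejection = next((r for r in responses if isinstance(r, Rejection)), None)
        if rejection is not None:
            # No cross-data-center lock round trip: hand the request to the
            # second terminal, which resends it once the lock is released.
            self.peers[rejection.second_terminal_id].take_over(request)
```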
In a second aspect, a write request processing method is provided, where the method is used in a storage system including a first data center and a second data center, the first data center includes a first storage device, the second data center includes a second storage device, and the first data center and/or the second data center further includes at least two terminals, and the method is performed by any one of the first storage device and the second storage device, and the method includes:
the storage device receives a write request sent by a first terminal of the at least two terminals; when all or part of the addresses in the logical addresses corresponding to the write request are locked or are about to be locked, the storage device sends a rejection response to the first terminal, so that the first terminal forwards the write request to a second terminal, wherein the second terminal is a terminal which triggers to lock all or part of the addresses in the at least two terminals; the storage device receives the write request sent by the second terminal after the locking of all or part of the addresses is released.
Optionally, when all or part of the logical addresses corresponding to the write request are locked or are to be locked, the storage device sends a rejection response to the first terminal, which specifically includes: the storage device detects whether the locked address contains the whole or partial address in the logic addresses corresponding to the storage space of the storage device; and if the detection result is that the locked address contains all or part of the address, the storage equipment sends the rejection response to the first terminal.
In this scheme, by detecting whether the locked addresses include all or part of the logical address corresponding to the write request, the condition under which the write request conflicts with a currently locked address is made explicit, so that the storage device can unambiguously identify the conflicting write request and generate a rejection response to it.
Optionally, when all or part of the logical addresses corresponding to the write request are locked or are to be locked, the sending, by the storage device, of a rejection response to the first terminal specifically includes: if the detection result is that the locked addresses do not contain the whole or partial address, the storage device detects whether a target service request sent by the second terminal has been received, where the logical address corresponding to the target service request and the logical address corresponding to the write request contain the same address; if the detection result is that the target service request has been received, the storage device acquires the priorities of the write request and the target service request; and when the priority of the target service request is higher than that of the write request, the storage device sends the rejection response to the first terminal.
In this scheme, the storage device detects whether the logical address corresponding to a target service request and the logical address corresponding to the write request contain the same address, and when they do, decides according to the priorities which request is executed first. On the basis of reducing time delay, the backup data center in the distributed dual active storage system can thus flexibly execute the request with the relatively higher priority under different priority determination modes, improving the system's ability to respond to relatively important tasks first.
Optionally, the target service request is used to request to write data in a logical address corresponding to the target service request, and the logical address corresponding to the target service request includes the whole or part of the address; or the target service request is used for requesting to synchronize data in a logical address corresponding to the target service request to another storage device in the first storage device and the second storage device, and the logical address corresponding to the target service request includes all or part of the address.
Optionally, when the priority of the target service request is higher than the priority of the write request, the storage device processes the target service request, and locks a logical address corresponding to the target service request in a storage space of the storage device in a process of processing the target service request.
In the scheme, the storage device locks the logical address corresponding to the service request when processing the service request, so that the service request can be correctly executed without being interfered by other operations.
Optionally, the storage device sends the target service request and locking indication information to a corresponding backup storage device, where the locking indication information is used to indicate that the backup storage device locks a logical address corresponding to the target service request in a storage space of the backup storage device in a process of processing the target service request.
In the scheme, the storage device sends the target service request and the locking indication information to the corresponding backup storage device, so that the backup storage device can accurately inherit the locking of the logical address when the storage device fails.
Optionally, the sending, by the storage device, the rejection response to the first terminal includes: the storage device returns the rejection response containing the identity of the second terminal to the first terminal.
In a third aspect, a terminal is provided, which includes: a processor and a communication interface configured to be controlled by the processor; the processor is configured to implement the write request processing method provided by the first aspect and the alternative of the first aspect.
In a fourth aspect, a storage device is provided, the device including: a processor and a communication interface configured to be controlled by the processor; the processor is configured to implement the write request processing method provided by the second aspect and the alternatives of the second aspect.
In a fifth aspect, a write request processing apparatus is provided, where the apparatus includes at least one unit, and the at least one unit is configured to implement the write request processing method provided by the first aspect and the alternatives of the first aspect, or the write request processing method provided by the second aspect and the alternatives of the second aspect.
In a sixth aspect, a computer-readable storage medium is provided, storing an executable program for implementing the write request processing method provided by the first aspect and the alternatives of the first aspect, or the write request processing method provided by the second aspect and the alternatives of the second aspect.
Drawings
FIG. 1 is an architecture diagram of a storage system to which the present application relates;
FIG. 2 is a schematic structural diagram of a terminal provided in an exemplary embodiment of the present application;
FIG. 3 is a schematic structural diagram of a storage device provided in an exemplary embodiment of the present application;
FIG. 4 is a flow chart of a method for processing a write request according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a logical address conflict involved in the embodiment shown in FIG. 4;
FIG. 6 is a diagram illustrating a write request process according to the embodiment shown in FIG. 4;
FIG. 7 is a diagram illustrating another write request process according to the embodiment shown in FIG. 4;
fig. 8 is a block diagram illustrating a structure of a write request processing apparatus according to an exemplary embodiment of the present application;
fig. 9 is a block diagram illustrating a structure of a write request processing apparatus according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is an architecture diagram of a storage system according to an embodiment of the present application. The storage system comprises a first data center 100 and a second data center 110, and together the first data center 100 and the second data center 110 form a dual active data center.
Optionally, the first data center 100 and the second data center 110 in the embodiment of the present application may be distinguished according to the actual geographic regions where the data centers are located, for example, a Beijing data center and a Shanghai data center. The first data center 100 communicates through its network interface 106 with the network interface 116 in the second data center 110 over a wired network, whose cabling may be optical fiber, twisted pair, or coaxial cable.
Optionally, in a specific implementation of the present application, each of the first data center 100 and the second data center 110 may include physical servers, which provide storage, computation, and network resources for the data center. The embodiments of the present application mainly use the storage functions of these physical servers. Physical servers can be classified according to the functions described above: a physical server providing computing and network resources may be referred to as a compute node, and several compute nodes in a data center may form a compute node cluster, while physical servers providing storage resources may be referred to as storage devices, and several storage devices may form a storage node cluster. For example, the first data center 100 in fig. 1 includes a first storage device 102a and a third storage device 102b, and the second data center 110 includes a second storage device 112a and a fourth storage device 112b. To improve the security of stored data, one storage device may be selected as a primary storage device, and one or more other storage devices may be set to store the same data as the primary storage device, that is, to act as copies of the primary storage device; such a copy used as a backup may be called a backup storage device. For example, in fig. 1 the third storage device 102b is a copy of the first storage device 102a, and the fourth storage device 112b is a copy of the second storage device 112a.
The first data center 100 and the second data center 110 also include at least two terminals. In a specific deployment of the storage system, the at least two terminals may all be located in the first data center 100; they may all be located in the second data center 110; or at least one terminal may be provided in each of the first data center 100 and the second data center 110.
As shown in fig. 1, a first data center 100 includes at least one terminal including terminal 104, and a second data center 110 includes at least one terminal including a second terminal 114. Both the terminal 104 and the terminal 114 may initiate a service request including a write request, a read request, or a data synchronization request to a storage device in the first data center 100 or the second data center 110. Specifically, the terminal 104 can perform inter-data center communication with the second data center 110 through the network interface 106, and can also perform intra-data center communication with a storage device (such as the first storage device 102a) in the first data center 100 through a data cable inside the first data center 100. Similarly, the terminal 114 can communicate with the first data center 100 across data centers through the network interface 116, and can also communicate with a storage device (such as the second storage device 112a) in the second data center 110 within the data center through a data cable inside the second data center 110.
It should be noted that the terminal 104 and the terminal 114 in the storage system and other terminals running in the two data centers may be actual hardware devices, such as general-purpose computer devices, or virtual software applications, such as clients running on computer devices, where the clients may be virtual machines, virtual machine software, or database software, and so on.
In the storage system shown in fig. 1, the first storage device 102a and the second storage device 112a may be configured as a virtual storage unit across data centers, which is called a dual live volume. For a user using the storage system, the user can write the data to be stored into the dual live volumes without being concerned about which data center's physical storage device the data is specifically stored in. For the storage system, when data is written to the dual live volume, the first storage device 102a and the second storage device 112a each write the same data locally according to the same logical address.
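As a rough illustration of the dual live volume behaviour described above, and assuming hypothetical device objects with a write method, a fan-out write might look like the following sketch; it is not the patent's implementation:

```python
class DualLiveVolume:
    """Virtual storage unit spanning the first and second storage devices."""

    def __init__(self, first_storage_device, second_storage_device):
        self.devices = (first_storage_device, second_storage_device)

    def write(self, logical_address, data):
        # The user of the volume never chooses a data center; both devices
        # receive the same logical address and the same data.
        acks = [device.write(logical_address, data) for device in self.devices]
        return all(acks)   # complete only when both storage devices succeed
```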
Each of the storage devices in fig. 1 may include storage management software, and the storage management software may manage the physical storage chips on the storage device, for example, determine whether corresponding data addresses in the physical storage chips are locked. The storage device may be a Network Attached Storage (NAS) device or a Storage Area Network (SAN) device. Taking a SAN device as an example, its storage space may be represented by a Logical Unit Number (LUN), which is a logical disk presented by the storage array, and each terminal may run on one or more LUNs of the SAN device. The storage management software of the SAN device may control read and write operations on the LUN and may also manage information about the LUN.
In addition, in an implementation of the embodiment of the present application, when the storage system is implemented as a dual active data storage system, the first data center 100 and the second data center 110 may serve as disaster recovery data centers for each other. When one data center fails and the terminals it serves stop working, the other data center continues to work and records difference information. After the failed data center recovers, the data center that recorded the difference information acts as the disaster recovery data center and synchronizes the difference information to the recovered data center, so that the information stored in the recovered data center is consistent with the information stored in the disaster recovery data center.
Please refer to fig. 2, which illustrates a schematic structural diagram of a terminal according to an exemplary embodiment of the present application. The terminal 20 may be a terminal in the data center shown in fig. 1, or when the terminal shown in fig. 1 is a virtual software application, the terminal 20 may also be a computer device running the terminal shown in fig. 1.
The terminal 20 may include: a processor 21 and a communication interface 24.
The processor 21 may include one or more processing units, which may be a Central Processing Unit (CPU), a Network Processor (NP), or the like.
The communication interface 24 may include a local interface for connecting to storage devices or other terminals in the same data center as the terminal, and a network interface for connecting to other data centers or to storage devices in other data centers.
Optionally, the terminal 20 may further comprise a memory 23, and the processor 21 may be connected to the memory 23 and the communication interface 24 via a bus. The memory 23 may be used to store software programs that may be executed by the processor 21. In addition, various service data or user data may be stored in the memory 23.
Optionally, the terminal 20 may also include an output device 25 and an input device 27, both connected to the processor 21. The output device 25 may be a display for displaying information, a power amplifier device for playing sound, a printer, or the like, and the output device 25 may further include an output controller for providing output to the display, the power amplifier device, or the printer. The input device 27 may be a device such as a mouse, keyboard, electronic stylus, or touch panel through which a user inputs information, and the input device 27 may further include an input controller for receiving and processing input from the mouse, keyboard, electronic stylus, or touch panel.
Referring to fig. 3, a schematic structural diagram of a storage device according to an exemplary embodiment of the present application is shown. The storage device 30 may be any of the various storage devices shown in FIG. 1 and described above.
The storage device 30 may include: a processor 31 and a communication interface 34.
The processor 31 may include one or more processing units, which may be a Central Processing Unit (CPU) or a Network Processor (NP), etc.
The communication interface 34 may include a local interface for connecting to terminals or other storage devices in the same data center as the storage device, and a network interface for connecting to other data centers or to storage devices in other data centers.
The storage device 30 may also include a memory 33, and the processor 31 may be connected to the memory 33 and the communication interface 34 via a bus. The memory 33 may be used to store software programs that may be executed by the processor 31. In addition, various service data or user data may be stored in the memory 33.
Referring to fig. 4, a flowchart of a write request processing method according to an exemplary embodiment of the present application is shown. The method may be used in the storage system shown in fig. 1. As shown in fig. 4, the data storage method may include:
in step 401, the first terminal generates a write request.
The first terminal may be the terminal 104, the terminal 114 in the storage system shown in fig. 1, or any other terminal not shown, that is, the first terminal may be a terminal in the first data center, or may be a terminal in the second data center. In the embodiment of the application, when the first terminal needs to initiate data writing, a write request is generated first.
This step may be implemented by a processor in the terminal shown in fig. 2.
Step 402, the first terminal sends write requests to the first storage device and the second storage device respectively according to the data storage system view.
The first storage device may be the first storage device 102a in the storage system shown in fig. 1, and the second storage device may be the second storage device 112a in the second data center, where the write request is used to instruct the first storage device and the second storage device to write the same data in the same logical address of the respective storage space.
In the embodiment of the application, the first storage device and the second storage device form a double live volume. After the first terminal generates the write request, it will send the write request to the first storage device and the second storage device, respectively. The write request may include data to be written and a logical address to be written.
This step may be implemented by a processor in the terminal controlling the communication interface as shown in fig. 2.
In step 403, the storage device receives a write request sent by the first terminal.
The storage device in this step may be any one of the first storage device and the second storage device. In the implementation process of the embodiment of the present application, the first storage device and the second storage device may respectively perform step 403. That is, step 403 may be two parallel steps in which the first storage device receives the write request sent by the first terminal and the second storage device receives the write request sent by the first terminal.
This step may be implemented by a processor controlled communication interface in the memory device shown in fig. 3.
In step 404, when all or a part of the logical addresses corresponding to the write request are already locked or are about to be locked, the storage device sends a rejection response to the first terminal.
In this embodiment, when data is being written to a certain logical address in a storage device, or data synchronization is being performed (for example, data in the logical address is synchronized to the same logical address in another storage device), the storage device may lock the logical address, where the manner in which the storage device locks the logical address may be referred to as an optimistic lock. The optimistic lock is used for locking a storage space corresponding to a specified logical address, enabling the storage space corresponding to the specified logical address to be read and written only by a terminal with the optimistic lock, and forbidding other terminals to write the logical address.
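The per-address optimistic lock described above can be pictured with the following hedged sketch; the LockTable class, its inclusive LBA ranges, and the idea of recording the owning terminal's identifier are illustrative assumptions consistent with the description, not the patent's actual code:

```python
class LockTable:
    """Tracks which logical address ranges are locked and which terminal triggered each lock."""

    def __init__(self):
        self._locks = {}   # (start_lba, end_lba) -> identifier of the owning terminal

    def lock(self, start_lba, end_lba, owner_id):
        self._locks[(start_lba, end_lba)] = owner_id

    def unlock(self, start_lba, end_lba):
        self._locks.pop((start_lba, end_lba), None)

    def owner_of_conflict(self, start_lba, end_lba):
        """Return the terminal holding any lock that touches [start_lba, end_lba],
        or None if the range is free."""
        for (s, e), owner in self._locks.items():
            if s <= end_lba and start_lba <= e:   # ranges share at least one address
                return owner
        return None
```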
After receiving the write request, the storage device (fig. 4 only illustrates this from the perspective of the first storage device; in actual application the second storage device performs the same steps) may detect whether some or all of the addresses in the logical address corresponding to the write request are already locked or are about to be locked, and if so, the storage device rejects the write request and sends a rejection response to the first terminal.
Optionally, in this embodiment of the application, when all or part of the addresses in the logical address corresponding to the write request are already locked or are about to be locked, the storage device may further obtain an identifier of a second terminal that triggers locking of all or part of the addresses, and send the identifier of the second terminal to the first terminal in a reject response.
The second terminal may be a terminal in the same data center as the first terminal, or the second terminal may be a terminal in a different data center from the first terminal.
This step may be implemented by a processor controlled communication interface in the memory device shown in fig. 3.
As shown in fig. 4, in a possible implementation manner of the embodiment of the present application, the storage device may perform steps 404a and 404b when detecting whether all or part of the logical addresses corresponding to the write request are locked.
In step 404a, the storage device detects whether the locked address in the logical addresses corresponding to the storage space of the storage device includes all or part of the address of the logical address corresponding to the write request.
This step may be implemented by a processor in the memory device shown in fig. 3.
In step 404b, if the detection result is that the locked address includes the all or part of address, the storage device sends a rejection response to the first terminal.
In this embodiment of the present application, before receiving a write request sent by the first terminal, if a target service request sent by the second terminal has been received and processing of the target service request is started, the storage device may lock a logical address corresponding to the target service request in a process of processing the target service request, that is, lock the logical address corresponding to the target service request when processing of the target service request is started, and unlock the logical address corresponding to the target service request when processing of the target service request is completed. In the process of processing the target service request, if the storage device receives a write request sent by the first terminal, and the logical address corresponding to the target service request and the logical address corresponding to the write request include the same address (i.e., all or part of the above addresses), that is, it means that there is a conflict between the logical addresses of the two requests, at this time, the storage device sends a reject response including an identifier of the second terminal to the first terminal.
The above situations of logical address conflict can be divided into three types: the first is that the logical address corresponding to the write request overlaps with the logical address corresponding to the target service request, the second is that the logical address corresponding to the write request is completely covered by the logical address corresponding to the target service request, and the third is that the logical address corresponding to the write request and the logical address corresponding to the target service request are cross-covered. Referring specifically to fig. 5, a schematic diagram of a logical address conflict according to an embodiment of the present application is shown.
Overlap means that the logical addresses corresponding to two or more requests are identical. As shown in the overlap diagram 510 in fig. 5, assuming that the LBA values of the logical address 511 corresponding to the write request are 0x1000 to 0x1004 and the LBA values of the logical address 512 corresponding to the target service request are also 0x1000 to 0x1004, the logical address 511 corresponding to the write request overlaps the logical address 512 corresponding to the target service request.
Full coverage indicates that the logical address corresponding to one of the two requests is entirely included in the logical address corresponding to the other. As shown in the full coverage diagram 520 in fig. 5, assuming that the LBA values of the logical address 522 corresponding to the write request are 0x1000 to 0x1002 and the LBA values of the logical address 521 corresponding to the target service request are 0x1000 to 0x1004, the logical address 521 corresponding to the target service request fully covers the logical address 522 corresponding to the write request.
Cross coverage indicates that there exists a segment of logical address that belongs to the logical addresses corresponding to both service requests, while the logical addresses corresponding to the two service requests also have parts that differ. As shown in the cross coverage diagram 530 in fig. 5, assuming that the LBA values of the logical address 531 corresponding to the write request are 0x1000 to 0x1004 and the LBA values of the logical address 532 corresponding to the target service request are 0x1003 to 0x1006, the logical address 531 corresponding to the write request and the logical address 532 corresponding to the target service request cross-cover each other, and the LBA values of the overlapping part are 0x1003 to 0x1004.
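The three conflict cases of fig. 5 amount to a range-intersection test plus a classification of how the two LBA ranges intersect. The following sketch, assuming inclusive (start, end) tuples, is only an illustration of that reasoning:

```python
def classify_conflict(write_range, target_range):
    """write_range and target_range are (start_lba, end_lba) inclusive tuples."""
    ws, we = write_range
    ts, te = target_range
    if we < ts or te < ws:
        return None                     # no shared address, hence no conflict
    if (ws, we) == (ts, te):
        return "overlap"                # identical ranges (diagram 510)
    if (ts <= ws and we <= te) or (ws <= ts and te <= we):
        return "full coverage"          # one range contains the other (diagram 520)
    return "cross coverage"             # ranges intersect, neither contains the other (diagram 530)

# Checked against the values used above:
assert classify_conflict((0x1000, 0x1004), (0x1000, 0x1004)) == "overlap"
assert classify_conflict((0x1000, 0x1002), (0x1000, 0x1004)) == "full coverage"
assert classify_conflict((0x1000, 0x1004), (0x1003, 0x1006)) == "cross coverage"
```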
Optionally, in this embodiment of the present application, the target service request may be used to request that data is written in a logical address corresponding to the target service request.
Or, the target service request may also be used to request to synchronize data in a logical address corresponding to the target service request in the storage device to another storage device of the first storage device and the second storage device. For example, taking the example that the storage device performing this step is the first storage device, when the second terminal synchronizes in the background after the second storage device in the second data center fails and recovers, the second terminal sends a data synchronization request to the first storage device, where the data synchronization request is used to request that the difference data written in the first storage device is synchronized to the second storage device during the failure of the second storage device.
This step may be implemented by a processor controlled communication interface in the memory device shown in fig. 3.
Optionally, when the storage device performs step 404a, if it is detected that the locked address does not include all or part of the address of the logical address corresponding to the write request, the storage device may further detect whether all or part of the address in the logical address corresponding to the write request is to be locked, that is, perform step 404c, step 404d, and step 404 e.
Step 404c, the storage device detects whether a target service request sent by the second terminal is received.
In this embodiment, the storage device may also receive the write request sent by the first terminal and the target service request sent by the second terminal at the same time. If the logical addresses of the write request and the target service request conflict, the two requests cannot be processed at the same time, and the storage device needs to execute step 404c to determine which request is processed first.
This step may be implemented by a processor in the memory device shown in fig. 3.
In step 404d, if the detection result is that the target service request is received, the storage device obtains the priority of the write request and the priority of the target service request.
In this embodiment, the priority of a service request received by a storage device may include at least one of a priority of a service type corresponding to the service request, a priority of a data center in which a terminal sending the service request is located, and a priority of a terminal sending the service request.
Optionally, the write request and the target service request may carry respective priorities, and the storage device directly parses the priorities of the write request and the target service request from the received write request and the target service request.
Or, the storage device may also obtain the service types of the write request and the target service request, the sending terminal, the data center where the sending terminal is located, and the like, and query the priorities of the write request and the target service request according to the obtained information.
This step may be implemented by a processor in the memory device shown in fig. 3.
When the storage device completes step 404c, if it detects that no target service request has been received, the storage device may perform the subsequent step 409.
Step 404e, when the priority of the target service request is higher than the priority of the write request, the storage device sends a rejection response to the first terminal. Optionally, the rejection response includes the identifier of the second terminal. The flow then proceeds to step 405.
After acquiring the respective priorities of the write request and the target service request, the storage device may compare the priorities of the write request and the target service request, and when the priority of the target service request is higher than the priority of the write request, the storage device will preferentially process the target service request and send a reject response to the first terminal; when the priority of the write request is higher than the priority of the target service request, the storage device will process the write request preferentially and perform the following step 409.
Optionally, when the priorities of the service requests include two or more priorities, the storage device may sequentially compare the various priorities of the write request and the target service request according to a predetermined order until the priorities of the write request and the target service request are determined.
For example, assume the priority of a service request includes the priority of the service type corresponding to the request, the priority of the data center where the sending terminal is located, and the priority of the sending terminal. After the storage device obtains the priorities of the write request and the target service request, it first compares the priorities of their service types. If the priority of the service type of the target service request is higher than that of the write request (for example, the target service request is a data synchronization request, and data synchronization has a higher priority than writing), the storage device determines that the priority of the target service request is higher than that of the write request; if the priority of the service type of the target service request is lower than that of the write request (for example, the target service request is a data synchronization request, and data synchronization has a lower priority than writing), the storage device determines that the priority of the target service request is lower than that of the write request. If the priorities of the two service types are the same (for example, the target service request is also a write request), the storage device further compares the priority of the data center where the first terminal is located with the priority of the data center where the second terminal is located. If the priority of the data center where the first terminal is located is higher (for example, the first terminal is located in the first data center, the second terminal is located in the second data center, and the first data center has the higher priority), the storage device determines that the priority of the write request is higher than that of the target service request; if the priority of the data center where the first terminal is located is lower (for example, the first data center has the lower priority), the storage device determines that the priority of the write request is lower than that of the target service request. If the two data centers have the same priority (for example, the first terminal and the second terminal are located in the same first data center), the storage device further compares the priorities of the first terminal and the second terminal: if the priority of the first terminal is higher than that of the second terminal, the storage device determines that the priority of the write request is higher than that of the target service request, and if the priority of the first terminal is lower than that of the second terminal, the storage device determines that the priority of the write request is lower than that of the target service request.
Optionally, in this embodiment of the present application, different priorities may be set between different terminals in the same data center.
This step may be implemented by a processor controlled communication interface in the memory device shown in fig. 3.
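The level-by-level comparison walked through above is essentially a lexicographic comparison over (service type, data center, terminal) priorities. The sketch below is an illustration under that assumption; the RequestPriority type and the numeric priority values are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class RequestPriority:
    service_type: int   # e.g. data synchronization vs. write
    data_center: int    # priority of the data center the sending terminal belongs to
    terminal: int       # priority of the sending terminal

def target_has_priority(write_prio: RequestPriority, target_prio: RequestPriority) -> bool:
    """Return True if the target service request should be processed before the
    write request, comparing level by level and only moving to the next level
    on a tie (a higher number means a higher priority)."""
    for w, t in ((write_prio.service_type, target_prio.service_type),
                 (write_prio.data_center, target_prio.data_center),
                 (write_prio.terminal, target_prio.terminal)):
        if t != w:
            return t > w
    return False   # identical priorities: do not reject the write request
```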
And when the priority of the target service request is higher than that of the write-in request, the storage equipment processes the target service request, and locks a logic address corresponding to the target service request in a storage space of the storage equipment in the process of processing the target service request. The step of processing the target service request by the storage device and locking the logical address of the target service request in the process of processing the target service request may be implemented by controlling the communication interface by the processor in the storage device shown in fig. 3.
Optionally, after the storage device has processed the target service request and released the lock on the logical address corresponding to the target service request, the storage device may further send a response to the second terminal indicating that the processing of the target service request is complete, so as to inform the second terminal that the target service request has been processed and that the lock on part or all of the addresses in the write request has been released.
Optionally, the storage device may further send the target service request and locking indication information to the corresponding backup storage device, where the locking indication information is used to instruct the backup storage device to lock the logical address corresponding to the target service request in its own storage space while processing the target service request. After the storage device has processed the target service request and released the lock on the corresponding logical address, it sends a response to the second terminal indicating that the processing of the target service request is complete; the purpose of this response is to let the sender of the target service request determine that its request has been completed, thereby ensuring that the processing of the target service request complies with the distributed strong consistency replication protocol. Correspondingly, after learning that the storage device has completed the target service request, the second terminal may consider the logical address corresponding to the target service request to be unlocked.
The distributed strong consistency replication protocol is introduced for ensuring the data consistency of each replica data in the double-active data center. The dual active data center related to the embodiment of the application supports the protocol. The protocol specifically requires that when the data updating operation of the double-activity data center is completed, the two data centers for storing data need to ensure that the specified data is updated successfully. Moreover, the distributed strong-consistency replication requires that when a copy for storing specified data fails, the failed copy is allowed to be put into service processing again after data in the failed copy and data in other copies for storing the specified data are restored to be consistent.
Step 405, the first terminal receives a rejection response returned by the storage device.
In the embodiment of the application, when receiving the rejection response sent by any one of the first storage device and the second storage device, the first terminal may regard both the first storage device and the second storage device as having sent the rejection response.
When the first terminal receives a rejection response returned by the storage device, the first terminal may obtain the identifier of the second terminal carried in the rejection response.
This step may be implemented by a processor in the terminal controlling the communication interface as shown in fig. 2.
In step 406, the first terminal forwards the write request to the second terminal.
Optionally, the first terminal may obtain the identifier of the second terminal carried in the rejection response, and forward the write request to the second terminal according to the identifier of the second terminal.
This step may be implemented by a processor in the terminal controlling the communication interface as shown in fig. 2.
Step 407, the second terminal receives the write request forwarded by the first terminal.
This step may be implemented by a processor in the terminal controlling the communication interface as shown in fig. 2.
In step 408, the second terminal sends the write request to the first storage device and the second storage device respectively after the lock of all or part of the addresses is released.
Optionally, when the storage system according to the embodiment of the present application is in operation, if the second terminal receives the write request forwarded by the first terminal and also receives the write request forwarded by another terminal at the same time, the second terminal may determine which write request to send first according to the priority of the terminal that sends the write request.
This step may be implemented by a processor in the terminal controlling the communication interface as shown in fig. 2.
Accordingly, the storage device receives the write request. After receiving the write request, the storage device continues to monitor whether part or all of the addresses in the logical address of the write request are locked or are about to be locked, and if the part or all of the addresses in the logical address of the write request are locked or are about to be locked, a reject response is returned to the second terminal, otherwise, the storage device executes step 409.
The act of the storage device receiving the write request may be implemented by a processor controlled communication interface in a storage control device implemented as the first storage device or the second storage device.
In step 409, the storage device writes the data indicated by the write request according to the logical address corresponding to the write request.
It should be noted that, when the step 409 is executed by two storage devices that form a dual live volume together, in the embodiment shown in this application, it can be considered that the first storage device and the second storage device write the same data in the same logical address of their respective storage spaces. After both the first storage device and the second storage device store the same data, step 409 is considered to be execution complete.
This step may be implemented by a processor in the memory device shown in fig. 3.
In step 410, the storage device locks the logical address corresponding to the write request in the storage space of the storage device.
This step may be implemented by a processor in the memory device shown in fig. 3.
Step 411, the storage device sends the write request and locking indication information to the corresponding backup storage device, where the locking indication information is used to indicate that the backup storage device locks the logical address corresponding to the write request in the storage space of the backup storage device in the process of processing the write request.
Optionally, in order to prevent the storage device from losing data stored in the storage device due to a failure, when the storage device writes data according to a write request, the storage device sends the write request and the locking indication information to the corresponding backup storage device.
Optionally, when the storage device sends the service request being processed (for example, a write request or a target service request) to the corresponding backup storage device, the storage device may not send the locking indication information, and when receiving and processing the service request, the backup storage device automatically locks the logical address corresponding to the service request.
Optionally, according to the requirement of distributed strong-consistency replication, when the backup storage device completes processing of the service request, a response of successful processing is returned to the terminal that sent the service request.
This step may be implemented by a processor controlled communication interface in the memory device shown in fig. 3.
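Steps 409 to 411 can be summarised in the following hedged sketch; the StorageDevice class, the replicate call, and the lock_hint flag standing in for the locking indication information are all assumptions made for illustration, not the patent's API:

```python
class StorageDevice:
    """Sketch of a primary storage device that replicates writes to its backup."""

    def __init__(self, lock_table, backup_device=None):
        self.lock_table = lock_table
        self.backup_device = backup_device   # e.g. the third or fourth storage device in fig. 1

    def process_write(self, start_lba, end_lba, data, sender_id):
        # Lock the logical address for the duration of processing (step 410).
        self.lock_table.lock(start_lba, end_lba, sender_id)
        try:
            if self.backup_device is not None:
                # Forward the request together with a locking indication so the
                # backup locks the same logical address while it writes (step 411).
                self.backup_device.replicate(start_lba, end_lba, data, lock_hint=True)
            self._write_locally(start_lba, end_lba, data)   # write the indicated data (step 409)
        finally:
            self.lock_table.unlock(start_lba, end_lba)

    def _write_locally(self, start_lba, end_lba, data):
        pass   # placeholder for writing to the physical storage chips
```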
Two write request processing flows that can be realized by the present application are described below by two specific data write examples.
In a data writing case that can be implemented by the present application, taking the architecture of the storage system shown in fig. 1 as an example, please refer to fig. 6, which shows a schematic diagram of a write request processing according to an embodiment of the present application, and as shown in fig. 6, the data writing case can be implemented by the following steps.
1) When a first terminal in the first data center receives an instruction to write data, a write request 1 is generated.
2) The first terminal acquires a logical address corresponding to write request 1 according to the storage system view, and sends write request 1 to the first storage device and the second storage device respectively for processing. After receiving write request 1, the first storage device locks the logical address corresponding to write request 1 and writes data as indicated by write request 1; likewise, after receiving write request 1, the second storage device also locks the logical address corresponding to write request 1 and writes data as indicated by write request 1. Since the third storage device is a backup device of the first storage device and the fourth storage device is a backup device of the second storage device, the third storage device locks the same logical address as the first storage device and writes data according to the same write request 1, and similarly, the fourth storage device performs the same operations as the second storage device.
3) The second terminal generates another write request 2, where the logical address corresponding to write request 2 is partially or entirely the same as the logical address corresponding to write request 1 generated by the first terminal.
4) The second terminal sends the generated write request 2 to the first storage device and the second storage device respectively.
5) After the first storage device and the second storage device receive write request 2, both storage devices detect whether the locked logical addresses are the same as the logical address corresponding to write request 2 sent by the second terminal. Since the logical address of write request 2 sent by the second terminal is partially or completely the same as that of write request 1 sent by the first terminal, both the first storage device and the second storage device generate a reject response and feed it back to the second terminal. Meanwhile, the two storage devices add the identifier of the first terminal to the generated rejection responses.
6) After receiving the rejection response, the second terminal acquires the identifier of the first terminal from the rejection response and forwards write request 2 to the first terminal.
7) After receiving write request 2 forwarded by the second terminal, the first terminal places it in a task queue; the first terminal then receives the completion responses sent by the first storage device and the second storage device once write request 1 has been executed.
8) The first terminal feeds back to the user a response that the write request 1 has been completed.
9) The first terminal sends the write request 2 to the first storage device and the second storage device, respectively.
Optionally, in this example, the request sent by the terminal may also be a background data synchronization request, in which case the first storage device and the second storage device still lock the same logical address; the difference is that the logical address locked by one of the storage devices holds the data to be read, while the logical address locked by the other storage device is where that data is to be written.
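To make the terminal-side behaviour of steps 5) to 9) above concrete, the following sketch (reusing the WriteRequest type from the earlier sketch) shows one possible shape of the reject-and-forward handling; Terminal, RejectResponse and the callback fields are illustrative assumptions, not names taken from the embodiment.

// RejectResponse carries the identifier of the terminal that triggered the
// lock on the conflicting logical address (the first terminal in this example).
type RejectResponse struct {
    LockingTerminal string
}

// Terminal is an illustrative stand-in for a terminal in either data center.
type Terminal struct {
    taskQueue []WriteRequest
    forward   func(terminalID string, req WriteRequest) // delivers a request to another terminal
    send      func(req WriteRequest)                    // sends a request to both storage devices
}

// Step 6): on a reject response, the second terminal forwards write request 2
// to the terminal named in the response instead of retrying it across the data center.
func (t *Terminal) handleReject(req WriteRequest, resp RejectResponse) {
    t.forward(resp.LockingTerminal, req)
}

// Step 7): the first terminal queues the forwarded request behind its own
// in-flight write request 1.
func (t *Terminal) onForwardedRequest(req WriteRequest) {
    t.taskQueue = append(t.taskQueue, req)
}

// Steps 8) and 9): once write request 1 has completed on both storage devices,
// the first terminal issues the queued requests to both storage devices.
func (t *Terminal) onWriteComplete() {
    for _, queued := range t.taskQueue {
        t.send(queued)
    }
    t.taskQueue = nil
}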
In another data writing case that can be realized by the present application, a data cursor is used to avoid conflicts between data writing and background data synchronization on the same logical address. Fig. 7 shows a schematic diagram of another write request processing based on the storage system architecture shown in fig. 1. As shown in fig. 7, the process is as follows:
1) the first terminal initiates background data synchronization.
When the first terminal initiates background synchronization, it instructs the first storage device to read the data to be synchronized, taking the current position of the data cursor, aligned with the logical address of the dual live volume space, as the starting point and taking the granularity of the data cursor as the unit. For example, when the cursor granularity is 1MB, the first storage device locks the 1MB-aligned logical address starting from the position of the current data cursor and reads the data in that 1MB, and the third storage device locks the corresponding logical address of the same 1MB.
2) The first storage device transmits the read 1MB of data to the first terminal.
3) The first terminal sends a write request 1 to the second storage device in the second data center, where write request 1 requests that the 1MB of data be written to the same logical address in the second storage device. After the second storage device receives the 1MB of data, it locks the logical address corresponding to the 1MB of data and writes the 1MB of data to the corresponding logical address in the second storage device; meanwhile, the fourth storage device locks the same logical address corresponding to the 1MB of data and writes the 1MB of data to the corresponding logical address in the fourth storage device.
4) The second terminal generates a write request 2.
5) The second terminal sends write request 2 to the first storage device and the second storage device respectively. The first storage device and the second storage device each detect whether the logical address corresponding to write request 2 is the same as the currently locked 1MB logical address; if the addresses are completely different, write request 2 is executed directly.
6) When the logical address corresponding to write request 2 is partially or completely the same as the currently locked 1MB logical address, the first storage device and the second storage device acquire the identifier of the first terminal that triggered the locking of the logical address, and return a reject response carrying the identifier of the first terminal to the second terminal.
7) The second terminal forwards write request 2 to the first terminal after receiving the reject response.
8) After the second storage device finishes writing the 1MB of data, it releases the lock on the logical address corresponding to the 1MB and returns a response of successful writing to the first terminal.
9) The first terminal sends the write request 2 forwarded by the second terminal to the first storage device and the second storage device respectively.
After receiving write request 2, the first storage device and the second storage device process it.
Optionally, in the above background data synchronization case, since the amount of data synchronized at one time may be large, for example 10GB, directly locking the 10GB of data to perform data synchronization may affect other services that need to read and write that data segment in the storage system. To avoid this influence, when background data synchronization is carried out, the data to be synchronized is updated by means of the data cursor. When a data segment has been synchronized, the storage devices on both the reading side and the writing side release the lock on the logical address corresponding to that data segment, and lock the logical addresses, in the storage devices on both sides, corresponding to the next data segment following it, so as to synchronize that segment.
It should be noted that the data segments to be synchronized may or may not be contiguous in the dual live volume space. For example, suppose the data cursor granularity is 1MB, a total of 3MB needs to be synchronized, and the data to be synchronized is located at the 3MB, 5MB and 7MB positions of the dual live volume space, counting from the position with which the cursor is currently aligned. The data cursor first checks the 1MB and 2MB positions; finding no data that needs to be synchronized, it moves to the 3MB position. Because the data at that position needs to be synchronized, the cursor instructs the first storage device and the second storage device to lock the corresponding logical address and perform the data synchronization operation at that position. After the data at the 3MB position has been synchronized, the data cursor receives the completion responses from the first storage device and the second storage device, and then checks the 4MB, 5MB ... positions in sequence until all of the preset 3MB of data to be synchronized has been synchronized. The synchronization work then ends, and a response of successful synchronization is returned to the terminal that initiated it.
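A hedged sketch of the cursor-driven synchronization loop described above is given below, reusing the Range type from the earlier sketch; cursorGranularity, needsSync, readLocked and writeLocked are hypothetical stand-ins for the 1MB granularity and for the locked read and write operations of the source and target storage devices.

const cursorGranularity = 1 << 20 // example granularity of 1MB

// syncWithCursor walks the dual live volume space one cursor-granularity
// segment at a time. Only segments that actually need synchronization are
// locked, read on the source side and written on the target side, so the
// rest of the volume stays available to other read and write services.
func syncWithCursor(start, volumeSize, totalToSync uint64,
    needsSync func(offset uint64) bool,
    readLocked func(seg Range) []byte,
    writeLocked func(seg Range, data []byte)) {

    synced := uint64(0)
    for offset := start; offset < volumeSize && synced < totalToSync; offset += cursorGranularity {
        if !needsSync(offset) {
            continue // the cursor moves on without locking anything
        }
        seg := Range{Start: offset, Length: cursorGranularity}
        data := readLocked(seg) // source side locks the segment, reads, then releases the lock
        writeLocked(seg, data)  // target side locks the same segment, writes, then releases the lock
        synced += cursorGranularity
    }
}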
To sum up, in the write request processing method shown in the embodiments of the present application, a data write request is obtained by a first terminal, and the write request is sent to a first storage device and a second storage device respectively according to a storage system view; after the first storage device and the second storage device respectively receive the write request, each detects whether all or part of the addresses in the logical address corresponding to the write request are locked or are about to be locked; when the detection result is negative, the logical address corresponding to the write request is locked and the write request is executed, and meanwhile, in order to back up the data in the first storage device and the second storage device, the third storage device is instructed to perform the same operation as the first storage device and the fourth storage device is instructed to perform the same operation as the second storage device; when the detection result is positive, the identifier of a second terminal that triggered the locking, or imminent locking, of all or part of the logical address corresponding to the write request is returned to the first terminal that sent the write request, the first terminal forwards the write request to the second terminal, and after detecting that the logical address corresponding to the write request has been unlocked, the second terminal sends the write request to the first storage device and the second storage device respectively to instruct the two storage devices to execute the write request. In the present application, because a terminal in either data center does not need to request address locking across data centers before sending each write request, when a terminal in the backup data center sends a write request there is no delay from sending a lock request to receiving a lock notification, so that the processing performance of the backup data center is greatly improved while conflicts between the address of the write request and the addresses of other write requests or data synchronization requests are avoided.
In addition, according to the embodiments of the present application, when the storage device detects that the logical address corresponding to the current write request is partially or completely locked, the identifier of the terminal that locked the logical address is obtained, and the current write request is forwarded to that terminal, so that requests that would otherwise produce a write conflict are converted into serial processing, and write requests with conflicting logical addresses are guaranteed to be executed without errors while the time delay of the distributed dual-active storage system is reduced.
The following are embodiments of an apparatus of the present application that may be used to perform embodiments of the methods of the present application. For details which are not disclosed in the device embodiments of the present application, reference is made to the method embodiments of the present application.
Fig. 8 is a block diagram of a write request processing apparatus according to an embodiment of the present application, where the write request processing apparatus may be implemented, by hardware circuits or by a combination of software and hardware, as part or all of a write request processing device. The write request processing apparatus may include: a write request sending unit 801, a reject response receiving unit 802, and a forwarding unit 803.
A write request sending unit 801, configured to perform the same or similar steps as step 402 described above.
A reject response receiving unit 802 for performing the same or similar steps as in the above-described step 405.
A forwarding unit 803, configured to perform the same or similar steps as step 406.
Fig. 9 is a block diagram of another write request processing apparatus according to an embodiment of the present application, where the write request processing apparatus may be implemented, by hardware circuits or by a combination of software and hardware, as part or all of a storage device. The write request processing apparatus may include: a write request receiving unit 901, a reject response sending unit 902, a processing locking unit 903, and a lock indication sending unit 904.
A write request receiving unit 901, configured to perform the same or similar steps as step 403 described above, that is, the step in which the storage device receives a write request.
A reject response sending unit 902, configured to perform the same or similar steps as step 404 described above (including steps 404a, 404b, 404c, 404d, and 404e).
A processing locking unit 903, configured to perform the same or similar steps as those in step 404 in which the storage device processes the target service request and locks the logical address of the target service request while processing it, or the same or similar steps as steps 409 and 410.
A lock indication sending unit 904, configured to perform the same or similar steps as step 411.
It should be noted that: when the write request processing apparatus provided in the foregoing embodiments stores data, the division into the above functional modules is merely used as an example for description; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the write request processing apparatus provided in the foregoing embodiments and the embodiments of the write request processing method belong to the same concept; for the specific implementation process, reference may be made to the method embodiments, and details are not described herein again.
The above example numbers of the present application are for description only and do not represent the merits of the examples.
It will be understood by those of ordinary skill in the art that all or part of the steps executed by the processor to implement the above embodiments may be implemented by hardware, or may be implemented by instructions controlling the associated hardware, and the instructions may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only one specific embodiment that can be realized by the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (27)

1. A terminal for use in a storage system comprising a first data center containing a first storage device and a second data center containing a second storage device, the first data center and/or the second data center further containing at least two terminals including the terminal, the terminal comprising a processor and a communication interface configured to be controlled by the processor,
the processor is configured to send a write request to the first storage device and the second storage device through the communication interface, where the write request is used to instruct the first storage device and the second storage device to write the same data in the same logical address of their respective storage spaces;
the processor is configured to receive, through the communication interface, a rejection response returned by a storage device, where the storage device is the first storage device or the second storage device, and the rejection response is used to indicate that all or part of the logical addresses corresponding to the write request are locked or are about to be locked;
the processor is configured to forward the write request to a second terminal through the communication interface, so that the second terminal sends the write request to the first storage device and the second storage device respectively after the lock of all or part of the addresses is released, where the second terminal is a terminal that triggers locking of all or part of the addresses in the at least two terminals.
2. The terminal of claim 1,
the processor is further configured to obtain an identifier of the second terminal carried in the rejection response, and forward the write request to the second terminal through the communication interface according to the identifier of the second terminal.
3. A storage device, for use in a storage system comprising a first data center and a second data center, the first data center including a first storage device, the second data center including a second storage device, the first data center and/or the second data center further including at least two terminals, the device being any one of the first storage device and the second storage device, the device comprising a processor and a communication interface, the communication interface configured to be controlled by the processor,
the processor is used for receiving a write request sent by a first terminal of the at least two terminals through the communication interface;
the processor is configured to send a rejection response to the first terminal through the communication interface when all or part of addresses in the logical address corresponding to the write request are locked or are about to be locked, so that the first terminal forwards the write request to a second terminal, where the second terminal is a terminal that triggers locking of all or part of addresses in the at least two terminals;
the processor is configured to receive, through the communication interface, the write request sent by the second terminal after the lock on all or part of the addresses is released.
4. The apparatus of claim 3, wherein the processor is further configured to, when sending the rejection response to the first terminal through the communication interface:
detect whether the locked addresses in the logical addresses corresponding to the storage space of the storage device contain all or part of the addresses;
and if the detection result is that the locked addresses contain all or part of the addresses, send the rejection response to the first terminal through the communication interface.
5. The device of claim 4, wherein the processor is further configured to:
if the locked addresses do not contain all or part of the addresses, detect whether a target service request sent by the second terminal has been received, wherein a logical address corresponding to the target service request and the logical address corresponding to the write request contain the same address;
if the detection result is that the target service request has been received, acquire the priority of the write request and the priority of the target service request through the communication interface;
and when the priority of the target service request is higher than that of the write request, send the rejection response to the first terminal through the communication interface.
6. The apparatus of claim 5,
the target service request is used for requesting to write data to the logical address corresponding to the target service request, and the logical address corresponding to the target service request includes all or part of the addresses;
or,
the target service request is used for requesting to synchronize data in a logical address corresponding to the target service request to another storage device of the first storage device and the second storage device, and the logical address corresponding to the target service request includes all or part of the address.
7. The apparatus of claim 5,
the processor is further configured to lock a logical address corresponding to the target service request in a storage space of the storage device in a process of processing the target service request when the priority of the target service request is higher than the priority of the write request.
8. The apparatus according to any one of claims 3 to 7,
the processor is further configured to send a target service request and locking indication information to a corresponding backup storage device through the communication interface, where the locking indication information is used to indicate that the backup storage device locks a logical address corresponding to the target service request in a storage space of the backup storage device in a process of processing the target service request, and the target service request is a request sent by the second terminal.
9. The apparatus according to any one of claims 3 to 7,
the processor is further configured to return the rejection response including the identifier of the second terminal to the first terminal through the communication interface.
10. A write request processing apparatus, configured to be used in a storage system including a first data center and a second data center, where the first data center includes a first storage device, the second data center includes a second storage device, and the first data center and/or the second data center further includes at least two terminals, and the apparatus is applied to one terminal of the at least two terminals, and the apparatus includes:
a write request sending unit, configured to send a write request to the first storage device and the second storage device, where the write request is used to instruct the first storage device and the second storage device to write the same data in the same logical address of their respective storage spaces;
a reject response receiving unit, configured to receive a reject response returned by a storage device, where the storage device is the first storage device or the second storage device, and the reject response is used to indicate that all or part of the logical addresses corresponding to the write request are locked or are about to be locked;
a forwarding unit, configured to forward the write request to a second terminal, so that the second terminal sends the write request to the first storage device and the second storage device respectively after the lock of all or part of the addresses is released, where the second terminal is a terminal that triggers locking of all or part of the addresses in the at least two terminals.
11. The apparatus of claim 10,
the forwarding unit is further configured to obtain an identifier of the second terminal carried in the rejection response, and forward the write request to the second terminal according to the identifier of the second terminal.
12. A write request processing apparatus, configured to be used in a storage system including a first data center and a second data center, where the first data center includes a first storage device, the second data center includes a second storage device, and the first data center and/or the second data center further includes at least two terminals, and the apparatus is applied to any one of the first storage device and the second storage device, and the apparatus includes:
a write request receiving unit, configured to receive a write request sent by a first terminal of the at least two terminals;
a reject response sending unit, configured to send a reject response to the first terminal when all or part of the addresses in the logical address corresponding to the write request are locked or are about to be locked, so that the first terminal forwards the write request to a second terminal, where the second terminal is a terminal that triggers locking of all or part of the addresses in the at least two terminals;
the write request receiving unit is further configured to receive the write request sent by the second terminal after the lock of all or part of the addresses is released.
13. The apparatus of claim 12, wherein the reject response sending unit is specifically configured to:
detect whether the locked addresses in the logical addresses corresponding to the storage space of the storage device contain all or part of the addresses;
and if the detection result is that the locked addresses contain all or part of the addresses, send the rejection response to the first terminal.
14. The apparatus of claim 13, wherein the reject response sending unit is further configured to:
if the detection result is that the locked addresses do not contain all or part of the addresses, detect whether a target service request sent by the second terminal has been received, wherein a logical address corresponding to the target service request and the logical address corresponding to the write request contain the same address;
if the detection result is that the target service request has been received, acquire the priority of the write request and the priority of the target service request;
and when the priority of the target service request is higher than that of the write request, send the rejection response to the first terminal.
15. The apparatus of claim 14,
the target service request is used for requesting to write data to the logical address corresponding to the target service request, and the logical address corresponding to the target service request includes all or part of the addresses;
or,
the target service request is used for requesting to synchronize data in a logical address corresponding to the target service request to another storage device of the first storage device and the second storage device, and the logical address corresponding to the target service request includes all or part of the address.
16. The apparatus of claim 14, further comprising:
a processing locking unit, configured to process the target service request when the priority of the target service request is higher than that of the write request, and to lock the logical address corresponding to the target service request in the storage space of the storage device in the process of processing the target service request.
17. The apparatus of any one of claims 12 to 16, further comprising:
a locking indication sending unit, configured to send a target service request and locking indication information to a corresponding backup storage device, where the locking indication information is used to indicate that the backup storage device locks a logical address corresponding to the target service request in a storage space of the backup storage device in a process of processing the target service request, and the target service request is a request sent by the second terminal.
18. The apparatus of any one of claims 12 to 16,
the reject response sending unit is configured to return the reject response including the identifier of the second terminal to the first terminal.
19. A write request processing method, used in a storage system including a first data center and a second data center, where the first data center includes a first storage device, the second data center includes a second storage device, and the first data center and/or the second data center further includes at least two terminals, and the method is performed by a first terminal of the at least two terminals, and the method includes:
the first terminal sends write requests to the first storage device and the second storage device respectively, wherein the write requests are used for indicating the first storage device and the second storage device to write the same data in the same logical address of the storage space of the first storage device and the second storage device;
the first terminal receives a rejection response returned by a storage device, wherein the storage device is the first storage device or the second storage device, and the rejection response is used for indicating that all or part of the logical addresses corresponding to the write request are locked or are about to be locked;
and the first terminal forwards the write request to a second terminal, so that the second terminal sends the write request to the first storage device and the second storage device respectively after the locking of all or part of the addresses is released, and the second terminal is a terminal which triggers the locking of all or part of the addresses in the at least two terminals.
20. The method of claim 19, wherein forwarding the write request to the second terminal by the first terminal comprises:
the first terminal acquires the identifier of the second terminal carried in the rejection response;
and the first terminal forwards the write request to the second terminal according to the identifier of the second terminal.
21. A write request processing method, used in a storage system including a first data center and a second data center, where the first data center includes a first storage device, the second data center includes a second storage device, and the first data center and/or the second data center further includes at least two terminals, the method being performed by any one of the first storage device and the second storage device, the method including:
the storage device receives a write request sent by a first terminal of the at least two terminals;
when all or part of the addresses in the logical addresses corresponding to the write request are locked or are about to be locked, the storage device sends a rejection response to the first terminal, so that the first terminal forwards the write request to a second terminal, wherein the second terminal is a terminal which triggers locking of all or part of the addresses in the at least two terminals;
and the storage equipment receives the write request sent by the second terminal after the locking of all or part of the addresses is released.
22. The method of claim 21, wherein when all or part of the logical addresses corresponding to the write request are locked or are about to be locked, the storage device sends a reject response to the first terminal, and the method comprises:
the storage device detects whether the locked address comprises all or part of addresses in the logical addresses corresponding to the storage space of the storage device;
and if the detection result is that the locked address contains all or part of the address, the storage device sends the rejection response to the first terminal.
23. The method according to claim 22, wherein when all or part of the logical addresses corresponding to the write request are locked or are about to be locked, the storage device sends a reject response to the first terminal, and further comprising:
if the detection result is that the locked address does not contain all or part of the address, the storage device detects whether a target service request sent by the second terminal is received, and the logical address corresponding to the target service request and the logical address corresponding to the write request contain the same address;
if the detection result is that the target service request is received, the storage device acquires the priority of the write request and the priority of the target service request;
and when the priority of the target service request is higher than that of the write request, the storage equipment sends the rejection response to the first terminal.
24. The method of claim 23,
the target service request is used for requesting to write data in a logical address corresponding to the target service request, and the logical address corresponding to the target service request comprises all or part of the addresses;
or,
the target service request is used for requesting to synchronize data in a logical address corresponding to the target service request to another storage device of the first storage device and the second storage device, and the logical address corresponding to the target service request includes all or part of the address.
25. The method of claim 23, further comprising:
and when the priority of the target service request is higher than that of the write request, the storage device processes the target service request, and locks the logical address corresponding to the target service request in the storage space of the storage device in the process of processing the target service request.
26. The method of any one of claims 21 to 25, further comprising:
the storage device sends a target service request and locking indication information to a corresponding backup storage device, the locking indication information is used for indicating the backup storage device to lock a logical address corresponding to the target service request in a storage space of the backup storage device in the process of processing the target service request, and the target service request is a request sent by the second terminal.
27. The method according to any of claims 21 to 25, wherein the sending, by the storage device, a rejection response to the first terminal comprises:
and the storage equipment returns the rejection response containing the identification of the second terminal to the first terminal.