CN111274205A - Data block access method and device and storage medium - Google Patents

Data block access method and device and storage medium

Info

Publication number
CN111274205A
CN111274205A (application CN202010014783.4A; granted publication CN111274205B)
Authority
CN
China
Prior art keywords
node
fault
data block
storage
failed
Prior art date
Legal status
Granted
Application number
CN202010014783.4A
Other languages
Chinese (zh)
Other versions
CN111274205B (en)
Inventor
周应超
Current Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Original Assignee
Beijing Pinecone Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Pinecone Electronics Co Ltd filed Critical Beijing Pinecone Electronics Co Ltd
Priority to CN202010014783.4A priority Critical patent/CN111274205B/en
Publication of CN111274205A publication Critical patent/CN111274205A/en
Application granted granted Critical
Publication of CN111274205B publication Critical patent/CN111274205B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: Physics; G06: Computing, calculating or counting; G06F: Electric digital data processing
    • G06F 16/13: File access structures, e.g. distributed indices (under G06F 16/10, File systems; file servers)
    • G06F 11/0709: Error or fault processing not based on redundancy, the processing taking place in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems (under G06F 11/07, Responding to the occurrence of a fault, e.g. fault tolerance)
    • G06F 11/0793: Remedial or corrective actions
    • G06F 16/14: Details of searching files based on file metadata
    • G06F 16/1734: Details of monitoring file system events, e.g. by the use of hooks, filter drivers, logs
    • G06F 16/182: Distributed file systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Library & Information Science (AREA)
  • Computer Hardware Design (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a data block access method and device, and a storage medium. The data block access method comprises the following steps: determining a storage node of a target data block to be accessed; determining a failed node according to fault identification information; and accessing the target data block from a storage node other than the failed node. In the embodiments of the application, compared with using a single file as the granularity of fault marking, the whole storage node is used as the granularity, so that once one storage node is found to have failed, all subsequent data block accesses preferentially avoid the failed node and read data from other normal storage nodes, which reduces the data reading delay and increases the data reading speed.

Description

Data block access method and device and storage medium
Technical Field
The present disclosure relates to the field of data storage technologies, and in particular, to a data block access method and apparatus, and a storage medium.
Background
A distributed file system comprises a metadata server, data servers, and clients. In a typical read-write flow, a client initiates a metadata query to the metadata server, and the metadata server returns a list of data server addresses to the client. The client then initiates the actual data read and write operations to those data servers.
To achieve high availability, the data of a distributed file system typically exists as multiple copies stored on separate data servers. A write operation writes the data to all storage nodes holding a copy, and a read operation selects one of the nodes holding a copy to read from. To improve read performance, when a file is opened, the node information of the storage nodes (i.e., data servers) of all of the file's data blocks is obtained from the metadata server and cached, so that frequent interaction with the metadata server is not required.
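The read flow and per-file caching described above can be sketched as follows. All class and method names here are illustrative assumptions for this sketch, not part of the patent:

```python
class MetadataServer:
    """Illustrative stand-in for the metadata server: it only knows, for each
    (file_id, block_index), which storage nodes hold a copy of that block."""

    def __init__(self, block_locations):
        self.block_locations = block_locations

    def locate(self, file_id, block_index):
        return list(self.block_locations[(file_id, block_index)])


class Client:
    """Illustrative client that caches block locations at open time, so that
    subsequent block reads need no further metadata-server interaction."""

    def __init__(self, metadata_server):
        self.meta = metadata_server
        self.location_cache = {}

    def open(self, file_id, num_blocks):
        # Obtain and cache the storage-node list for every block of the file.
        for i in range(num_blocks):
            self.location_cache[(file_id, i)] = self.meta.locate(file_id, i)

    def read_block(self, file_id, block_index):
        # Serve the read from the cached list: pick one replica's node.
        replicas = self.location_cache[(file_id, block_index)]
        return replicas[0]
```

With this caching, only `open` talks to the metadata server; every `read_block` is answered from the local cache, which is exactly why stale failure information in the cache matters in what follows.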
However, in the related art, data block reads frequently suffer large delays or a low success rate.
Disclosure of Invention
The disclosure provides a data access method and device and a storage medium.
A first aspect of the present disclosure provides a data block access method, including:
determining a storage node of a target data block to be accessed;
determining a fault node according to the fault identification information;
accessing the target data block from the storage node other than the failed node.
Based on the above scheme, the fault identification information includes: a failed node list containing node identifications of failed nodes;
the determining of the failed node according to the fault identification information includes:
determining the failed node according to the failed node list.
Based on the above scheme, the method further comprises:
acquiring the repair condition information of the fault node;
and deleting, according to the repair condition information, the node identification of the storage node whose fault has been eliminated from the fault identification information.
Based on the above scheme, the method further comprises:
acquiring node state information of each storage node in a file system;
and generating fault identification information for identifying the fault node according to the node state information.
Based on the above scheme, the acquiring node state information of each storage node in the file system includes:
acquiring file access information of each storage node in the file system;
and the generating of the fault identification information for identifying the failed node according to the node state information includes:
when access to the data blocks of one or more files on one of the storage nodes fails, determining the corresponding storage node to be the failed node.
A second aspect of the embodiments of the present application provides a data block access apparatus, including:
the first determining module is used for determining a storage node of a target data block to be accessed;
the second determining module is used for determining a fault node according to the fault identification information;
and the access module is used for accessing the target data block from the storage nodes except the failed node.
Based on the above scheme, in the apparatus, the fault identification information includes: a failed node list containing node identifications of failed nodes;
and the second determining module is configured to determine the failed node according to the failed node list.
Based on the above scheme, the apparatus further comprises:
the first acquisition module is used for acquiring the repair condition information of the fault node;
and the updating module is used for deleting the node identification of the storage node with the removed fault from the fault identification information according to the repair condition information.
Based on the above scheme, the apparatus further comprises:
the second acquisition module is used for acquiring node state information of each storage node in the file system;
and the generating module is used for generating fault identification information for identifying the fault node according to the node state information.
Based on the above scheme, the second obtaining module is configured to obtain file access information of each storage node in the file system;
the generating module is specifically configured to determine that the corresponding storage node is the failed node when accessing the data block of one or more files on one storage node fails.
A third aspect of the embodiments of the present disclosure provides a data block access apparatus, including a processor, a memory, and an executable program stored on the memory and capable of being executed by the processor, where the processor executes the steps of the data block access method according to the first aspect when executing the executable program.
A fourth aspect of the embodiments of the present disclosure provides a storage medium on which an executable program is stored, where the executable program, when executed by a processor, implements the steps of the data block access method of the first aspect.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: the storage nodes of a target data block to be accessed are determined; failed nodes are determined according to the fault identification information; and the target data block is accessed from a storage node other than the failed nodes. Compared with using only a single file as the granularity of fault marking, using the whole storage node as the granularity means that once a storage node fault is found, all subsequent data block accesses of any file can preferentially avoid the failed node and read data from other normal storage nodes. This reduces the large access delays and low access success rates caused by continuing to request data from a failed node during subsequent file accesses, thereby improving the access success rate and access speed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flow chart illustrating a data block access method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a data block access method according to an exemplary embodiment.
Fig. 3 is a flow chart illustrating a data block access method according to an exemplary embodiment.
Fig. 4 is a flow chart illustrating a data block access method according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating the structure of a data block access apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating the structure of a data block access apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Fig. 1 shows the process by which a client accesses the data blocks of file 3. First, the list of nodes holding each data block of file 3 is obtained and cached. As shown in Fig. 1, file 3 has a data block whose copies reside on storage node b, storage node c, and storage node e.
Fig. 1 illustrates an exemplary process for accessing the 1st data block of file 3 in a distributed file system, which may include:
1: the client accesses the metadata server;
2: the metadata server returns to the client an address list for the data blocks the client requests to access;
3: the client requests each data block of file 3 from data server b (i.e., storage node b) according to the address list returned by the metadata server;
4: data server b returns the requested data blocks to the client.
When a client opens a file for reading, it acquires and caches the data node list corresponding to each data block of the file. If a certain node has failed, accessing it results in a data-return timeout or other error information. At that point the client may select another storage node that contains a copy. However, such failures are all recorded at the granularity of one file, rather than of the entire failed node.
As shown in fig. 2, the present embodiment provides a data block access method, including:
S11: determining a storage node of a target data block to be accessed;
S12: determining a failed node according to the fault identification information;
S13: accessing the target data block from a storage node other than the failed node.
The data block access method provided by this embodiment can be applied to a distributed file system, for example to a client of the distributed file system. The client stores fault identification information for global failed nodes. In this way, before accessing a target data block, the client determines on which storage nodes copies of the target data block are located.
In some embodiments, S11 may include: accessing a metadata server of the distributed file system, receiving from the metadata server a node list for the target data block, and determining the storage nodes of the target data block to be accessed by reading the node list.
In S12, the fault identification information is obtained, for example from the metadata server, and it is determined which failed nodes the distributed file system currently contains.
In S13, the storage nodes holding the target data block are compared against the failed nodes. If none of the target data block's storage nodes is a failed node, the target data block may be read from any of them; if some of them are failed nodes, the target data block is read from the storage nodes other than the failed nodes.
Typically, in a distributed file system, a data block will have multiple copies stored on multiple storage nodes for distributed access and system stability maintenance.
It is also necessary to determine which storage nodes in the current distributed system are failed nodes. The failure state here is a state in which data cannot be read: for example, a storage node is down, its network is disconnected, or it suffers a system fault, any of which can take the entire storage node out of service. Access to any data block on that storage node will then fail.
In view of this, in order to reduce data accesses to failed nodes, global fault identification information is generated in the embodiments of the present application and broadcast to the clients of the distributed file system.
Therefore, after the failed nodes are determined, the target data is read from a normal storage node, which on the one hand significantly improves the read success rate of the target data, and on the other hand greatly reduces the access delay of the target data block.
For example, consider a distributed file system that marks failures only at single-file granularity, as shown in Fig. 3. Suppose storage node b (hereinafter, node b) fails. When the client accesses data block 2 of file 1, it finds that the access fails and marks node b as failed for file 1; but this mark is at the granularity of file 1, and node b is not treated as failed as a whole. Because the failure record is at file granularity (a failure of file 1 on node b), the client may, due to load balancing or proximity-based access in the distributed system, again access the data blocks of file 4 on node b, and that access also fails, giving file 4 a high access failure rate and large delay. As the client continues, when accessing file 2 it again tries the failed node b, the access fails once more, and only then is failure information recorded for file 2.
A method employing an embodiment of the present application is shown in Fig. 4. When the client finds that reading a data block of file 1 from node b fails, it generates fault identification information at node granularity and marks node b as failed. When the client subsequently accesses the data blocks of file 4 and file 2, it avoids node b and instead accesses them from node a, node c, or node d, which reduces the access delay of file 4 and file 2 and lowers the access failure rate.
The fault identification information includes: a failed node list containing node identifications of failed nodes;
S12 may include: determining the failed node according to the failed node list.
In some embodiments, the node identifications of the failed nodes are stored in a list, forming a global failure list.
In this way, after the node identifiers of the storage nodes storing the target data block are determined in S11, they are matched against the node identifiers in the failed node list, and any identifier that matches successfully is removed. In S13, one or more target nodes for accessing the target data block are selected from the remaining node identifiers, and an access request is sent to those target nodes, improving the access success rate and the access speed of the data block.
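The matching step above (filtering the target block's storage nodes against the failed node list, then picking from the remainder) can be sketched as follows; the helper name and the fallback behavior are illustrative assumptions, not from the patent:

```python
def choose_replicas(replica_nodes, failed_node_list):
    """Return the storage nodes of a target data block that are not in the
    global failed-node list (S12/S13). Falls back to the full replica list
    if every replica sits on a failed node, so the read can still be tried."""
    healthy = [n for n in replica_nodes if n not in failed_node_list]
    return healthy if healthy else list(replica_nodes)
```

For the Fig. 1 scenario, a block with copies on nodes b, c, and e while b is in the failed list would yield candidates c and e, so the access request never goes to b.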
In some embodiments, the method further comprises:
acquiring the repair condition information of a fault node;
and deleting, according to the repair condition information, the node identification of the storage node whose fault has been eliminated from the fault identification information.
If a storage node is identified as a failed node, the distributed file system tries to eliminate the fault of the failed node so that the storage node is repaired.
For example, the failed node is restarted to remove the failure caused by the repairable system failure or downtime of the failed node. As another example, a network reconnection of the failed node is performed to eliminate the network failure of the failed node.
According to the repair condition information, the node identification of a node whose fault has been eliminated is promptly removed from the fault identification information, so that the corresponding storage node can subsequently be accessed normally, improving the effective utilization of each node in the distributed file system and reducing resource waste.
In some embodiments, the repair condition information of the failed node is obtained periodically. For example, a probe packet is periodically sent to the failed node; if a response packet returned for the probe packet is received, whether the node's fault has been eliminated can be determined from the response packet. For example, if a response packet is successfully received within a predetermined time, the node's fault may be considered eliminated.
In some embodiments, obtaining the repair condition information of the failed node may include: receiving a heartbeat signal from each storage node. In this case, the reception status of a failed node's heartbeat signal is one kind of repair condition information. For example, if the heartbeat signal of a failed node is successfully received again, its fault can be considered eliminated.
In some embodiments, the method further comprises:
acquiring node state information of each storage node in a file system;
and generating fault identification information for identifying the fault node according to the node state information.
The node state information may include:
operation condition information of the storage node, for example operation state information and/or a heartbeat signal broadcast by the storage node itself. The operation condition information may further include file access condition information, for example whether a file access failure has occurred and the number of file access failures.
For example, if the file access condition information shows that one or more file access timeouts have occurred on a certain node, the corresponding storage node is determined to be a failed node.
For another example, if the file access condition information shows that one or more file accesses on a certain node returned access-error prompts or garbled file content, the corresponding storage node is determined to be a failed node.
In some embodiments, obtaining node state information of each storage node in the file system includes:
acquiring file access information of each storage node in a file system;
Generating the fault identification information may include: when access to the data blocks of one or more files on one storage node fails, determining the corresponding storage node to be a failed node.
For example, in some embodiments, the file distribution system provided by the embodiments of the present application also maintains failure information at file granularity. If, at a given moment, fewer than a predetermined number of files fail to be accessed on a node, the accessed files may be considered damaged, and failure information at file granularity is generated. If, at a given moment, at least the predetermined number of files fail on one node, the node is considered failed, and failure information at node granularity is generated. Clients may also exchange file-access failure information with one another and then determine, from the aggregated access statistics, whether a file is damaged or a whole node has failed. Multiple file access failures occurring on a node at one time can be treated as a node failure; otherwise, the event can be treated as a storage failure of a single file.
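The threshold rule above, splitting simultaneous access failures into whole-node failures versus single-file damage, might be sketched as follows; the function name, event shape, and the threshold of 3 are illustrative assumptions:

```python
from collections import Counter

def classify_failures(failure_events, node_threshold=3):
    """Split access-failure events observed at one moment into whole-node
    failures and single-file damage. `failure_events` is a list of
    (node_id, file_id) pairs."""
    per_node = Counter(node for node, _ in failure_events)
    # At least `node_threshold` failure events on one node at one moment:
    # treat the whole node as failed (node-granularity fault information).
    failed_nodes = {n for n, c in per_node.items() if c >= node_threshold}
    # Remaining failures are attributed to damage of the individual files
    # (file-granularity fault information).
    damaged_files = {(n, f) for n, f in failure_events if n not in failed_nodes}
    return failed_nodes, damaged_files
```

With three files failing on node b at once and one file failing on node c, this marks node b as failed while recording only file-level damage for node c.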
In some embodiments, the metadata server of the distributed file system stores the fault identification information and informs the client of it synchronously when issuing the storage node list to the client.
As shown in fig. 5, the present embodiment provides a data block access apparatus, including:
a first determining module 51, configured to determine a storage node of a target data block to be accessed;
a second determining module 52, configured to determine a faulty node according to the fault identification information;
and an accessing module 53, configured to access the target data block from a storage node other than the failed node.
In some embodiments, the first determining module 51, the second determining module 52, and the accessing module 53 may all be program modules; the program module, when executed by the processor, is capable of identifying a storage node and a failed node, and accessing a target data block from a storage node other than the failed node.
In other embodiments, the first determining module 51, the second determining module 52, and the access module 53 may be combined software-hardware modules; such a combined module may comprise various programmable arrays, including but not limited to complex programmable logic devices or field programmable gate arrays.
In still other embodiments, the first determining module 51, the second determining module 52, and the access module 53 may be pure hardware modules; a pure hardware module may comprise an application-specific integrated circuit.
In some embodiments, the fault identification information includes: a failed node list containing node identifications of failed nodes;
and the second determining module 52 is configured to determine the failed node according to the failed node list.
In some embodiments, the apparatus further comprises:
the first acquisition module is used for acquiring the repair condition information of the fault node;
and the updating module is used for deleting the node identification of the storage node with the removed fault from the fault identification information according to the repair condition information.
In some embodiments, the apparatus further comprises:
the second acquisition module is used for acquiring node state information of each storage node in the file system;
and the generating module is used for generating fault identification information for identifying the fault node according to the node state information.
In some embodiments, the second obtaining module is configured to obtain file access information of each storage node in the file system;
the generation module is specifically configured to determine that a corresponding storage node is a failed node when accessing data blocks of one or more files on one storage node fails.
Several specific examples are provided below in connection with any of the embodiments described above:
example 1:
this example is a detection mechanism for globally failed nodes. When a read of a file detects a failed node, it adds the failed node to a global failed node list, and when a read of another file attempts to access the data node, it detects whether it is in the global failed node list, and if so, it switches directly to another node. In order to reduce the impact of temporary failures (such as network jitter or data node restart maintenance, etc.), each node at the head of the global failure list is detected for its survival and removed from the head of the failed node if the node is found to become active again. The detection period of each fault node is increased along with the increase of the detection times, so that the waste of system resources caused by frequent detection of real fault nodes can be avoided.
The global fault list is a kind of fault identification information.
Example 2:
if the data blocks of multiple files have copies on the same node, when the node fails, the access to each file will repeatedly try to acquire data from the failed node, and the failed node is marked as the failed node. This increases the latency of each file data access containing the failed node.
In this example, a single global failed-node list is built at the client, instead of a separate record per file. Before each data block access, the node to be accessed is looked up in the global failed-node list; if it is found there, it is skipped and another node holding a copy of the data block is selected. In this way, once one file's access discovers that a node has failed, data accesses of other files avoid that node, which improves the performance of accessing the other files and reduces the delay of acquiring data.
For the scenario of Fig. 1, suppose the client accesses file 1, file 2, and file 4, and node b fails at a certain point in time. Fig. 3 shows the state of the client cache after a period of time under the conventional implementation, and Fig. 4 shows the client's information after the global failed-node list of this example is adopted.
In the implementation of this example, to handle repair of a failed node (for example, the node restarting and recovering), a background thread scans the failed-node list and probes each node to determine whether it has been repaired; if a node is detected to be repaired, it is deleted from the list. This prevents repeated futile attempts on a failed node when accessing the data of different files during a data node failure.
The embodiment of the present disclosure provides a data block access apparatus, which includes a processor, a memory, and an executable program stored on the memory and capable of being executed by the processor, and when the processor executes the executable program, the data block access method provided by any of the foregoing technical solutions is executed. For example, fig. 6 is a block diagram illustrating a data block access device 800 in accordance with an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 may include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 806 provides power to the various components of device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the apparatus 800. For example, the sensor assembly 814 may detect the open/closed state of the apparatus 800 and the relative positioning of components, such as the display and keypad of the apparatus 800; it may also detect a change in position of the apparatus 800 or of a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, communications component 816 further includes a Near Field Communications (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Embodiments of the present disclosure also provide a non-transitory computer-readable storage medium, which may be referred to simply as a storage medium. The non-transitory computer readable storage medium has stored thereon an executable program. The instructions in the storage medium, when executed by a processor of a mobile terminal, enable the terminal to perform a data block access method, the method comprising:
determining a storage node of a target data block to be accessed;
determining a fault node according to the fault identification information;
the target data block is accessed from a storage node other than the failed node.
In some embodiments, the fault identification information includes: a failed node list containing node identifications of failed nodes;
the determining a fault node according to the fault identification information includes:
and determining the fault node according to the fault node list.
In some embodiments, the method further comprises:
acquiring the repair condition information of a fault node;
and deleting the node identification of the storage node with the eliminated fault from the fault identification information according to the repair condition information.
In some embodiments, the method further comprises:
acquiring node state information of each storage node in a file system;
and generating fault identification information for identifying the fault node according to the node state information.
In some embodiments, obtaining node state information of each storage node in the file system includes:
acquiring file access information of each storage node in a file system;
the generating of fault identification information for identifying the fault node according to the node state information includes:
when access to data blocks of one or more files on a storage node fails, determining the corresponding storage node to be a failed node.
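The recap above — a failed-node list generated from observed access outcomes, with repaired nodes deleted again — can be sketched as a small container class. The `FaultTracker` name and its methods are illustrative, not taken from the patent:

```python
class FaultTracker:
    """Illustrative holder of the 'fault identification information':
    a failed-node list built from observed file-access outcomes."""

    def __init__(self):
        self.failed_node_list = []   # node identifications of failed nodes

    def record_access(self, node_id, ok):
        # A failed data-block access on a node marks that node as a failed node.
        if not ok and node_id not in self.failed_node_list:
            self.failed_node_list.append(node_id)

    def record_repair(self, node_id):
        # Once the fault is known to be eliminated, delete the node identification.
        if node_id in self.failed_node_list:
            self.failed_node_list.remove(node_id)

    def usable(self, replicas):
        # Storage nodes other than the failed nodes.
        return [n for n in replicas if n not in self.failed_node_list]
```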
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (12)

1. A method for accessing a block of data, comprising:
determining a storage node of a target data block to be accessed;
determining a fault node according to the fault identification information;
accessing the target data block from the storage node other than the failed node.
2. The method of claim 1, wherein the fault identification information comprises: a failed node list containing node identifications of failed nodes;
the determining the fault node according to the fault identification information includes:
and determining the fault node according to the fault node list.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring the repair condition information of the fault node;
and deleting the node identification of the storage node with the eliminated fault from the fault identification information according to the repair condition information.
4. The method according to any one of claims 1 or 2, further comprising:
acquiring node state information of each storage node in a file system;
and generating fault identification information for identifying the fault node according to the node state information.
5. The method according to claim 4, wherein the obtaining node status information of each storage node in the file system comprises:
acquiring file access information of each storage node in the file system;
the generating of fault identification information for identifying the fault node according to the node state information comprises:
when one of the storage nodes fails to access data blocks of one or more files, the corresponding storage node is determined to be the failed node.
6. A data block access apparatus, comprising:
the first determining module is used for determining a storage node of a target data block to be accessed;
the second determining module is used for determining a fault node according to the fault identification information;
and the access module is used for accessing the target data block from the storage nodes except the failed node.
7. The apparatus of claim 6, further comprising:
the fault identification information includes: a failed node list containing node identifications of failed nodes;
and the second determining module is used for determining the fault node according to the fault node list.
8. The apparatus of claim 6 or 7, further comprising:
the first acquisition module is used for acquiring the repair condition information of the fault node;
and the updating module is used for deleting the node identification of the storage node with the removed fault from the fault identification information according to the repair condition information.
9. The apparatus of any one of claims 6 or 7, further comprising:
the second acquisition module is used for acquiring node state information of each storage node in the file system;
and the generating module is used for generating fault identification information for identifying the fault node according to the node state information.
10. The apparatus according to claim 9, wherein the second obtaining module is configured to obtain file access information of each storage node in the file system;
the generating module is specifically configured to determine that the corresponding storage node is the failed node when accessing the data block of one or more files on one storage node fails.
11. A data block accessing device comprising a processor, a memory and an executable program stored on the memory and capable of being executed by the processor, wherein the processor executes the executable program to perform the steps of the data block accessing method according to any one of claims 1 to 5.
12. A storage medium on which an executable program is stored, the executable program when executed by a processor implementing the steps of the data block access method according to any one of claims 1 to 5.
CN202010014783.4A 2020-01-07 2020-01-07 Data block access method and device and storage medium Active CN111274205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010014783.4A CN111274205B (en) 2020-01-07 2020-01-07 Data block access method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010014783.4A CN111274205B (en) 2020-01-07 2020-01-07 Data block access method and device and storage medium

Publications (2)

Publication Number Publication Date
CN111274205A true CN111274205A (en) 2020-06-12
CN111274205B CN111274205B (en) 2024-03-26

Family

ID=71001565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010014783.4A Active CN111274205B (en) 2020-01-07 2020-01-07 Data block access method and device and storage medium

Country Status (1)

Country Link
CN (1) CN111274205B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112181314A (en) * 2020-10-28 2021-01-05 浪潮云信息技术股份公司 Distributed storage method and system
CN114625325A (en) * 2022-05-16 2022-06-14 阿里云计算有限公司 Distributed storage system and storage node offline processing method thereof
CN115454958A (en) * 2022-09-15 2022-12-09 北京百度网讯科技有限公司 Data processing method, device, equipment, system and medium based on artificial intelligence

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110258297A1 (en) * 2010-04-19 2011-10-20 Microsoft Corporation Locator Table and Client Library for Datacenters
CN102624542A (en) * 2010-12-10 2012-08-01 微软公司 Providing transparent failover in a file system
CN103778120A (en) * 2012-10-17 2014-05-07 腾讯科技(深圳)有限公司 Global file identification generation method, generation device and corresponding distributed file system
CN103942112A (en) * 2013-01-22 2014-07-23 深圳市腾讯计算机系统有限公司 Magnetic disk fault-tolerance method, device and system
CN104750757A (en) * 2013-12-31 2015-07-01 中国移动通信集团公司 Data storage method and equipment based on HBase
CN105531675A (en) * 2013-06-19 2016-04-27 日立数据系统工程英国有限公司 Decentralized distributed computing system
US9990253B1 (en) * 2011-03-31 2018-06-05 EMC IP Holding Company LLC System and method for recovering file systems without a replica

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110258297A1 (en) * 2010-04-19 2011-10-20 Microsoft Corporation Locator Table and Client Library for Datacenters
CN102624542A (en) * 2010-12-10 2012-08-01 微软公司 Providing transparent failover in a file system
US9990253B1 (en) * 2011-03-31 2018-06-05 EMC IP Holding Company LLC System and method for recovering file systems without a replica
CN103778120A (en) * 2012-10-17 2014-05-07 腾讯科技(深圳)有限公司 Global file identification generation method, generation device and corresponding distributed file system
CN103942112A (en) * 2013-01-22 2014-07-23 深圳市腾讯计算机系统有限公司 Magnetic disk fault-tolerance method, device and system
CN105531675A (en) * 2013-06-19 2016-04-27 日立数据系统工程英国有限公司 Decentralized distributed computing system
CN104750757A (en) * 2013-12-31 2015-07-01 中国移动通信集团公司 Data storage method and equipment based on HBase

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUANGQING ZHANG et al., "A tile-based scalable raster data management system based on HDFS", 2012 20th International Conference on Geoinformatics, 20 August 2012 (2012-08-20), pages 1-4 *
朱椤方 et al., "CIM-based detection algorithm for characteristic nodes of abnormal power grid faults in ice disasters", Bulletin of Science and Technology (《科技通报》), 30 June 2015 (2015-06-30), pages 226-228 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112181314A (en) * 2020-10-28 2021-01-05 浪潮云信息技术股份公司 Distributed storage method and system
CN114625325A (en) * 2022-05-16 2022-06-14 阿里云计算有限公司 Distributed storage system and storage node offline processing method thereof
CN115454958A (en) * 2022-09-15 2022-12-09 北京百度网讯科技有限公司 Data processing method, device, equipment, system and medium based on artificial intelligence
CN115454958B (en) * 2022-09-15 2024-03-05 北京百度网讯科技有限公司 Data processing method, device, equipment, system and medium based on artificial intelligence

Also Published As

Publication number Publication date
CN111274205B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
US11500744B2 (en) Method for primary-backup server switching, and control server
CN111274205B (en) Data block access method and device and storage medium
CN111737617B (en) Page resource loading method and device, electronic equipment and storage medium
CN112506553B (en) Upgrading method and device for data surface container of service grid and electronic equipment
US20170180805A1 (en) Method and electronic device for video follow-play
CN107463419B (en) Application restarting method and device and computer readable storage medium
CN108737588B (en) Domain name resolution method and device
CN114020615A (en) Method, device, equipment and storage medium for testing remote multi-active financial system
CN112883314B (en) Request processing method and device
CN111290882B (en) Data file backup method, data file backup device and electronic equipment
CN112632184A (en) Data processing method and device, electronic equipment and storage medium
CN109948012B (en) Serial number generation method and device and storage medium
CN112102009A (en) Advertisement display method, device, equipment and storage medium
CN111049943A (en) Method, device, equipment and medium for analyzing domain name
CN115134231B (en) Communication method, device and device for communication
CN116662101B (en) Fault restoration method of electronic equipment and electronic equipment
CN109582851B (en) Search result processing method and device
CN110119471B (en) Method and device for checking consistency of search results
CN114237497B (en) Distributed storage method and device
CN114116075A (en) Page parameter acquisition method and device, electronic equipment and storage medium
CN111277664A (en) Service migration method and device
CN116132524A (en) Method and device for pushing resources, electronic equipment and storage medium
CN115941472A (en) Resource updating method and device, electronic equipment and readable storage medium
CN114116382A (en) Data processing method, device, system, electronic equipment and storage medium
CN114140154A (en) Advertisement display method, device, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100085 unit C, building C, lin66, Zhufang Road, Qinghe, Haidian District, Beijing

Applicant after: Beijing Xiaomi pinecone Electronic Co.,Ltd.

Address before: 100085 unit C, building C, lin66, Zhufang Road, Qinghe, Haidian District, Beijing

Applicant before: BEIJING PINECONE ELECTRONICS Co.,Ltd.

GR01 Patent grant