CN111274205B - Data block access method and device and storage medium - Google Patents

Data block access method and device and storage medium Download PDF

Info

Publication number
CN111274205B
CN111274205B (application CN202010014783.4A; also published as CN111274205A)
Authority
CN
China
Prior art keywords
node
fault
data block
identification information
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010014783.4A
Other languages
Chinese (zh)
Other versions
CN111274205A (en)
Inventor
周应超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Pinecone Electronic Co Ltd
Original Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Pinecone Electronic Co Ltd filed Critical Beijing Xiaomi Pinecone Electronic Co Ltd
Priority to CN202010014783.4A priority Critical patent/CN111274205B/en
Publication of CN111274205A publication Critical patent/CN111274205A/en
Application granted granted Critical
Publication of CN111274205B publication Critical patent/CN111274205B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/13 File access structures, e.g. distributed indices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/0706 Error or fault processing not based on redundancy, the processing taking place on a specific hardware platform or in a specific software environment
    • G06F 11/0709 Error or fault processing not based on redundancy, the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/0793 Remedial or corrective actions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/14 Details of searching files based on file metadata
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/17 Details of further file system functions
    • G06F 16/1734 Details of monitoring file system events, e.g. by the use of hooks, filter drivers, logs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/18 File system types
    • G06F 16/182 Distributed file systems

Abstract

The disclosure relates to a data block access method and device and a storage medium. The data block access method comprises the following steps: determining a storage node of a target data block to be accessed; determining a failed node according to fault identification information; and accessing the target data block from the storage nodes other than the failed node. In the embodiments of the application, the whole storage node, rather than a single file, is used as the granularity of fault marking, so that once a storage node failure is found, all subsequent data block accesses can preferentially avoid the failed node and access other normal storage nodes to read the data, thereby reducing the data read latency and improving the data read rate.

Description

Data block access method and device and storage medium
Technical Field
The disclosure relates to the technical field of data storage, and in particular to a data block access method and device and a storage medium.
Background
A distributed file system includes metadata servers, data servers, and clients. In a typical read-write flow, the client first sends a metadata query to the metadata server, which returns a list of data server addresses to the client. The client then issues the actual data read and write operations to those data servers.
To achieve high data availability, the data of a distributed file system is typically stored as multiple copies. A write operation writes the data to all storage nodes holding a copy, and a read operation selects one of the nodes holding a copy of the data to read from. For read operations, to improve performance, the node information of the storage nodes (i.e., data servers) of all data blocks of a file is obtained from the metadata server and cached when the file is opened, so that frequent interaction with the metadata server is not required.
However, in the related art, it is frequently observed that data block reads suffer from high latency or a low success rate.
Disclosure of Invention
The disclosure provides a data block access method and device and a storage medium.
A first aspect of the present disclosure provides a data block access method, including:
determining a storage node of a target data block to be accessed;
determining a fault node according to the fault identification information;
the target data block is accessed from the storage nodes other than the failed node.
Based on the above scheme, the fault identification information includes: a list of failed nodes comprising node identifications of the failed nodes;
the determining the fault node according to the fault identification information comprises the following steps:
determining the fault node according to the fault node list.
Based on the above scheme, the method further comprises:
acquiring repair status information of the fault node;
and deleting the node identification of the storage node with the removed fault from the fault identification information according to the repair status information.
Based on the above scheme, the method further comprises:
acquiring node state information of each storage node in a file system;
and generating fault identification information for identifying the fault node according to the node state information.
Based on the above scheme, the obtaining node status information of each storage node in the file system includes:
acquiring file access information of each storage node in the file system;
generating fault identification information for identifying the fault node according to the node state information, including:
when accessing a data block of one or more files on one of the storage nodes fails, determining the corresponding storage node as the failed node.
A second aspect of an embodiment of the present application provides a data block access apparatus, including:
the first determining module is used for determining a storage node of the target data block to be accessed;
the second determining module is used for determining a fault node according to the fault identification information;
and the access module is used for accessing the target data block from the storage nodes except the fault node.
Based on the above scheme, the device further comprises:
the fault identification information includes: a list of failed nodes comprising node identifications of the failed nodes;
the second determining module is configured to determine the fault node according to the fault node list.
Based on the above scheme, the device further comprises:
the first acquisition module is used for acquiring the repair status information of the fault node;
and the updating module is used for deleting the node identification of the storage node with the removed fault from the fault identification information according to the repair status information.
Based on the above scheme, the device further comprises:
the second acquisition module is used for acquiring node state information of each storage node in the file system;
and the generating module is used for generating fault identification information for identifying the fault node according to the node state information.
Based on the above scheme, the second obtaining module is configured to obtain file access information of each storage node in the file system;
the generation module is specifically configured to determine that, when accessing a data block of one or more files on one storage node fails, the corresponding storage node is the failed node.
A third aspect of the disclosed embodiments provides a data block access apparatus, including a processor, a memory, and an executable program stored on the memory and capable of being executed by the processor, where the processor executes the steps of the data block access method of the first aspect.
A fourth aspect of the disclosed embodiments provides a storage medium having stored thereon an executable program which when executed by a processor implements the steps of the data block access method of the aforementioned first aspect.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: a storage node of a target data block to be accessed is determined; a failed node is determined according to fault identification information; and the target data block is accessed from the storage nodes other than the failed node. Compared with using a single file as the granularity of fault marking, the whole storage node is used as the granularity of fault marking, so that once a storage node failure is found, all subsequent accesses to data blocks of any file can preferentially avoid the failed node and access other normal storage nodes to read the data, which reduces the data read latency and improves the data read rate. In other words, when one node fails as a whole, the embodiments of the application avoid the high access latency and low access success rate that would be caused by still requesting access from the failed node when subsequent files are accessed, thereby improving both the access success rate and the access speed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating a method of data block access according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating a method of data block access according to an exemplary embodiment.
FIG. 3 is a flowchart illustrating a method of data block access, according to an example embodiment.
FIG. 4 is a flowchart illustrating a method of data block access, according to an example embodiment.
Fig. 5 is a block diagram illustrating a structure of a data block access apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating a structure of a data block access apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
Fig. 1 illustrates the process in which the client needs to access each data block of file 3. First, a node list for each data block of file 3 is obtained and cached. As shown in fig. 1, file 3 has a data block with copies on storage node b, storage node c, and storage node e.
FIG. 1 is a schematic process for accessing data block 1 of file 3 in a distributed file system, which may include:
1: the client accesses the metadata server;
2: the metadata server returns to the client the address list of the storage nodes holding the data blocks that the client requests to access;
3: the client requests access to the data block of file 3 from data server b (i.e., storage node b) according to the address list returned by the metadata server;
4: data server b returns the requested data block to the client.
When the client opens a file for a read operation, the data node list corresponding to each data block of the file is obtained and cached. If a certain node fails, accessing that node may time out or return other error information. At this point, the client selects another storage node that contains a copy. However, such failures are recorded at the granularity of one file, not of the whole failed node.
As shown in fig. 2, the present embodiment provides a data block access method, including:
s11: determining a storage node of a target data block to be accessed;
s12: determining a fault node according to the fault identification information;
s13: the target data block is accessed from a storage node other than the failed node.
The data block access method provided by this embodiment can be applied to a distributed file system, for example to a client of the distributed file system. The client stores fault identification information for global failed nodes. In this manner, before accessing the target data block, the client determines on which storage nodes the target data block has a stored copy.
In some embodiments, S11 may include: accessing a metadata server of the distributed file system, receiving from the metadata server a list of the nodes storing the target data block, and determining the storage nodes of the target data block to be accessed by reading the node list.
In S12, the fault identification information is obtained, for example received from the metadata server, and it is determined which failed nodes the distributed file system currently contains.
In S13, the storage nodes where the target data block is located are compared against the failed nodes. If the storage nodes of the target data block contain no failed node, the target data block can be read from any one of those storage nodes. If the storage nodes of the target data block contain a failed node, the target data block is read from the storage nodes remaining after the failed node is excluded.
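The client-side flow of S11 to S13 can be summarized in a short sketch. The Python snippet below is a minimal illustration under assumed interfaces (the `locate_block` call and the replica `read` method are hypothetical names chosen for the example), not the patented implementation:

```python
class BlockReader:
    """Minimal sketch of S11-S13: locate replicas, skip failed nodes, then read."""

    def __init__(self, metadata_client, failed_nodes):
        self.metadata_client = metadata_client  # assumed to return replica handles per block
        self.failed_nodes = failed_nodes        # global set of failed node identifiers

    def read_block(self, file_id, block_id):
        # S11: storage nodes holding a copy of the target data block
        replicas = self.metadata_client.locate_block(file_id, block_id)
        # S12/S13: drop replicas whose node identifier is marked as failed
        healthy = [r for r in replicas if r.node_id not in self.failed_nodes]
        # Fall back to the full replica list if every replica is marked as failed
        for replica in (healthy or replicas):
            try:
                return replica.read(file_id, block_id)
            except IOError:
                continue
        raise IOError("block %s of file %s unreadable on all replicas" % (block_id, file_id))
```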
Typically, in a distributed file system a data block has multiple copies stored on multiple storage nodes, both to distribute access and to maintain system stability.
At the same time, it is determined which storage nodes in the current distributed system are failed nodes in a failure state. A failure state here is a state in which data cannot be read, for example because the storage node is down, its network is disconnected, or its system has failed; any of these can cause the storage node to fail as a whole, so that access to every data block on that storage node fails.
In view of this, in order to reduce data accesses to the failed node, global fault identification information is generated in the embodiments of the application. The global fault identification information is broadcast and distributed to the clients of the distributed file system.
Therefore, after the failed node is determined, the target data is read from a normal storage node, which on the one hand significantly improves the read success rate of the target data and on the other hand greatly reduces the access latency of the target data block.
For example, consider a distributed file system, as shown in fig. 3, that only keeps failure records at the granularity of a single file. Suppose storage node (hereinafter, node) b fails. The client finds that access fails when accessing data block 2 of file 1 and marks the access failure of file 1 on node b, but the mark has the granularity of file 1 only, and node b is not regarded as failed as a whole. The terminal then continues to access a data block of file 4; because the earlier failure record has file granularity and only marks the failure of file 1 on node b, the terminal may, for reasons such as load balancing or nearby access in the distributed system, again request the data block of file 4 from node b, and that access fails as well. As a result, the access failure rate of file 4 is high and its latency is large. The terminal continues to access files, again reaches the failed node b when accessing file 2, fails again, and records the failure information of file 2.
By contrast, a method according to an embodiment of the application may operate as shown in fig. 4. When the terminal determines that reading a data block of file 1 from node b has failed, it can generate fault identification information with node granularity, marking the whole of node b as failed. When the terminal subsequently accesses the data blocks of file 4 and file 2, it avoids node b and instead accesses those data blocks from node a, node c, or node d, which reduces the access latency of file 4 and file 2 and lowers the access failure rate.
The fault identification information includes: a list of failed nodes comprising node identifications of the failed nodes;
s12 may include: and determining the fault node according to the fault node list.
In some embodiments, the node identities of the failed nodes will be stored in the form of a list, forming a global failure list.
In this way, after the node identifications of the storage nodes storing the target data block are determined in S11, the determined node identifications are matched against the node identifications in the failed-node list, and any node identification that matches successfully is removed. In S13, one or more target nodes are selected from the node identifications that remain after the matching ones are removed, and an access request is sent to the selected target nodes, thereby improving the access success rate and access speed of the data block.
In some embodiments, the method further comprises:
obtaining repair status information of a fault node;
and deleting the node identification of the storage node with the removed fault from the fault identification information according to the repair status information.
If a storage node is identified as a failed node, the distributed file system attempts to remove the fault on that node so as to restore the storage node.
For example, the failed node is restarted to clear a failure caused by a repairable system fault or downtime of the node. As another example, the failed node's network connection is re-established to eliminate a network failure.
According to the repair status information, the node identification of a node whose fault has been removed is promptly deleted from the fault identification information, so that the corresponding storage node can again be accessed normally, which improves the effective utilization of each node in the distributed file system and reduces resource waste.
In some embodiments, the repair status information of a failed node may be obtained periodically. For example, a probe packet is periodically sent to the failed node, and if a response packet to the probe packet is received, whether the node is still failed can be determined based on the response packet. For example, successful receipt of a response packet within a predetermined time may be taken to mean that the node's fault has been cleared.
In some embodiments, obtaining repair status information of a failed node may include:
receiving a heartbeat signal of each storage node.
In this case, the reception status of the heartbeat signal of the failed node is one kind of repair status information. For example, successful receipt of a heartbeat signal from a failed node may be taken to mean that the failure has been cleared.
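As a rough illustration of the probe- or heartbeat-based repair check described above, the following sketch treats a probe reply received within a timeout as evidence that the fault has been cleared; the `send_probe` callable and the timeout value are assumptions for the example, not part of the disclosure:

```python
import time

def refresh_failed_nodes(failed_nodes, send_probe, timeout=2.0):
    """Remove from the fault list every node that answers a probe within `timeout`."""
    recovered = set()
    for node_id in list(failed_nodes):
        start = time.monotonic()
        reply = send_probe(node_id, timeout=timeout)  # e.g. a heartbeat or echo request
        if reply is not None and time.monotonic() - start <= timeout:
            recovered.add(node_id)
    failed_nodes -= recovered  # delete the repaired nodes' identifiers from the fault list
    return recovered
```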
In some embodiments, the method further comprises:
acquiring node state information of each storage node in a file system;
and generating fault identification information for identifying the fault node according to the node state information.
The node state information may include:
operating status information of the storage node, for example, self-reported operating status information and/or a heartbeat signal broadcast by the storage node. As another example, the operating status information may further include file access status information, such as whether file access failures occur, or the number of file access failures.
For example, when it is determined from the file access status information that one or more file access timeouts occur on a certain node, the corresponding storage node is determined to be a failed node.
For another example, when it is determined from the file access status information that one or more file accesses on a certain node return error prompts or access errors such as corrupted (garbled) file content, the corresponding storage node is determined to be a failed node.
In some embodiments, obtaining node state information for each storage node in a file system includes:
acquiring file access information of each storage node in a file system;
Generating the fault identification information for identifying the failed node according to the node state information may include: when accessing a data block of one or more files on one storage node fails, determining the corresponding storage node to be the failed node.
For example, in some embodiments, fault information with file granularity is also maintained in the distributed file system. If fewer than a predetermined number of file access failures occur on a node at one time, the accessed files may be considered damaged, and fault information with file granularity is generated. If the accesses of a number of files greater than or equal to the predetermined number fail on one node at one time, the node may be considered failed, and fault information with node granularity is generated. For example, terminals may exchange information about file access failures with one another and then determine, from the statistics of file accesses, whether a file is damaged or the whole node has failed: multiple file access failures occurring at one time on a node can be regarded as a node failure, and otherwise as a storage failure of a single file.
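The file-granularity versus node-granularity rule above can be illustrated with a small sketch; the threshold constant stands in for the "predetermined number" and is an assumed value:

```python
from collections import defaultdict

NODE_FAULT_THRESHOLD = 3  # assumed value for the "predetermined number" of failing files

def classify_failures(access_failures, failed_nodes, failed_files):
    """access_failures: iterable of (node_id, file_id) pairs that failed in one time window."""
    per_node = defaultdict(set)
    for node_id, file_id in access_failures:
        per_node[node_id].add(file_id)
    for node_id, files in per_node.items():
        if len(files) >= NODE_FAULT_THRESHOLD:
            failed_nodes.add(node_id)  # node-granularity fault record
        else:
            failed_files.update((node_id, f) for f in files)  # file-granularity fault record
```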
In some embodiments, the fault identification information may be stored on the metadata server of the distributed file system, and the metadata server may notify the client of it synchronously when issuing the stored node list to the client.
As shown in fig. 5, the present embodiment provides a data block access apparatus, including:
a first determining module 51, configured to determine a storage node of a target data block to be accessed;
a second determining module 52, configured to determine a fault node according to the fault identification information;
an accessing module 53, configured to access the target data block from a storage node other than the failed node.
In some embodiments, the first determination module 51, the second determination module 52, and the access module 53 may all be program modules; after the program module is executed by the processor, the storage node and the fault node can be determined, and the target data block is accessed from the storage nodes except the fault node.
In other embodiments, the first determining module 51, the second determining module 52, and the accessing module 53 may all be combined software-hardware modules; the combined software-hardware module may include various programmable arrays, including but not limited to a complex programmable array or a field programmable array.
In still other embodiments, the first determining module 51, the second determining module 52, and the accessing module 53 may all be pure hardware modules; the pure hardware module may include an application-specific integrated circuit.
In some embodiments, the apparatus further comprises:
the fault identification information includes: a list of failed nodes comprising node identifications of the failed nodes;
the second determining module 52 is configured to determine a fault node according to the fault node list.
In some embodiments, the apparatus further comprises:
the first acquisition module is used for acquiring the repair status information of the fault node;
and the updating module is used for deleting, according to the repair status information, the node identification of the storage node whose fault has been removed from the fault identification information.
In some embodiments, the apparatus further comprises:
the second acquisition module is used for acquiring node state information of each storage node in the file system;
and the generating module is used for generating fault identification information for identifying the fault node according to the node state information.
In some embodiments, the second obtaining module is configured to obtain file access information of each storage node in the file system;
the generation module is specifically configured to determine, when accessing a data block of one or more files on one storage node fails, that the corresponding storage node is a failed node.
Several specific examples are provided below in connection with any of the above embodiments:
example 1:
this example is a global fault node detection mechanism. When a read operation of a file detects a failed node, the failed node is added to a global failed node list, and when a read operation of another file attempts to access the data node, it is detected whether it is in the global failed node list, and if so, it is directly switched to another node. To reduce the effects of temporary failures (such as network jitter or data node restart maintenance, etc.), each node in the global failure list will detect their survival and reject it from the failed node if it is found to be active again. The detection period of each fault node increases with the increase of the detection times, so that the waste of system resources caused by the frequent detection of the real fault node can be avoided.
The global fault list is one type of fault identification information.
Example 2:
if the data blocks of a plurality of files have copies on the same node, when the node fails, the access to each file is repeatedly tried to acquire the data from the failed node, and the node is marked as the failed node after failure. This increases the delay of access to the file data per node containing the failure.
In this example, a global list of failed nodes would be built at the client instead of each file individually. Before each access to a data block, the data block is searched in the global data list, if the node to be accessed is found in the list, the node is skipped, and another node in the copy of the data block is selected. In this way, after a node is found to be faulty by accessing a file, the data access of other files will avoid the node, so that the access performance of other files can be improved, and the delay of data acquisition is reduced.
In the case of fig. 1, it is assumed that the client accesses file 1, file 2, and file 4, and that node b fails at some point in time. As in the case of fig. 3, in a client cache (cache) over time in a conventional implementation. Fig. 4 shows the client information after the global bad node list is adopted in the present exemplary method.
In this example implementation, to cope with the situation of repairing bad nodes (such as restarting a node, for example), a background thread scans the bad node list and accesses the bad node to determine whether the bad node is repaired, if the detection finds that the bad node is repaired, the bad node is deleted from the bad node list, so that it is possible to prevent repeated attempts to use the failed node when accessing the data of different files in the case of failure of the data node.
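One possible way to run this repair scan from a background thread is sketched below; the `is_repaired` callable and the scan period are assumptions for illustration, not the patented design:

```python
import threading
import time

def start_background_scanner(bad_nodes, is_repaired, period=10.0):
    """Periodically remove repaired nodes from the shared bad-node set."""
    def loop():
        while True:
            for node_id in list(bad_nodes):
                if is_repaired(node_id):      # e.g. probe the node and check the reply
                    bad_nodes.discard(node_id)
            time.sleep(period)
    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread
```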
The embodiment of the disclosure provides a data block access device, which comprises a processor, a memory and an executable program stored on the memory and capable of being operated by the processor, wherein the processor executes the data block access method provided by any technical scheme. For example, fig. 6 is a block diagram illustrating a data block access apparatus 800 according to an example embodiment. For example, apparatus 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 6, apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the device 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 806 provides power to the various components of the device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen between the device 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or sliding action, but also the duration and pressure associated with the touch or sliding operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the apparatus 800. For example, the sensor assembly 814 may detect an on/off state of the device 800 and the relative positioning of components such as the display and keypad of the device 800; the sensor assembly 814 may also detect a change in position of the device 800 or one of its components, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the apparatus 800 and other devices, either in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi,2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including instructions executable by processor 820 of apparatus 800 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Embodiments of the present disclosure also provide a non-transitory computer-readable storage medium, which may be simply referred to as a storage medium. The non-transitory computer-readable storage medium stores thereon a program executable by the computer. The instructions in the storage medium, when executed by a processor of a mobile terminal, enable the terminal to perform a data block access method comprising:
determining a storage node of a target data block to be accessed;
determining a fault node according to the fault identification information;
the target data block is accessed from a storage node other than the failed node.
In some embodiments, the fault identification information includes: a list of failed nodes comprising node identifications of the failed nodes;
determining a fault node according to the fault information, including:
and determining the fault node according to the fault node list.
In some embodiments, the method further comprises:
obtaining repair status information of a fault node;
and deleting the node identification of the storage node with the removed fault from the fault identification information according to the repair status information.
In some embodiments, the method further comprises:
acquiring node state information of each storage node in a file system;
and generating fault identification information for identifying the fault node according to the node state information.
In some embodiments, obtaining node state information for each storage node in a file system includes:
acquiring file access information of each storage node in a file system;
generating fault identification information for identifying a fault node according to the node state information, including:
when a data block of one or more files fails to be accessed on one storage node, the corresponding storage node is determined to be a failed node.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (12)

1. A method of data block access performed by a client of a distributed file system, the method comprising:
determining a storage node of a target data block to be accessed;
receiving fault identification information broadcast and sent by a metadata server of the distributed file system, wherein the fault identification information comprises global fault identification information broadcast and distributed by the distributed file system;
determining a fault node according to the fault identification information;
the target data block is accessed from the storage nodes other than the failed node.
2. The method of claim 1, wherein the fault identification information comprises: a list of failed nodes comprising node identifications of the failed nodes;
the determining the fault node according to the fault identification information comprises the following steps:
and determining the fault node according to the fault node list.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring repair status information of the fault node;
and deleting the node identification of the storage node with the removed fault from the fault identification information according to the repair status information.
4. The method according to any one of claims 1 or 2, further comprising:
acquiring node state information of each storage node in a file system;
and generating fault identification information for identifying the fault node according to the node state information.
5. The method of claim 4, wherein the obtaining node state information for each storage node in the file system comprises:
acquiring file access information of each storage node in the file system;
the generating fault identification information for identifying the fault node according to the node state information includes:
when accessing a data block of one or more files on one of the storage nodes fails, determining the corresponding storage node as the failed node.
6. A data block access apparatus, the apparatus operating on a client of a distributed file system, comprising:
the first determining module is used for determining a storage node of the target data block to be accessed;
the second determining module is used for receiving fault identification information broadcast and sent by a metadata server of the distributed file system, wherein the fault identification information comprises global fault identification information broadcast and distributed by the distributed file system, and for determining a fault node according to the fault identification information;
and the access module is used for accessing the target data block from the storage nodes except the fault node.
7. The apparatus of claim 6, wherein the apparatus further comprises:
the fault identification information includes: a list of failed nodes comprising node identifications of the failed nodes;
the second determining module is configured to determine the fault node according to the fault node list.
8. The apparatus according to claim 6 or 7, characterized in that the apparatus further comprises:
the first acquisition module is used for acquiring the repair status information of the fault node;
and the updating module is used for deleting the node identification of the storage node with the removed fault from the fault identification information according to the repair status information.
9. The apparatus according to any one of claims 6 or 7, further comprising:
the second acquisition module is used for acquiring node state information of each storage node in the file system;
and the generating module is used for generating fault identification information for identifying the fault node according to the node state information.
10. The apparatus of claim 9, wherein the second obtaining module is configured to obtain file access information of each storage node in the file system;
the generation module is specifically configured to determine that, when accessing a data block of one or more files on one storage node fails, the corresponding storage node is the failed node.
11. A data block access apparatus comprising a processor, a memory and an executable program stored on the memory and capable of being run by the processor, wherein the processor performs the steps of the data block access method of any one of claims 1 to 5 when the executable program is run by the processor.
12. A storage medium having stored thereon an executable program which when executed by a processor performs the steps of the data block access method of any of claims 1 to 5.
CN202010014783.4A 2020-01-07 2020-01-07 Data block access method and device and storage medium Active CN111274205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010014783.4A CN111274205B (en) 2020-01-07 2020-01-07 Data block access method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010014783.4A CN111274205B (en) 2020-01-07 2020-01-07 Data block access method and device and storage medium

Publications (2)

Publication Number Publication Date
CN111274205A CN111274205A (en) 2020-06-12
CN111274205B 2024-03-26

Family

ID=71001565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010014783.4A Active CN111274205B (en) 2020-01-07 2020-01-07 Data block access method and device and storage medium

Country Status (1)

Country Link
CN (1) CN111274205B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112181314A (en) * 2020-10-28 2021-01-05 浪潮云信息技术股份公司 Distributed storage method and system
CN114625325B (en) * 2022-05-16 2022-09-23 阿里云计算有限公司 Distributed storage system and storage node offline processing method thereof
CN115454958B (en) * 2022-09-15 2024-03-05 北京百度网讯科技有限公司 Data processing method, device, equipment, system and medium based on artificial intelligence

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102624542A (en) * 2010-12-10 2012-08-01 Microsoft Corp. Providing transparent failover in a file system
CN103778120A (en) * 2012-10-17 2014-05-07 Tencent Technology (Shenzhen) Co., Ltd. Global file identification generation method, generation device and corresponding distributed file system
CN103942112A (en) * 2013-01-22 2014-07-23 Shenzhen Tencent Computer Systems Co., Ltd. Magnetic disk fault-tolerance method, device and system
CN104750757A (en) * 2013-12-31 2015-07-01 China Mobile Communications Corporation Data storage method and equipment based on HBase
CN105531675A (en) * 2013-06-19 2016-04-27 Hitachi Data Systems Engineering UK Ltd. Decentralized distributed computing system
US9990253B1 (en) * 2011-03-31 2018-06-05 EMC IP Holding Company LLC System and method for recovering file systems without a replica

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8533299B2 (en) * 2010-04-19 2013-09-10 Microsoft Corporation Locator table and client library for datacenters

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102624542A (en) * 2010-12-10 2012-08-01 Microsoft Corp. Providing transparent failover in a file system
US9990253B1 (en) * 2011-03-31 2018-06-05 EMC IP Holding Company LLC System and method for recovering file systems without a replica
CN103778120A (en) * 2012-10-17 2014-05-07 Tencent Technology (Shenzhen) Co., Ltd. Global file identification generation method, generation device and corresponding distributed file system
CN103942112A (en) * 2013-01-22 2014-07-23 Shenzhen Tencent Computer Systems Co., Ltd. Magnetic disk fault-tolerance method, device and system
CN105531675A (en) * 2013-06-19 2016-04-27 Hitachi Data Systems Engineering UK Ltd. Decentralized distributed computing system
CN104750757A (en) * 2013-12-31 2015-07-01 China Mobile Communications Corporation Data storage method and equipment based on HBase

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A tile-based scalable raster data management system based on HDFS";Guangqing Zhang等;《2012 20th International Conference on Geoinformatics》;20120820;第1-4页 *
"基于CIM的电网冰灾异常故障特征节点检测算法";朱椤方等;《科技通报》;20150630;第226-228页 *

Also Published As

Publication number Publication date
CN111274205A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
CN111274205B (en) Data block access method and device and storage medium
US11500744B2 (en) Method for primary-backup server switching, and control server
CN107750466B (en) Pairing nearby devices using synchronized alert signals
CN107040576B (en) Information pushing method and device and communication system
CN112506553B (en) Upgrading method and device for data surface container of service grid and electronic equipment
US11523146B2 (en) Live broadcast method and apparatus, electronic device, and storage medium
CN108804684B (en) Data processing method and device
CN108737588B (en) Domain name resolution method and device
CN113064919A (en) Data processing method, data storage system, computer device and storage medium
CN107463419B (en) Application restarting method and device and computer readable storage medium
WO2023220980A1 (en) Frequency sweeping method and apparatus, and storage medium
CN111290882B (en) Data file backup method, data file backup device and electronic equipment
CN112632184A (en) Data processing method and device, electronic equipment and storage medium
CN111274210B (en) Metadata processing method and device and electronic equipment
CN115134231B (en) Communication method, device and device for communication
CN110119471B (en) Method and device for checking consistency of search results
CN114237497B (en) Distributed storage method and device
CN113157604B (en) Data acquisition method and device based on distributed system and related products
CN112804371B (en) Domain name resolution processing method and device
CN116909760B (en) Data processing method, device, readable storage medium and electronic equipment
CN111625536B (en) Data access method and device
CN115941472A (en) Resource updating method and device, electronic equipment and readable storage medium
CN115328601A (en) Navigation bar data updating method and device, electronic equipment and readable storage medium
CN117176820A (en) Service request processing method, device, equipment and storage medium
CN114489856A (en) Application program configuration file acquisition method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 100085 unit C, building C, lin66, Zhufang Road, Qinghe, Haidian District, Beijing
Applicant after: Beijing Xiaomi pinecone Electronic Co.,Ltd.
Address before: 100085 unit C, building C, lin66, Zhufang Road, Qinghe, Haidian District, Beijing
Applicant before: BEIJING PINECONE ELECTRONICS Co.,Ltd.
GR01 Patent grant