US20170249228A1 - Persistent device fault indicators - Google Patents

Persistent device fault indicators

Info

Publication number
US20170249228A1
US20170249228A1 (application US 15/401,377; published as US 2017/0249228 A1)
Authority
US
United States
Prior art keywords: memory, devices, status, interpreting, status information
Legal status
Abandoned
Application number
US15/401,377
Inventor
Ryan J. Attard
Omkar Deshmukh
Dustin M. Hendrickson
Trent W. Johnson
Current Assignee
Pure Storage Inc
Original Assignee
International Business Machines Corp
Application filed by International Business Machines Corp
Priority to US 15/401,377
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. Assignors: ATTARD, RYAN J.; HENDRICKSON, DUSTIN M.; DESHMUKH, OMKAR; JOHNSON, TRENT W.
Publication of US20170249228A1
Assigned to PURE STORAGE, INC. Assignor: INTERNATIONAL BUSINESS MACHINES CORPORATION


Classifications

    • G06F11/324 Display of status information
    • G06F11/327 Alarm or error message display
    • G06F11/0781 Error filtering or prioritizing based on a policy defined by the user or by a hardware/software module, e.g. according to a severity level
    • G06F11/1076 Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G06F11/0709 Error or fault processing in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
    • G06F11/0727 Error or fault processing in a storage system, e.g. in a DASD or network based storage system
    • G06F11/0751 Error or fault detection not based on redundancy
    • G06F11/079 Root cause analysis, i.e. error or fault diagnosis
    • G06F11/0793 Remedial or corrective actions
    • G06F11/1451 Management of the data involved in backup or backup restore by selection of backup contents
    • G06F11/2094 Redundant storage or storage space
    • G06F11/3034 Monitoring arrangements where the computing system component being monitored is a storage system, e.g. DASD based or network based
    • G06F11/3051 Monitoring the configuration of the computing system or component, e.g. presence of processing resources, peripherals, I/O links, software programs
    • G06F11/3055 Monitoring the status of the computing system or component, e.g. on, off, available, not available
    • G06F13/4022 Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
    • G06F13/4282 Bus transfer protocol on a serial bus, e.g. I2C bus, SPI bus
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0605 Improving or facilitating administration by facilitating the interaction with a user or administrator
    • G06F3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F3/0623 Securing storage systems in relation to content
    • G06F3/0629 Configuration or reconfiguration of storage systems
    • G06F3/064 Management of blocks
    • G06F3/0644 Management of space entities, e.g. partitions, extents, pools
    • G06F3/0653 Monitoring storage devices or systems
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F30/20 Design optimisation, verification or simulation
    • G06N3/04 Neural network architecture, e.g. interconnection topology
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06N3/10 Interfaces, programming languages or software development kits, e.g. for simulating neural networks
    • G06Q10/063116 Schedule adjustment for a person or group
    • G06Q10/06316 Sequencing of tasks or work
    • G06Q10/20 Administration of product repair or maintenance
    • H03M13/1515 Reed-Solomon codes
    • H03M13/2909 Product codes
    • H03M13/3761 Decoding using code combining, e.g. Digital Fountain codes, Raptor codes or Luby Transform [LT] codes
    • H03M13/616 Matrix operations, especially for generator matrices or check matrices, e.g. column or row permutations
    • H04L63/0428 Confidential data exchange wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L63/061 Key management for key exchange, e.g. in peer-to-peer networks
    • H04L63/101 Access control lists [ACL]
    • H04L9/0869 Generation of secret information involving random numbers or seeds
    • H04L9/0894 Escrow, recovery or storing of secret information, e.g. secret key escrow or cryptographic key storage
    • H04L9/14 Cryptographic mechanisms using a plurality of keys or algorithms
    • H04L9/3242 Message authentication using keyed hash functions, e.g. message authentication codes [MACs], CBC-MAC or HMAC
    • G06F2201/84 Using snapshots, i.e. a logical point-in-time copy of the data
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • This invention relates generally to computer networks and more particularly to dispersing error encoded data.
  • Computing devices are known to communicate data, process data, and/or store data. Such computing devices range from wireless smart phones, laptops, tablets, personal computers (PC), work stations, and video game devices, to data centers that support millions of web searches, stock trades, or on-line purchases every day.
  • a computing device includes a central processing unit (CPU), a memory system, user input/output interfaces, peripheral device interfaces, and an interconnecting bus structure.
  • a computer may effectively extend its CPU by using “cloud computing” to perform one or more computing functions (e.g., a service, an application, an algorithm, an arithmetic logic function, etc.) on behalf of the computer.
  • cloud computing may be performed by multiple cloud computing resources in a distributed manner to improve the response time for completion of the service, application, and/or function.
  • Hadoop is an open source software framework that supports distributed applications enabling application execution by thousands of computers.
  • a computer may use “cloud storage” as part of its memory system.
  • cloud storage enables a user, via its computer, to store files, applications, etc. on an Internet storage system.
  • the Internet storage system may include a RAID (redundant array of independent disks) system and/or a dispersed storage system that uses an error correction scheme to encode data for storage.
  • FIG. 1 is a schematic block diagram of an embodiment of a dispersed or distributed storage network (DSN) in accordance with the present invention
  • FIG. 2 is a schematic block diagram of an embodiment of a computing core in accordance with the present invention.
  • FIG. 3 is a schematic block diagram of an example of dispersed storage error encoding of data in accordance with the present invention.
  • FIG. 4 is a schematic block diagram of a generic example of an error encoding function in accordance with the present invention.
  • FIG. 5 is a schematic block diagram of a specific example of an error encoding function in accordance with the present invention.
  • FIG. 6 is a schematic block diagram of an example of a slice name of an encoded data slice (EDS) in accordance with the present invention.
  • FIG. 7 is a schematic block diagram of an example of dispersed storage error decoding of data in accordance with the present invention.
  • FIG. 8 is a schematic block diagram of a generic example of an error decoding function in accordance with the present invention.
  • FIG. 9 is a schematic block diagram of an embodiment of a dispersed storage network in accordance with the present invention.
  • FIG. 9A is a flowchart illustrating an example of indicating device status in accordance with the present invention.
  • FIG. 9B is a flowchart illustrating an example of indicating electrical component status in accordance with the present invention.
  • FIG. 1 is a schematic block diagram of an embodiment of a dispersed, or distributed, storage network (DSN) 10 that includes a plurality of computing devices 12 - 16 , a managing unit 18 , an integrity processing unit 20 , and a DSN memory 22 .
  • the components of the DSN 10 are coupled to a network 24 , which may include one or more wireless and/or wire lined communication systems; one or more non-public intranet systems and/or public internet systems; and/or one or more local area networks (LAN) and/or wide area networks (WAN).
  • the DSN memory 22 includes a plurality of storage units 36 that may be located at geographically different sites (e.g., one in Chicago, one in Milwaukee, etc.), at a common site, or a combination thereof. For example, if the DSN memory 22 includes eight storage units 36 , each storage unit is located at a different site. As another example, if the DSN memory 22 includes eight storage units 36 , all eight storage units are located at the same site. As yet another example, if the DSN memory 22 includes eight storage units 36 , a first pair of storage units are at a first common site, a second pair of storage units are at a second common site, a third pair of storage units are at a third common site, and a fourth pair of storage units are at a fourth common site.
  • Note that a DSN memory 22 may include more or fewer than eight storage units 36 . Further note that each storage unit 36 includes a computing core (as shown in FIG. 2 , or components thereof) and a plurality of memory devices for storing dispersed error encoded data.
  • Each of the computing devices 12 - 16 , the managing unit 18 , and the integrity processing unit 20 include a computing core 26 , which includes network interfaces 30 - 33 .
  • Computing devices 12 - 16 may each be a portable computing device and/or a fixed computing device.
  • a portable computing device may be a social networking device, a gaming device, a cell phone, a smart phone, a digital assistant, a digital music player, a digital video player, a laptop computer, a handheld computer, a tablet, a video game controller, and/or any other portable device that includes a computing core.
  • a fixed computing device may be a computer (PC), a computer server, a cable set-top box, a satellite receiver, a television set, a printer, a fax machine, home entertainment equipment, a video game console, and/or any type of home or office computing equipment.
  • each of the managing unit 18 and the integrity processing unit 20 may be separate computing devices, may be a common computing device, and/or may be integrated into one or more of the computing devices 12 - 16 and/or into one or more of the storage units 36 .
  • Each interface 30 , 32 , and 33 includes software and hardware to support one or more communication links via the network 24 indirectly and/or directly.
  • interface 30 supports a communication link (e.g., wired, wireless, direct, via a LAN, via the network 24 , etc.) between computing devices 14 and 16 .
  • interface 32 supports communication links (e.g., a wired connection, a wireless connection, a LAN connection, and/or any other type of connection to/from the network 24 ) between computing devices 12 & 16 and the DSN memory 22 .
  • interface 33 supports a communication link for each of the managing unit 18 and the integrity processing unit 20 to the network 24 .
  • Computing devices 12 and 16 include a dispersed storage (DS) client module 34 , which enables the computing device to dispersed storage error encode and decode data as subsequently described with reference to one or more of FIGS. 3-8 .
  • computing device 16 functions as a dispersed storage processing agent for computing device 14 .
  • computing device 16 dispersed storage error encodes and decodes data on behalf of computing device 14 .
  • the DSN 10 is tolerant of a significant number of storage unit failures (the number of failures is based on parameters of the dispersed storage error encoding function) without loss of data and without the need for redundant or backup copies of the data. Further, the DSN 10 stores data for an indefinite period of time without data loss and in a secure manner (e.g., the system is very resistant to unauthorized attempts at accessing the data).
  • the managing unit 18 performs DS management services. For example, the managing unit 18 establishes distributed data storage parameters (e.g., vault creation, distributed storage parameters, security parameters, billing information, user profile information, etc.) for computing devices 12 - 14 individually or as part of a group of user devices. As a specific example, the managing unit 18 coordinates creation of a vault (e.g., a virtual memory block associated with a portion of an overall namespace of the DSN) within the DSN memory 22 for a user device, a group of devices, or for public access, and establishes per vault dispersed storage (DS) error encoding parameters for a vault.
  • the managing unit 18 facilitates storage of DS error encoding parameters for each vault by updating registry information of the DSN 10 , where the registry information may be stored in the DSN memory 22 , a computing device 12 - 16 , the managing unit 18 , and/or the integrity processing unit 20 .
  • the DSN managing unit 18 creates and stores user profile information (e.g., an access control list (ACL)) in local memory and/or within memory of the DSN memory 22 .
  • the user profile information includes authentication information, permissions, and/or the security parameters.
  • the security parameters may include encryption/decryption scheme, one or more encryption keys, key generation scheme, and/or data encoding/decoding scheme.
  • the DSN managing unit 18 creates billing information for a particular user, a user group, a vault access, public vault access, etc. For instance, the DSN managing unit 18 tracks the number of times a user accesses a non-public vault and/or public vaults, which can be used to generate per-access billing information. In another instance, the DSN managing unit 18 tracks the amount of data stored and/or retrieved by a user device and/or a user group, which can be used to generate per-data-amount billing information.
  • the managing unit 18 performs network operations, network administration, and/or network maintenance.
  • Network operations includes authenticating user data allocation requests (e.g., read and/or write requests), managing creation of vaults, establishing authentication credentials for user devices, adding/deleting components (e.g., user devices, storage units, and/or computing devices with a DS client module 34 ) to/from the DSN 10 , and/or establishing authentication credentials for the storage units 36 .
  • Network administration includes monitoring devices and/or units for failures, maintaining vault information, determining device and/or unit activation status, determining device and/or unit loading, and/or determining any other system level operation that affects the performance level of the DSN 10 .
  • Network maintenance includes facilitating replacing, upgrading, repairing, and/or expanding a device and/or unit of the DSN 10 .
  • the integrity processing unit 20 performs rebuilding of ‘bad’ or missing encoded data slices.
  • the integrity processing unit 20 performs rebuilding by periodically attempting to retrieve/list encoded data slices, and/or slice names of the encoded data slices, from the DSN memory 22 .
  • Retrieved encoded slices are checked for errors due to data corruption, outdated version, etc. If a slice includes an error, it is flagged as a ‘bad’ slice.
  • Encoded data slices that were not received and/or not listed are flagged as missing slices.
  • Bad and/or missing slices are subsequently rebuilt using other retrieved encoded data slices that are deemed to be good slices to produce rebuilt slices.
  • The rebuilt slices are stored in the DSN memory 22 .
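  • Schematically, the scan-and-rebuild behavior described above for the integrity processing unit 20 can be sketched as follows. This is a toy, self-contained Python model, not the patent's interfaces: a slice is represented as a (version, payload) pair, a missing slice as None, and the actual re-encoding of rebuilt slices is stubbed out.

      # Toy model of the integrity scan: flag bad/missing slices, then
      # rebuild them when at least a decode-threshold number of good
      # slices remain. All names and the rebuilt payload are illustrative.
      def integrity_scan(store, current_version, decode_threshold):
          """store: {segment_id: {slice_name: (version, payload) or None}}"""
          for segment_id, slices in store.items():
              good, flagged = {}, []
              for name, s in slices.items():
                  if s is None or s[0] < current_version:
                      flagged.append(name)        # missing or 'bad' (outdated)
                  else:
                      good[name] = s              # deemed a good slice
              if flagged and len(good) >= decode_threshold:
                  for name in flagged:            # rebuild from the good slices
                      slices[name] = (current_version, b"<re-encoded slice>")

      store = {"seg1": {"SN 1_1": (2, b"x"), "SN 2_1": None,
                        "SN 3_1": (2, b"y"), "SN 4_1": (1, b"stale"),
                        "SN 5_1": (2, b"z")}}
      integrity_scan(store, current_version=2, decode_threshold=3)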
  • FIG. 2 is a schematic block diagram of an embodiment of a computing core 26 that includes a processing module 50 , a memory controller 52 , main memory 54 , a video graphics processing unit 55 , an input/output (IO) controller 56 , a peripheral component interconnect (PCI) interface 58 , an IO interface module 60 , at least one IO device interface module 62 , a read only memory (ROM) basic input output system (BIOS) 64 , and one or more memory interface modules.
  • the one or more memory interface module(s) includes one or more of a universal serial bus (USB) interface module 66 , a host bus adapter (HBA) interface module 68 , a network interface module 70 , a flash interface module 72 , a hard drive interface module 74 , and a DSN interface module 76 .
  • the DSN interface module 76 functions to mimic a conventional operating system (OS) file system interface (e.g., network file system (NFS), flash file system (FFS), disk file system (DFS), file transfer protocol (FTP), web-based distributed authoring and versioning (WebDAV), etc.) and/or a block memory interface (e.g., small computer system interface (SCSI), internet small computer system interface (iSCSI), etc.).
  • the DSN interface module 76 and/or the network interface module 70 may function as one or more of the interface 30 - 33 of FIG. 1 .
  • the IO device interface module 62 and/or the memory interface modules 66 - 76 may be collectively or individually referred to as IO ports.
  • FIG. 3 is a schematic block diagram of an example of dispersed storage error encoding of data.
  • When a computing device 12 or 16 has data to store, it dispersed storage error encodes the data in accordance with a dispersed storage error encoding process based on dispersed storage error encoding parameters.
  • the dispersed storage error encoding parameters include an encoding function (e.g., information dispersal algorithm, Reed-Solomon, Cauchy Reed-Solomon, systematic encoding, non-systematic encoding, on-line codes, etc.), a data segmenting protocol (e.g., data segment size, fixed, variable, etc.), and per data segment encoding values.
  • the per data segment encoding values include a total, or pillar width, number (T) of encoded data slices per encoding of a data segment (i.e., in a set of encoded data slices); a decode threshold number (D) of encoded data slices of a set of encoded data slices that are needed to recover the data segment; a read threshold number (R) of encoded data slices to indicate a number of encoded data slices per set to be read from storage for decoding of the data segment; and/or a write threshold number (W) to indicate a number of encoded data slices per set that must be accurately stored before the encoded data segment is deemed to have been properly stored.
  • the dispersed storage error encoding parameters may further include slicing information (e.g., the number of encoded data slices that will be created for each data segment) and/or slice security information (e.g., per encoded data slice encryption, compression, integrity checksum, etc.).
  • In an example, the encoding function has been selected as Cauchy Reed-Solomon (a generic example is shown in FIG. 4 and a specific example is shown in FIG. 5 ); the data segmenting protocol is to divide the data object into fixed sized data segments; and the per data segment encoding values include: a pillar width of 5, a decode threshold of 3, a read threshold of 4, and a write threshold of 4.
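  • As a concrete illustration of how these parameters relate, below is a minimal Python sketch. The names and the ordering constraint D ≤ R ≤ W ≤ T are assumptions about how such values typically fit together, not text from this disclosure.

      from dataclasses import dataclass

      # Illustrative container for the per-data-segment encoding values.
      @dataclass(frozen=True)
      class EncodingValues:
          pillar_width: int      # T: encoded data slices per set
          decode_threshold: int  # D: slices needed to recover a segment
          read_threshold: int    # R: slices read per set for decoding
          write_threshold: int   # W: slices that must store successfully

          def __post_init__(self):
              # Assumed consistency constraint: D <= R <= W <= T.
              t, d = self.pillar_width, self.decode_threshold
              r, w = self.read_threshold, self.write_threshold
              assert 0 < d <= r <= w <= t, "expected D <= R <= W <= T"

      # The example parameters used with FIGS. 4-5:
      params = EncodingValues(pillar_width=5, decode_threshold=3,
                              read_threshold=4, write_threshold=4)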
  • the computing device 12 or 16 divides the data (e.g., a file (e.g., text, video, audio, etc.), a data object, or other data arrangement) into a plurality of fixed sized data segments (e.g., 1 through Y, each of a fixed size in the range of kilobytes to terabytes or more).
  • the number of data segments created is dependent on the size of the data and the data segmenting protocol.
  • FIG. 4 illustrates a generic Cauchy Reed-Solomon encoding function, which includes an encoding matrix (EM), a data matrix (DM), and a coded matrix (CM).
  • the size of the encoding matrix (EM) is dependent on the pillar width number (T) and the decode threshold number (D) of selected per data segment encoding values.
  • Z is a function of the number of data blocks created from the data segment and the decode threshold number (D).
  • the coded matrix is produced by matrix multiplying the data matrix by the encoding matrix.
  • FIG. 5 illustrates a specific example of Cauchy Reed-Solomon encoding with a pillar number (T) of five and decode threshold number of three.
  • a first data segment is divided into twelve data blocks (D 1 -D 12 ).
  • the coded matrix includes five rows of coded data blocks, where the first row of X 11 -X 14 corresponds to a first encoded data slice (EDS 1 _ 1 ), the second row of X 21 -X 24 corresponds to a second encoded data slice (EDS 2 _ 1 ), the third row of X 31 -X 34 corresponds to a third encoded data slice (EDS 3 _ 1 ), the fourth row of X 41 -X 44 corresponds to a fourth encoded data slice (EDS 4 _ 1 ), and the fifth row of X 51 -X 54 corresponds to a fifth encoded data slice (EDS 5 _ 1 ).
  • the second number of the EDS designation corresponds to the data segment number.
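  • To make the matrix arithmetic concrete, here is a minimal Python sketch of the FIG. 5 shape (pillar width T=5, decode threshold D=3, a twelve-block data segment). It substitutes integer arithmetic modulo a small prime for the finite-field arithmetic a production Cauchy Reed-Solomon coder would use; the matrix values and helper names are illustrative only.

      P = 257  # small prime standing in for Galois-field arithmetic

      def cauchy_matrix(t, d):
          """Build a T x D Cauchy matrix: em[i][j] = 1/(x_i + y_j) mod P."""
          xs = list(range(d, d + t))   # x_i values, disjoint from the y_j
          ys = list(range(d))          # y_j values
          return [[pow(x + y, P - 2, P) for y in ys] for x in xs]

      def encode(segment_blocks, t, d):
          """Return T encoded slices: the rows of CM = EM x DM (mod P)."""
          z = len(segment_blocks) // d                     # data-matrix columns
          dm = [segment_blocks[i * z:(i + 1) * z] for i in range(d)]  # D x Z
          em = cauchy_matrix(t, d)                         # T x D
          return [[sum(em[r][k] * dm[k][c] for k in range(d)) % P
                   for c in range(z)]
                  for r in range(t)]                       # T x Z coded matrix

      blocks = list(range(1, 13))           # D1..D12 as toy values
      slices = encode(blocks, t=5, d=3)     # EDS 1_1 .. EDS 5_1
      for i, s in enumerate(slices, 1):
          print(f"EDS {i}_1: {s}")          # e.g. row X11-X14 for slice 1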
  • the computing device also creates a slice name (SN) for each encoded data slice (EDS) in the set of encoded data slices.
  • a typical format for a slice name 60 is shown in FIG. 6 .
  • the slice name (SN) 60 includes a pillar number of the encoded data slice (e.g., one of 1 -T), a data segment number (e.g., one of 1 -Y), a vault identifier (ID), a data object identifier (ID), and may further include revision level information of the encoded data slices.
  • the slice name functions as, at least part of, a DSN address for the encoded data slice for storage and retrieval from the DSN memory 22 .
  • the computing device 12 or 16 produces a plurality of sets of encoded data slices, which are provided with their respective slice names to the storage units for storage.
  • the first set of encoded data slices includes EDS 1 _ 1 through EDS 5 _ 1 and the first set of slice names includes SN 1 _ 1 through SN 5 _ 1 and the last set of encoded data slices includes EDS 1 _Y through EDS 5 _Y and the last set of slice names includes SN 1 _Y through SN 5 _Y.
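  • The slice-name fields of FIG. 6 can be pictured as a simple record, as in the sketch below; the field order, widths, and address rendering are assumptions for illustration, not the patent's layout.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class SliceName:
          pillar_number: int   # 1..T: which encoded slice in the set
          data_segment: int    # 1..Y: which segment of the data object
          vault_id: int        # vault within the overall DSN namespace
          object_id: int       # identifies the data object
          revision: int = 0    # optional revision level of the slice

          def dsn_address(self) -> str:
              """Render the fields as a flat DSN address for storage/retrieval."""
              return (f"{self.vault_id:08x}:{self.object_id:016x}:"
                      f"{self.data_segment}:{self.pillar_number}:{self.revision}")

      # First set of slice names, SN 1_1 through SN 5_1, for segment 1:
      names = [SliceName(p, 1, vault_id=0x2A, object_id=0xBEEF)
               for p in range(1, 6)]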
  • FIG. 7 is a schematic block diagram of an example of dispersed storage error decoding of a data object that was dispersed storage error encoded and stored in the example of FIG. 4 .
  • the computing device 12 or 16 retrieves from the storage units at least the decode threshold number of encoded data slices per data segment. As a specific example, the computing device retrieves a read threshold number of encoded data slices.
  • the computing device uses a decoding function as shown in FIG. 8 .
  • the decoding function is essentially an inverse of the encoding function of FIG. 4 .
  • the coded matrix includes a decode threshold number of rows (e.g., three in this example) and the decoding matrix is an inversion of the encoding matrix that includes the corresponding rows of the coded matrix. For example, if the coded matrix includes rows 1, 2, and 4, the encoding matrix is reduced to rows 1, 2, and 4, and then inverted to produce the decoding matrix.
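  • Continuing the toy encoding sketch above (same modulus P, cauchy_matrix, blocks, and slices), the row-selection-and-inversion step can be sketched as follows; the Gauss-Jordan helper is an illustrative stand-in for a finite-field matrix inverse.

      def invert(m):
          """Gauss-Jordan inversion of a square matrix modulo P."""
          n = len(m)
          a = [row[:] + [int(i == j) for j in range(n)]
               for i, row in enumerate(m)]
          for col in range(n):
              piv = next(r for r in range(col, n) if a[r][col])
              a[col], a[piv] = a[piv], a[col]
              inv = pow(a[col][col], P - 2, P)
              a[col] = [v * inv % P for v in a[col]]
              for r in range(n):
                  if r != col and a[r][col]:
                      f = a[r][col]
                      a[r] = [(v - f * a[col][c]) % P
                              for c, v in enumerate(a[r])]
          return [row[n:] for row in a]

      def decode(received, rows, t, d):
          """received: any D slices; rows: their pillar indices (0-based)."""
          em = cauchy_matrix(t, d)
          dec = invert([em[r] for r in rows])    # D x D decoding matrix
          z = len(received[0])
          dm = [[sum(dec[i][k] * received[k][c] for k in range(d)) % P
                 for c in range(z)] for i in range(d)]
          return [v for row in dm for v in row]  # flatten back to data blocks

      # Recover the segment from slices 1, 2, and 4 (rows 0, 1, 3):
      assert decode([slices[0], slices[1], slices[3]], [0, 1, 3], 5, 3) == blocks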
  • Memory devices are sometimes mounted in such a way that multiple memory devices occupy the same carrier, and power must be disconnected from a carrier before it can be removed. For example, some systems contain “two memory device carriers,” both devices of which must be removed together. Each memory device location in the chassis may have an indicator that reports the health of its memory device, but these indicators often require power to function. The problem is that when power is disconnected to service a memory device, the indication of which memory device is unhealthy may no longer be available, creating much room for operator error.
  • The technology described herein implements a visibly persistent indicator, such as electronic ink (“e-ink”) or a thrown switch, for each memory device, such that the health status of the memory device can be known even when no power is connected. This solution could be extended to other electrical component types that require a persistent, clear visual cue serving as a health indicator after power is disconnected.
  • FIG. 9 is a schematic block diagram of another embodiment of a dispersed storage network that includes a plurality of distributed storage and task (DST) processing units 1 -D, the network 24 of FIG. 1 , the distributed storage and task network (DSTN) managing unit 18 of FIG. 1 , and a set of DST execution units 1 - n.
  • Each DST execution unit includes a processing module 50 of FIG. 2 , a plurality of memories 1 -M, and a plurality of persistent indicators 1 -M (visibly persistent).
  • Each memory may be implemented utilizing the DSN memory 22 of FIG. 1 .
  • Each memory is associated with a persistent indicator.
  • Each persistent indicator may be implemented utilizing one or more of electronic ink or a latching mechanical device (e.g., a latching relay, a circuit breaker, etc.) such that a status indication is maintained with or without power.
  • a subset of the memories may be associated with a common physical power bus (e.g., when the memories of the subset are common to a physical array or gang).
  • the DSN functions to indicate device status.
  • the processing module 50 obtains status information for each memory device of a plurality of memory devices, where the plurality of memory devices share a common power connection.
  • the status information includes one or more of a failed indication, an operational indication, a memory size level, a storage utilization level, a utilization rate, an average bandwidth utilization level, a storage error rate, a retrieval error rate, a number of failed memory locations, identifiers of failed memory locations, a memory identifier, a group memory identifier, a manufacturer identifier, a model number, a serial number, etc.
  • the obtaining includes at least one of initiating a test, interpreting a test result, initiating a query, interpreting a query response, estimating based on one or more other memory devices of the plurality of memory devices, performing a lookup, and interpreting an error message.
  • For each memory device of the plurality of memory devices, the processing module 50 updates an associated persistent indicator based on the obtained status information.
  • the updating includes converting the status information to one or more driver electrical signals for the persistent indicator, and activating the persistent indicator utilizing the one or more driver electrical signals.
  • the processing module 50 activates a memory failed indicator of the persistent indicator 2 when the processing module 50 determines that status information associated with the memory 2 indicates that the memory 2 has failed.
  • The invention may provide a system repair optimization: when a service technician removes the plurality of memories from the DST execution unit for servicing, the technician can accurately identify which memory devices have failed and which have not.
  • FIG. 9A is a flowchart illustrating an example of indicating device status. In particular, a method is presented for use in conjunction with one or more functions and features described in conjunction with FIGS. 1-2, 3-8 , and also FIG. 9 .
  • the method includes a step 902 where a processing module (e.g., of a distributed storage and task (DST) execution unit) identifies a group of memory devices that share a common power connection.
  • the identifying includes one or more of interpreting system registry information, interpreting a power cycle test result, and receiving configuration information.
  • the method continues at step 904 where, for each memory device of the group of memory devices, the processing module obtains status information.
  • the obtaining includes one or more of initiating a test, interpreting a test result, initiating a query, interpreting a query response, estimating the status information based on one or more of status of other memory devices of a group of memory devices, performing a lookup, and interpreting an error message.
  • the method continues at step 906 where the processing module updates an associated persistent indicator based on the corresponding status information. For example, the processing module converts the status information to one or more driver electrical signals compatible with the persistent indicator and activates the persistent indicator utilizing the one or more driver electrical signals.
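  • A minimal, self-contained Python sketch of this FIG. 9A flow (steps 902 - 906 ) appears below. The device, power-rail, and indicator types are hypothetical stand-ins; the Indicator class latches its last state to model an e-ink cell or latching relay that keeps its indication after power is removed.

      class Indicator:
          """Models a visibly persistent indicator (e-ink, latching relay)."""
          def __init__(self):
              self.latched_failed = False          # persists across power loss
          def drive(self, failed):
              self.latched_failed = failed         # step 906: driver signal

      class MemoryDevice:
          def __init__(self, name, power_rail, healthy):
              self.name, self.power_rail = name, power_rail
              self.healthy = healthy
              self.indicator = Indicator()
          def self_test(self):                     # step 904: initiate a test
              return self.healthy

      def indicate_device_status(devices, shared_rail):
          # Step 902: identify the group sharing a common power connection.
          group = [d for d in devices if d.power_rail == shared_rail]
          for device in group:
              status_ok = device.self_test()       # step 904: obtain status
              device.indicator.drive(failed=not status_ok)  # step 906: update

      devices = [MemoryDevice("mem 1", "rail-A", healthy=True),
                 MemoryDevice("mem 2", "rail-A", healthy=False)]
      indicate_device_status(devices, shared_rail="rail-A")
      # mem 2's indicator now reads 'failed' even with power disconnected.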
  • FIG. 9B is a flowchart illustrating an example of indicating electrical component status. In particular, a method is presented for use in conjunction with one or more functions and features described in conjunction with FIGS. 1-2, 3-8 , and also FIG. 9 .
  • the method includes a step 908 where a processing module identifies a group of electrical components that share a common power connection.
  • the identifying includes one or more of interpreting system registry information, interpreting a power cycle test result, and receiving configuration information.
  • the method continues at step 910 where, for each electrical component of the group of electrical components, the processing module obtains status information.
  • the obtaining includes one or more of initiating a test, interpreting a test result, initiating a query, interpreting a query response, estimating the status information based on one or more of status of other electrical components of a group of electrical components, performing a lookup, and interpreting an error message.
  • the method continues at step 912 where the processing module updates an associated visibly persistent indicator based on the corresponding status information. For example, the processing module converts the status information to one or more driver electrical signals compatible with the persistent indicator and activates the persistent indicator utilizing the one or more driver electrical signals.
  • At least one memory section e.g., a non-transitory computer readable storage medium
  • that stores operational instructions can, when executed by one or more processing modules of one or more computing devices of the dispersed storage network (DSN), cause the one or more computing devices to perform any or all of the method steps described above.
  • the terms “substantially” and “approximately” provides an industry-accepted tolerance for its corresponding term and/or relativity between items. Such an industry-accepted tolerance ranges from less than one percent to fifty percent and corresponds to, but is not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, and/or thermal noise. Such relativity between items ranges from a difference of a few percent to magnitude differences.
  • the term(s) “configured to”, “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for an example of indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level.
  • inferred coupling i.e., where one element is coupled to another element by inference
  • the term “configured to”, “operable to”, “coupled to”, or “operably coupled to” indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more its corresponding functions and may further include inferred coupling to one or more other items.
  • the term “associated with”, includes direct and/or indirect coupling of separate items and/or one item being embedded within another item.
  • the term “compares favorably”, indicates that a comparison between two or more items, signals, etc., provides a desired relationship. For example, when the desired relationship is that signal 1 has a greater magnitude than signal 2 , a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1 .
  • the term “compares unfavorably”, indicates that a comparison between two or more items, signals, etc., fails to provide the desired relationship.
  • processing module may be a single processing device or a plurality of processing devices.
  • a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions.
  • the processing module, module, processing circuit, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, and/or processing unit.
  • a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information.
  • processing module, module, processing circuit, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). Further note that if the processing module, module, processing circuit, and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry.
  • the memory element may store, and the processing module, module, processing circuit, and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures.
  • Such a memory device or memory element can be included in an article of manufacture.
  • a flow diagram may include a “start” and/or “continue” indication.
  • the “start” and “continue” indications reflect that the steps presented can optionally be incorporated in or otherwise used in conjunction with other routines.
  • start indicates the beginning of the first step presented and may be preceded by other activities not specifically shown.
  • continue indicates that the steps presented may be performed multiple times and/or may be succeeded by other activities not specifically shown.
  • a flow diagram indicates a particular ordering of steps, other orderings are likewise possible provided that the principles of causality are maintained.
  • the one or more embodiments are used herein to illustrate one or more aspects, one or more features, one or more concepts, and/or one or more examples.
  • a physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein.
  • the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones.
  • signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential.
  • signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential.
  • a signal path is shown as a single-ended path, it also represents a differential signal path.
  • a signal path is shown as a differential path, it also represents a single-ended signal path.
  • module is used in the description of one or more of the embodiments.
  • a module implements one or more functions via a device such as a processor or other processing device or other hardware that may include or operate in association with a memory that stores operational instructions.
  • a module may operate independently and/or in conjunction with software and/or firmware.
  • a module may contain one or more sub-modules, each of which may be one or more modules.
  • a computer readable memory includes one or more memory elements.
  • a memory element may be a separate memory device, multiple memory devices, or a set of memory locations within a memory device.
  • Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information.
  • the memory device may be in a form a solid state memory, a hard drive memory, cloud memory, thumb drive, server memory, computing device memory, and/or other physical medium for storing digital information.


Abstract

A method for execution by one or more processing modules of one or more computing devices begins by identifying devices that share a power connection. The method continues by obtaining status information for each device sharing the power connection, and continues by updating an associated persistent indicator for each device based on the obtained status information.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present U.S. Utility patent application claims priority pursuant to 35 U.S.C. §119(e) to U.S. Provisional Application No. 62/301,214, entitled “ENHANCING PERFORMANCE OF A DISPERSED STORAGE NETWORK,” filed Feb. 29, 2016, which is hereby incorporated herein by reference in its entirety and made part of the present U.S. Utility patent application for all purposes.
  • STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
  • Not applicable.
  • INCORPORATION-BY-REFERENCE OF MATERIAL SUBMITTED ON A COMPACT DISC
  • Not applicable.
  • BACKGROUND OF THE INVENTION
  • Technical Field of the Invention
  • This invention relates generally to computer networks and more particularly to dispersing error encoded data.
  • Description of Related Art
  • Computing devices are known to communicate data, process data, and/or store data. Such computing devices range from wireless smart phones, laptops, tablets, personal computers (PC), work stations, and video game devices, to data centers that support millions of web searches, stock trades, or on-line purchases every day. In general, a computing device includes a central processing unit (CPU), a memory system, user input/output interfaces, peripheral device interfaces, and an interconnecting bus structure.
  • As is further known, a computer may effectively extend its CPU by using “cloud computing” to perform one or more computing functions (e.g., a service, an application, an algorithm, an arithmetic logic function, etc.) on behalf of the computer. Further, for large services, applications, and/or functions, cloud computing may be performed by multiple cloud computing resources in a distributed manner to improve the response time for completion of the service, application, and/or function. For example, Hadoop is an open source software framework that supports distributed applications enabling application execution by thousands of computers.
  • In addition to cloud computing, a computer may use “cloud storage” as part of its memory system. As is known, cloud storage enables a user, via its computer, to store files, applications, etc. on an Internet storage system. The Internet storage system may include a RAID (redundant array of independent disks) system and/or a dispersed storage system that uses an error correction scheme to encode data for storage.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
  • FIG. 1 is a schematic block diagram of an embodiment of a dispersed or distributed storage network (DSN) in accordance with the present invention;
  • FIG. 2 is a schematic block diagram of an embodiment of a computing core in accordance with the present invention;
  • FIG. 3 is a schematic block diagram of an example of dispersed storage error encoding of data in accordance with the present invention;
  • FIG. 4 is a schematic block diagram of a generic example of an error encoding function in accordance with the present invention;
  • FIG. 5 is a schematic block diagram of a specific example of an error encoding function in accordance with the present invention;
  • FIG. 6 is a schematic block diagram of an example of a slice name of an encoded data slice (EDS) in accordance with the present invention;
  • FIG. 7 is a schematic block diagram of an example of dispersed storage error decoding of data in accordance with the present invention;
  • FIG. 8 is a schematic block diagram of a generic example of an error decoding function in accordance with the present invention;
  • FIG. 9 is a schematic block diagram of an embodiment of a dispersed storage network in accordance with the present invention;
  • FIG. 9A is a flowchart illustrating an example of indicating device status in accordance with the present invention; and
  • FIG. 9B is a flowchart illustrating an example of indicating electrical component status in accordance with the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 1 is a schematic block diagram of an embodiment of a dispersed, or distributed, storage network (DSN) 10 that includes a plurality of computing devices 12-16, a managing unit 18, an integrity processing unit 20, and a DSN memory 22. The components of the DSN 10 are coupled to a network 24, which may include one or more wireless and/or wire lined communication systems; one or more non-public intranet systems and/or public internet systems; and/or one or more local area networks (LAN) and/or wide area networks (WAN).
  • The DSN memory 22 includes a plurality of storage units 36 that may be located at geographically different sites (e.g., one in Chicago, one in Milwaukee, etc.), at a common site, or a combination thereof. For example, if the DSN memory 22 includes eight storage units 36, each storage unit is located at a different site. As another example, if the DSN memory 22 includes eight storage units 36, all eight storage units are located at the same site. As yet another example, if the DSN memory 22 includes eight storage units 36, a first pair of storage units are at a first common site, a second pair of storage units are at a second common site, a third pair of storage units are at a third common site, and a fourth pair of storage units are at a fourth common site. Note that a DSN memory 22 may include more or fewer than eight storage units 36. Further note that each storage unit 36 includes a computing core (as shown in FIG. 2, or components thereof) and a plurality of memory devices for storing dispersed error encoded data.
  • Each of the computing devices 12-16, the managing unit 18, and the integrity processing unit 20 includes a computing core 26, which includes network interfaces 30-33. Computing devices 12-16 may each be a portable computing device and/or a fixed computing device. A portable computing device may be a social networking device, a gaming device, a cell phone, a smart phone, a digital assistant, a digital music player, a digital video player, a laptop computer, a handheld computer, a tablet, a video game controller, and/or any other portable device that includes a computing core. A fixed computing device may be a computer (PC), a computer server, a cable set-top box, a satellite receiver, a television set, a printer, a fax machine, home entertainment equipment, a video game console, and/or any type of home or office computing equipment. Note that each of the managing unit 18 and the integrity processing unit 20 may be a separate computing device, may be a common computing device, and/or may be integrated into one or more of the computing devices 12-16 and/or into one or more of the storage units 36.
  • Each interface 30, 32, and 33 includes software and hardware to support one or more communication links via the network 24 indirectly and/or directly. For example, interface 30 supports a communication link (e.g., wired, wireless, direct, via a LAN, via the network 24, etc.) between computing devices 14 and 16. As another example, interface 32 supports communication links (e.g., a wired connection, a wireless connection, a LAN connection, and/or any other type of connection to/from the network 24) between computing devices 12 & 16 and the DSN memory 22. As yet another example, interface 33 supports a communication link for each of the managing unit 18 and the integrity processing unit 20 to the network 24.
  • Computing devices 12 and 16 include a dispersed storage (DS) client module 34, which enables the computing device to dispersed storage error encode and decode data as subsequently described with reference to one or more of FIGS. 3-8. In this example embodiment, computing device 16 functions as a dispersed storage processing agent for computing device 14. In this role, computing device 16 dispersed storage error encodes and decodes data on behalf of computing device 14. With the use of dispersed storage error encoding and decoding, the DSN 10 is tolerant of a significant number of storage unit failures (the number of failures is based on parameters of the dispersed storage error encoding function) without loss of data and without the need for redundant or backup copies of the data. Further, the DSN 10 stores data for an indefinite period of time without data loss and in a secure manner (e.g., the system is very resistant to unauthorized attempts at accessing the data).
  • In operation, the managing unit 18 performs DS management services. For example, the managing unit 18 establishes distributed data storage parameters (e.g., vault creation, distributed storage parameters, security parameters, billing information, user profile information, etc.) for computing devices 12-14 individually or as part of a group of user devices. As a specific example, the managing unit 18 coordinates creation of a vault (e.g., a virtual memory block associated with a portion of an overall namespace of the DSN) within the DSN memory 22 for a user device, a group of devices, or for public access and establishes per vault dispersed storage (DS) error encoding parameters for a vault. The managing unit 18 facilitates storage of DS error encoding parameters for each vault by updating registry information of the DSN 10, where the registry information may be stored in the DSN memory 22, a computing device 12-16, the managing unit 18, and/or the integrity processing unit 20.
  • The DSN managing unit 18 creates and stores user profile information (e.g., an access control list (ACL)) in local memory and/or within memory of the DSN memory 22. The user profile information includes authentication information, permissions, and/or the security parameters. The security parameters may include encryption/decryption scheme, one or more encryption keys, key generation scheme, and/or data encoding/decoding scheme.
  • The DSN managing unit 18 creates billing information for a particular user, a user group, a vault access, public vault access, etc. For instance, the DSN managing unit 18 tracks the number of times a user accesses a non-public vault and/or public vaults, which can be used to generate per-access billing information. In another instance, the DSN managing unit 18 tracks the amount of data stored and/or retrieved by a user device and/or a user group, which can be used to generate per-data-amount billing information.
  • As another example, the managing unit 18 performs network operations, network administration, and/or network maintenance. Network operations includes authenticating user data allocation requests (e.g., read and/or write requests), managing creation of vaults, establishing authentication credentials for user devices, adding/deleting components (e.g., user devices, storage units, and/or computing devices with a DS client module 34) to/from the DSN 10, and/or establishing authentication credentials for the storage units 36. Network administration includes monitoring devices and/or units for failures, maintaining vault information, determining device and/or unit activation status, determining device and/or unit loading, and/or determining any other system level operation that affects the performance level of the DSN 10. Network maintenance includes facilitating replacing, upgrading, repairing, and/or expanding a device and/or unit of the DSN 10.
  • The integrity processing unit 20 performs rebuilding of ‘bad’ or missing encoded data slices. At a high level, the integrity processing unit 20 performs rebuilding by periodically attempting to retrieve/list encoded data slices, and/or slice names of the encoded data slices, from the DSN memory 22. Retrieved encoded slices are checked for errors due to data corruption, outdated version, etc. If a slice includes an error, it is flagged as a ‘bad’ slice. Encoded data slices that were not received and/or not listed are flagged as missing slices. Bad and/or missing slices are subsequently rebuilt using other retrieved encoded data slices that are deemed to be good slices to produce rebuilt slices. The rebuilt slices are stored in the DSN memory 22.
  • FIG. 2 is a schematic block diagram of an embodiment of a computing core 26 that includes a processing module 50, a memory controller 52, main memory 54, a video graphics processing unit 55, an input/output (IO) controller 56, a peripheral component interconnect (PCI) interface 58, an IO interface module 60, at least one IO device interface module 62, a read only memory (ROM) basic input output system (BIOS) 64, and one or more memory interface modules. The one or more memory interface module(s) includes one or more of a universal serial bus (USB) interface module 66, a host bus adapter (HBA) interface module 68, a network interface module 70, a flash interface module 72, a hard drive interface module 74, and a DSN interface module 76.
  • The DSN interface module 76 functions to mimic a conventional operating system (OS) file system interface (e.g., network file system (NFS), flash file system (FFS), disk file system (DFS), file transfer protocol (FTP), web-based distributed authoring and versioning (WebDAV), etc.) and/or a block memory interface (e.g., small computer system interface (SCSI), internet small computer system interface (iSCSI), etc.). The DSN interface module 76 and/or the network interface module 70 may function as one or more of the interfaces 30-33 of FIG. 1. Note that the IO device interface module 62 and/or the memory interface modules 66-76 may be collectively or individually referred to as IO ports.
  • FIG. 3 is a schematic block diagram of an example of dispersed storage error encoding of data. When a computing device 12 or 16 has data to store, it dispersed storage error encodes the data in accordance with a dispersed storage error encoding process based on dispersed storage error encoding parameters. The dispersed storage error encoding parameters include an encoding function (e.g., information dispersal algorithm, Reed-Solomon, Cauchy Reed-Solomon, systematic encoding, non-systematic encoding, on-line codes, etc.), a data segmenting protocol (e.g., data segment size, fixed, variable, etc.), and per data segment encoding values. The per data segment encoding values include a total, or pillar width, number (T) of encoded data slices per encoding of a data segment (i.e., in a set of encoded data slices); a decode threshold number (D) of encoded data slices of a set of encoded data slices that are needed to recover the data segment; a read threshold number (R) of encoded data slices to indicate a number of encoded data slices per set to be read from storage for decoding of the data segment; and/or a write threshold number (W) to indicate a number of encoded data slices per set that must be accurately stored before the encoded data segment is deemed to have been properly stored. The dispersed storage error encoding parameters may further include slicing information (e.g., the number of encoded data slices that will be created for each data segment) and/or slice security information (e.g., per encoded data slice encryption, compression, integrity checksum, etc.).
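  • As a hedged illustration only (the patent itself defines no code), the relationship among these per data segment encoding values can be sketched in Python; the class and field names below are hypothetical:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class SegmentEncodingValues:
          # Per data segment encoding values; names are illustrative, not from the patent.
          pillar_width: int       # T: total encoded data slices per set
          decode_threshold: int   # D: slices needed to recover the data segment
          read_threshold: int     # R: slices to read per set for decoding
          write_threshold: int    # W: slices that must be stored for a proper write

          def __post_init__(self):
              # The thresholds are only coherent when 0 < D <= R <= T and D <= W <= T.
              if not (0 < self.decode_threshold <= self.read_threshold <= self.pillar_width):
                  raise ValueError("require 0 < D <= R <= T")
              if not (self.decode_threshold <= self.write_threshold <= self.pillar_width):
                  raise ValueError("require D <= W <= T")

      # The values used in the example that follows: T=5, D=3, R=4, W=4.
      params = SegmentEncodingValues(pillar_width=5, decode_threshold=3,
                                     read_threshold=4, write_threshold=4)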
  • In the present example, Cauchy Reed-Solomon has been selected as the encoding function (a generic example is shown in FIG. 4 and a specific example is shown in FIG. 5); the data segmenting protocol is to divide the data object into fixed sized data segments; and the per data segment encoding values include: a pillar width of 5, a decode threshold of 3, a read threshold of 4, and a write threshold of 4. In accordance with the data segmenting protocol, the computing device 12 or 16 divides the data (e.g., a file (e.g., text, video, audio, etc.), a data object, or other data arrangement) into a plurality of fixed sized data segments (e.g., 1 through Y of a fixed size in the range of kilobytes to terabytes or more). The number of data segments created is dependent on the size of the data and the data segmenting protocol.
  • The computing device 12 or 16 then dispersed storage error encodes a data segment using the selected encoding function (e.g., Cauchy Reed-Solomon) to produce a set of encoded data slices. FIG. 4 illustrates a generic Cauchy Reed-Solomon encoding function, which includes an encoding matrix (EM), a data matrix (DM), and a coded matrix (CM). The size of the encoding matrix (EM) is dependent on the pillar width number (T) and the decode threshold number (D) of selected per data segment encoding values. To produce the data matrix (DM), the data segment is divided into a plurality of data blocks and the data blocks are arranged into D number of rows with Z data blocks per row. Note that Z is a function of the number of data blocks created from the data segment and the decode threshold number (D). The coded matrix is produced by matrix multiplying the data matrix by the encoding matrix.
  • FIG. 5 illustrates a specific example of Cauchy Reed-Solomon encoding with a pillar number (T) of five and decode threshold number of three. In this example, a first data segment is divided into twelve data blocks (D1-D12). The coded matrix includes five rows of coded data blocks, where the first row of X11-X14 corresponds to a first encoded data slice (EDS 1_1), the second row of X21-X24 corresponds to a second encoded data slice (EDS 2_1), the third row of X31-X34 corresponds to a third encoded data slice (EDS 3_1), the fourth row of X41-X44 corresponds to a fourth encoded data slice (EDS 4_1), and the fifth row of X51-X54 corresponds to a fifth encoded data slice (EDS 5_1). Note that the second number of the EDS designation corresponds to the data segment number.
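  • The matrix relationship of FIGS. 4-5 can be sketched as follows. Note that this is a loose integer-arithmetic stand-in: actual Cauchy Reed-Solomon encoding is performed over a finite field such as GF(2^8), and the encoding matrix values below are made up rather than a true Cauchy matrix; only the shapes and the EM x DM = CM relationship are illustrated.

      import numpy as np

      T, D = 5, 3                      # pillar width and decode threshold
      blocks = np.arange(1, 13)        # stand-ins for data blocks D1-D12
      Z = blocks.size // D             # data blocks per row (4 here)

      DM = blocks.reshape(D, Z)        # 3x4 data matrix (rows of data blocks)
      EM = np.array([[1, 1, 1],        # 5x3 encoding matrix (illustrative values,
                     [1, 2, 3],        # not a real Cauchy matrix)
                     [1, 3, 6],
                     [1, 4, 10],
                     [1, 5, 15]])
      CM = EM @ DM                     # 5x4 coded matrix
      for i, row in enumerate(CM, start=1):
          print(f"EDS {i}_1:", row)    # row i is encoded data slice EDS i_1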
  • Returning to the discussion of FIG. 3, the computing device also creates a slice name (SN) for each encoded data slice (EDS) in the set of encoded data slices. A typical format for a slice name 60 is shown in FIG. 6. As shown, the slice name (SN) 60 includes a pillar number of the encoded data slice (e.g., one of 1-T), a data segment number (e.g., one of 1-Y), a vault identifier (ID), a data object identifier (ID), and may further include revision level information of the encoded data slices. The slice name functions as, at least part of, a DSN address for the encoded data slice for storage and retrieval from the DSN memory 22.
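  • A minimal sketch of the slice name (SN) fields of FIG. 6 follows; the concrete field types are assumptions, since the patent does not fix them:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class SliceName:
          # Fields per FIG. 6; types and example values are assumed for illustration.
          pillar_number: int    # which of the 1-T slices in the set
          segment_number: int   # which of the 1-Y data segments
          vault_id: int         # vault identifier (ID)
          object_id: int        # data object identifier (ID)
          revision: int = 0     # optional revision level information

      # The slice name serves as at least part of the DSN address of the slice.
      sn = SliceName(pillar_number=1, segment_number=1, vault_id=42, object_id=7)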
  • As a result of encoding, the computing device 12 or 16 produces a plurality of sets of encoded data slices, which are provided with their respective slice names to the storage units for storage. As shown, the first set of encoded data slices includes EDS 1_1 through EDS 5_1 and the first set of slice names includes SN 1_1 through SN 5_1, while the last set of encoded data slices includes EDS 1_Y through EDS 5_Y and the last set of slice names includes SN 1_Y through SN 5_Y.
  • FIG. 7 is a schematic block diagram of an example of dispersed storage error decoding of a data object that was dispersed storage error encoded and stored in the example of FIG. 4. In this example, the computing device 12 or 16 retrieves from the storage units at least the decode threshold number of encoded data slices per data segment. As a specific example, the computing device retrieves a read threshold number of encoded data slices.
  • To recover a data segment from a decode threshold number of encoded data slices, the computing device uses a decoding function as shown in FIG. 8. As shown, the decoding function is essentially an inverse of the encoding function of FIG. 4. The coded matrix includes a decode threshold number of rows (e.g., three in this example), and the decoding matrix is an inversion of the encoding matrix that includes the corresponding rows of the coded matrix. For example, if the coded matrix includes rows 1, 2, and 4, the encoding matrix is reduced to rows 1, 2, and 4, and then inverted to produce the decoding matrix.
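  • Continuing the integer stand-in from the encoding sketch above (a real decoder inverts the reduced encoding matrix over a finite field rather than with floating-point arithmetic), recovery from rows 1, 2, and 4 might look like:

      import numpy as np

      EM = np.array([[1, 1, 1], [1, 2, 3], [1, 3, 6], [1, 4, 10], [1, 5, 15]])
      DM = np.arange(1, 13).reshape(3, 4)
      CM = EM @ DM                        # as produced in the encoding sketch

      received = [0, 1, 3]                # slices 1, 2, and 4 were retrieved
      reduced_EM = EM[received, :]        # keep only the corresponding EM rows
      decoding_matrix = np.linalg.inv(reduced_EM)
      recovered_DM = decoding_matrix @ CM[received, :]
      assert np.allclose(recovered_DM, DM)   # the data segment is recovered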
  • In many device designs, memory devices are mounted such that multiple memory devices occupy the same carrier, and power must be disconnected from a carrier before it can be removed. For example, some systems contain two-memory-device carriers in which both memory devices must be removed together. Each memory device location in the chassis may have an indicator that reports the health of that memory device, but these indicators often require power to function. The problem is that when power is disconnected to service a memory device, the indication of which memory device is unhealthy may no longer be available, creating much room for operator error. In one embodiment, the technology described herein implements a visibly persistent indicator, such as e-ink or a thrown switch, for each memory device, so that the health status of the memory device can be known even when no power is connected. This solution could be extended to other electrical component types that require a clear, persistent visual cue serving as a health indicator after power is disconnected.
  • FIG. 9 is a schematic block diagram of another embodiment of a dispersed storage network that includes a plurality of distributed storage and task (DST) processing units 1-D, the network 24 of FIG. 1, the distributed storage and task network (DSTN) managing unit 18 of FIG. 1, and a set of DST execution units 1-n. Each DST execution unit includes the processing module 50 of FIG. 2, a plurality of memories 1-M, and a plurality of visibly persistent indicators 1-M. Each memory may be implemented utilizing the DSN memory 22 of FIG. 1, and each memory is associated with a persistent indicator. Each persistent indicator may be implemented utilizing one or more of electronic ink or a latching mechanical device (e.g., a latching relay, a circuit breaker, etc.) such that a status indication is maintained with or without power. A subset of the memories may be associated with a common physical power bus (e.g., when the subset of memories is common to a physical array or gang). The DSN functions to indicate device status.
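  • Such a latching indicator might be modeled as below; this is a software stand-in for hardware (an e-ink cell or latching relay) whose state persists without power, and all names are hypothetical:

      from enum import Enum

      class IndicatorState(Enum):
          # Assumed state set; the patent names failed/operational indications.
          OPERATIONAL = "operational"
          FAILED = "failed"

      class LatchingIndicator:
          # Stand-in for an e-ink cell or latching relay: once driven, the
          # state persists with no further power (modeled as plain storage).
          def __init__(self):
              self.state = IndicatorState.OPERATIONAL

          def drive(self, state: IndicatorState):
              self.state = state   # in hardware, a one-shot drive pulse

      # One indicator per memory device 1-M sharing the carrier's power bus.
      indicators = {mem_id: LatchingIndicator() for mem_id in range(1, 5)}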
  • In an example of operation of indicating the device status, for each memory device of the plurality of memory devices, the processing module 50 obtains status information, where the plurality of memory devices share a common power connection. The status information includes one or more of a failed indication, an operational indication, a memory size level, a storage utilization level, a utilization rate, an average bandwidth utilization level, a storage error rate, a retrieval error rate, a number of failed memory locations, identifiers of failed memory locations, a memory identifier, a group memory identifier, a manufacturer identifier, a model number, a serial number, etc. The obtaining includes at least one of initiating a test, interpreting a test result, initiating a query, interpreting a query response, estimating based on one or more other memory devices of the plurality of memory devices, performing a lookup, and interpreting an error message.
  • Having obtained the status information, the processing module 50, for each memory device of the plurality of memory devices, updates an associated persistent indicator based on the obtained status information. The updating includes converting the status information to one or more driver electrical signals for the persistent indicator and activating the persistent indicator utilizing the one or more driver electrical signals. For example, the processing module 50 activates a memory failed indicator of the persistent indicator 2 when the processing module 50 determines that status information associated with the memory 2 indicates that the memory 2 has failed. The invention may optimize system repair: when a service technician removes the plurality of memories from the DST execution unit for servicing, the technician can accurately identify which memory devices have failed and which have not.
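  • The obtain-then-update flow might be sketched as follows, reusing the hypothetical LatchingIndicator from the previous listing; the status-dictionary format is likewise an assumption:

      def update_persistent_indicators(statuses, indicators):
          # For each memory device, convert its status information into a
          # driver signal and latch the associated persistent indicator.
          for mem_id, status in statuses.items():
              target = (IndicatorState.FAILED if status.get("failed")
                        else IndicatorState.OPERATIONAL)
              if indicators[mem_id].state is not target:
                  indicators[mem_id].drive(target)   # activate via driver signal

      # e.g., memory 2 reports a failure, as in the example above:
      update_persistent_indicators({1: {"failed": False}, 2: {"failed": True}},
                                   indicators)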
  • FIG. 9A is a flowchart illustrating an example of indicating device status. In particular, a method is presented for use in conjunction with one or more functions and features described in conjunction with FIGS. 1-2, 3-8, and also FIG. 9.
  • The method includes a step 902 where a processing module (e.g., of a distributed storage and task (DST) execution unit) identifies a group of memory devices that share a common power connection. The identifying includes one or more of interpreting system registry information, interpreting a power cycle test result, and receiving configuration information. The method continues at step 904 where, for each memory device of the group of memory devices, the processing module obtains status information. The obtaining includes one or more of initiating a test, interpreting a test result, initiating a query, interpreting a query response, estimating the status information based on one or more of status of other memory devices of a group of memory devices, performing a lookup, and interpreting an error message.
  • For each memory device of the group of memory devices, the method continues at step 906 where the processing module updates an associated persistent indicator based on the corresponding status information. For example, the processing module converts the status information to one or more driver electrical signals compatible with the persistent indicator and activates the persistent indicator utilizing the one or more driver electrical signals.
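  • Step 902's grouping might be sketched as below for the case where the identification relies on received configuration information; the configuration format is an assumption:

      from collections import defaultdict

      def identify_power_groups(config):
          # Group memory device ids by the power connection they are wired to.
          # config: iterable of (memory_id, power_connection_id) pairs.
          groups = defaultdict(list)
          for mem_id, power_id in config:
              groups[power_id].append(mem_id)
          return dict(groups)

      # Memories 1-2 share power bus "P1"; memories 3-4 share "P2".
      print(identify_power_groups([(1, "P1"), (2, "P1"), (3, "P2"), (4, "P2")]))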
  • FIG. 9B is a flowchart illustrating an example of indicating electrical component status. In particular, a method is presented for use in conjunction with one or more functions and features described in conjunction with FIGS. 1-2, 3-8, and also FIG. 9.
  • The method includes a step 908 where a processing module identifies a group of electrical components that share a common power connection. The identifying includes one or more of interpreting system registry information, interpreting a power cycle test result, and receiving configuration information. The method continues at step 910 where, for each electrical component of the group of electrical components, the processing module obtains status information. The obtaining includes one or more of initiating a test, interpreting a test result, initiating a query, interpreting a query response, estimating the status information based on one or more of status of other electrical components of a group of electrical components, performing a lookup, and interpreting an error message.
  • For each electrical component of the group of electrical components, the method continues at step 912 where the processing module updates an associated visibly persistent indicator based on the corresponding status information. For example, the processing module converts the status information to one or more driver electrical signals compatible with the persistent indicator and activates the persistent indicator utilizing the one or more driver electrical signals.
  • The method described above in conjunction with the processing module can alternatively be performed by other modules of the dispersed storage network or by other computing devices. In addition, at least one memory section (e.g., a non-transitory computer readable storage medium) that stores operational instructions can, when executed by one or more processing modules of one or more computing devices of the dispersed storage network (DSN), cause the one or more computing devices to perform any or all of the method steps described above.
  • It is noted that terminologies as may be used herein such as bit stream, stream, signal sequence, etc. (or their equivalents) have been used interchangeably to describe digital information whose content corresponds to any of a number of desired types (e.g., data, video, speech, audio, etc., any of which may generally be referred to as ‘data’).
  • As may be used herein, the terms “substantially” and “approximately” provide an industry-accepted tolerance for their corresponding terms and/or relativity between items. Such an industry-accepted tolerance ranges from less than one percent to fifty percent and corresponds to, but is not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, and/or thermal noise. Such relativity between items ranges from a difference of a few percent to magnitude differences. As may also be used herein, the term(s) “configured to”, “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for an example of indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to”. As may even further be used herein, the term “configured to”, “operable to”, “coupled to”, or “operably coupled to” indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more of its corresponding functions and may further include inferred coupling to one or more other items. As may still further be used herein, the term “associated with” includes direct and/or indirect coupling of separate items and/or one item being embedded within another item.
  • As may be used herein, the term “compares favorably” indicates that a comparison between two or more items, signals, etc., provides a desired relationship. For example, when the desired relationship is that signal 1 has a greater magnitude than signal 2, a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1. As may be used herein, the term “compares unfavorably” indicates that a comparison between two or more items, signals, etc., fails to provide the desired relationship.
  • As may also be used herein, the terms “processing module”, “processing circuit”, “processor”, and/or “processing unit” may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module, module, processing circuit, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, and/or processing unit. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that if the processing module, module, processing circuit, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). Further note that if the processing module, module, processing circuit, and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Still further note that, the memory element may store, and the processing module, module, processing circuit, and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures. Such a memory device or memory element can be included in an article of manufacture.
  • One or more embodiments have been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claims. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality.
  • To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claims. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof.
  • In addition, a flow diagram may include a “start” and/or “continue” indication. The “start” and “continue” indications reflect that the steps presented can optionally be incorporated in or otherwise used in conjunction with other routines. In this context, “start” indicates the beginning of the first step presented and may be preceded by other activities not specifically shown. Further, the “continue” indication reflects that the steps presented may be performed multiple times and/or may be succeeded by other activities not specifically shown. Further, while a flow diagram indicates a particular ordering of steps, other orderings are likewise possible provided that the principles of causality are maintained.
  • The one or more embodiments are used herein to illustrate one or more aspects, one or more features, one or more concepts, and/or one or more examples. A physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein. Further, from figure to figure, the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones.
  • Unless specifically stated to the contrary, signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential. For instance, if a signal path is shown as a single-ended path, it also represents a differential signal path. Similarly, if a signal path is shown as a differential path, it also represents a single-ended signal path. While one or more particular architectures are described herein, other architectures can likewise be implemented that use one or more data buses not expressly shown, direct connectivity between elements, and/or indirect coupling between other elements as recognized by one of average skill in the art.
  • The term “module” is used in the description of one or more of the embodiments. A module implements one or more functions via a device such as a processor or other processing device or other hardware that may include or operate in association with a memory that stores operational instructions. A module may operate independently and/or in conjunction with software and/or firmware. As also used herein, a module may contain one or more sub-modules, each of which may be one or more modules.
  • As may further be used herein, a computer readable memory includes one or more memory elements. A memory element may be a separate memory device, multiple memory devices, or a set of memory locations within a memory device. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. The memory device may be in the form of solid-state memory, hard drive memory, cloud memory, a thumb drive, server memory, computing device memory, and/or another physical medium for storing digital information.
  • While particular combinations of various functions and features of the one or more embodiments have been expressly described herein, other combinations of these features and functions are likewise possible. The present disclosure is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.

Claims (20)

What is claimed is:
1. A method for execution by one or more processing modules of one or more computing devices, the method comprises:
identifying a device with a power connection;
obtaining status information for the device; and
updating an associated persistent indicator based on the obtained status information for the device.
2. The method of claim 1, wherein the obtaining includes any of: initiating a test, interpreting a test result, initiating a query, interpreting a query response, estimating based on one or more other devices in a group of the devices, a lookup, or interpreting an error message.
3. The method of claim 1, wherein the status information includes any of: failed, operational, memory size, storage utilization level, utilization rate, average bandwidth utilization level, storage error rate, retrieval error rate, number of failed memory locations, identifiers of failed memory locations, memory identifier, memory group identifier, manufacturer identifier, model number, or serial number.
4. The method of claim 1, wherein the identifying includes any of: interpreting system registry information, interpreting a power cycle test result, or receiving configuration information.
5. The method of claim 1, wherein the updating includes: converting the status information into one or more driver electrical signals for the associated persistent indicator and activating the associated persistent indicator utilizing the one or more driver electrical signals.
6. The method of claim 1, wherein the device includes a plurality of the devices comprising dispersed storage network (DSN) memories arranged in arrays or gangs, having a common status for power when ganged on a common physical carrier, but a different operational status.
7. The method of claim 6, wherein, when each of the DSN memories is removed and powered off, a status to replace a failed unit remains visible.
8. The method of claim 1, wherein the associated persistent indicator is any of: e-ink, a latching relay, or a circuit breaker.
9. A computing device comprises:
an interface;
a local memory; and
a processing module operably coupled to the interface and the local memory, wherein the processing module functions to:
identify one or more devices that share a common power connection;
obtain status information for each of the one or more devices; and
update an associated persistent indicator based on the obtained status information for each device of the one or more devices.
10. The computing device of claim 9, wherein the obtaining includes any of: initiating a test, interpreting a test result, initiating a query, interpreting a query response, estimating based on one or more other devices of the one or more devices, a lookup, or interpreting an error message.
11. The computing device of claim 9, wherein the status information includes any of:
failed, operational, memory size, storage utilization level, utilization rate, average bandwidth utilization level, storage error rate, retrieval error rate, number of failed memory locations, identifiers of failed memory locations, memory identifier, memory group identifier, manufacturer identifier, model number, or serial number.
12. The computing device of claim 9, wherein the identifying includes any of: interpreting system registry information, interpreting a power cycle test result, or receiving configuration information.
13. The computing device of claim 9, wherein the updating includes: converting the status information into one or more driver electrical signals for the associated persistent indicator and activating the associated persistent indicator utilizing the one or more driver electrical signals.
14. The computing device of claim 9, wherein the devices comprise memory devices and the memory devices are arranged in arrays or gangs, having a common status for power when ganged on a common physical carrier, but with a different operational status.
15. The computing device of claim 14, wherein, when each of the memory devices is removed and powered off, a status to replace a failed unit remains visible.
16. The computing device of claim 9, wherein the associated persistent indicator is any of: e-ink, a latching relay, or a circuit breaker.
17. A method for execution by one or more processing modules of one or more computing devices, the method comprises:
identifying a group of electrical components that share a common power connection;
obtaining status information for each electrical component of the group of electrical components; and
updating an associated visibly persistent indicator based on the obtained status information for each electrical component of the group of electrical components.
18. The method of claim 17, wherein the associated visibly persistent indicator is any of: e-ink, a latching relay, or a circuit breaker.
19. The method of claim 17, wherein the group of electrical components includes dispersed storage network (DSN) memories arranged in arrays or gangs, having a common status for power when ganged on a common physical carrier, but a different operational status.
20. The method of claim 19, wherein, when each of the DSN memories is removed and powered off, a status to replace a failed unit remains visible.
US15/401,377 2016-02-29 2017-01-09 Persistent device fault indicators Abandoned US20170249228A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/401,377 US20170249228A1 (en) 2016-02-29 2017-01-09 Persistent device fault indicators

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662301214P 2016-02-29 2016-02-29
US15/401,377 US20170249228A1 (en) 2016-02-29 2017-01-09 Persistent device fault indicators

Publications (1)

Publication Number Publication Date
US20170249228A1 true US20170249228A1 (en) 2017-08-31

Family

ID=59678482

Family Applications (12)

Application Number Title Priority Date Filing Date
US15/398,540 Active US10089178B2 (en) 2016-02-29 2017-01-04 Developing an accurate dispersed storage network memory performance model through training
US15/401,377 Abandoned US20170249228A1 (en) 2016-02-29 2017-01-09 Persistent device fault indicators
US15/402,378 Active 2037-05-21 US10248505B2 (en) 2016-02-29 2017-01-10 Issue escalation by management unit
US15/403,425 Active 2037-11-09 US10476849B2 (en) 2016-02-29 2017-01-11 Monitoring and alerting for improper memory device replacement
US15/404,560 Abandoned US20170249212A1 (en) 2016-02-29 2017-01-12 Maximizing redundant information in a mirrored vault
US15/410,329 Expired - Fee Related US10326740B2 (en) 2016-02-29 2017-01-19 Efficient secret-key encrypted secure slice
US15/425,553 Expired - Fee Related US10120757B2 (en) 2016-02-29 2017-02-06 Prioritizing dispersed storage network memory operations during a critical juncture
US15/426,380 Active 2037-11-26 US10678622B2 (en) 2016-02-29 2017-02-07 Optimizing and scheduling maintenance tasks in a dispersed storage network
US15/439,092 Active 2038-03-19 US10824495B2 (en) 2016-02-29 2017-02-22 Cryptographic key storage in a dispersed storage network
US16/019,505 Active 2037-03-02 US10673828B2 (en) 2016-02-29 2018-06-26 Developing an accurate dispersed storage network memory performance model through training
US16/857,719 Active US11204822B1 (en) 2016-02-29 2020-04-24 Distributed storage network (DSN) configuration adaptation based on estimated future loading
US17/538,771 Active US11704184B2 (en) 2016-02-29 2021-11-30 Storage network with enhanced data access performance

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/398,540 Active US10089178B2 (en) 2016-02-29 2017-01-04 Developing an accurate dispersed storage network memory performance model through training

Family Applications After (10)

Application Number Title Priority Date Filing Date
US15/402,378 Active 2037-05-21 US10248505B2 (en) 2016-02-29 2017-01-10 Issue escalation by management unit
US15/403,425 Active 2037-11-09 US10476849B2 (en) 2016-02-29 2017-01-11 Monitoring and alerting for improper memory device replacement
US15/404,560 Abandoned US20170249212A1 (en) 2016-02-29 2017-01-12 Maximizing redundant information in a mirrored vault
US15/410,329 Expired - Fee Related US10326740B2 (en) 2016-02-29 2017-01-19 Efficient secret-key encrypted secure slice
US15/425,553 Expired - Fee Related US10120757B2 (en) 2016-02-29 2017-02-06 Prioritizing dispersed storage network memory operations during a critical juncture
US15/426,380 Active 2037-11-26 US10678622B2 (en) 2016-02-29 2017-02-07 Optimizing and scheduling maintenance tasks in a dispersed storage network
US15/439,092 Active 2038-03-19 US10824495B2 (en) 2016-02-29 2017-02-22 Cryptographic key storage in a dispersed storage network
US16/019,505 Active 2037-03-02 US10673828B2 (en) 2016-02-29 2018-06-26 Developing an accurate dispersed storage network memory performance model through training
US16/857,719 Active US11204822B1 (en) 2016-02-29 2020-04-24 Distributed storage network (DSN) configuration adaptation based on estimated future loading
US17/538,771 Active US11704184B2 (en) 2016-02-29 2021-11-30 Storage network with enhanced data access performance

Country Status (4)

Country Link
US (12) US10089178B2 (en)
CN (1) CN108701197A (en)
DE (1) DE112017000220T5 (en)
WO (1) WO2017149410A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319545A (en) * 2018-02-01 2018-07-24 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN110493247A (en) * 2019-08-29 2019-11-22 China Southern Power Grid Research Institute Co., Ltd. Distribution terminal communication verification method, system, device, and computer medium

Families Citing this family (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11991280B2 (en) 2009-04-20 2024-05-21 Pure Storage, Inc. Randomized transforms in a dispersed data storage system
US10447474B2 (en) * 2009-04-20 2019-10-15 Pure Storage, Inc. Dispersed data storage system data decoding and decryption
CN107533623A (en) * 2015-09-14 2018-01-02 Hewlett Packard Enterprise Development LP Secure memory system
US10277490B2 (en) * 2016-07-19 2019-04-30 International Business Machines Corporation Monitoring inter-site bandwidth for rebuilding
EP3520038A4 (en) * 2016-09-28 2020-06-03 D5A1 Llc Learning coach for machine learning system
US10262751B2 (en) * 2016-09-29 2019-04-16 Intel Corporation Multi-dimensional optimization of electrical parameters for memory training
US10481977B2 (en) * 2016-10-27 2019-11-19 International Business Machines Corporation Dispersed storage of error encoded data objects having multiple resolutions
US11087213B2 (en) * 2017-02-10 2021-08-10 Synaptics Incorporated Binary and multi-class classification systems and methods using one spike connectionist temporal classification
US11080600B2 (en) * 2017-02-10 2021-08-03 Synaptics Incorporated Recurrent neural network based acoustic event classification using complement rule
US10762417B2 (en) * 2017-02-10 2020-09-01 Synaptics Incorporated Efficient connectionist temporal classification for binary classification
US10762891B2 (en) * 2017-02-10 2020-09-01 Synaptics Incorporated Binary and multi-class classification systems and methods using connectionist temporal classification
US11853884B2 (en) * 2017-02-10 2023-12-26 Synaptics Incorporated Many or one detection classification systems and methods
US11100932B2 (en) * 2017-02-10 2021-08-24 Synaptics Incorporated Robust start-end point detection algorithm using neural network
CN108427615B (en) * 2017-02-13 2020-11-27 Tencent Technology (Shenzhen) Company Limited Message monitoring method and device
US10762427B2 (en) * 2017-03-01 2020-09-01 Synaptics Incorporated Connectionist temporal classification using segmented labeled sequence data
US10437691B1 (en) * 2017-03-29 2019-10-08 Veritas Technologies Llc Systems and methods for caching in an erasure-coded system
JP6959155B2 (en) * 2017-05-15 2021-11-02 Panasonic Intellectual Property Corporation of America Verification method, verification device and program
US10379979B2 (en) * 2017-05-31 2019-08-13 Western Digital Technologies, Inc. Power fail handling using stop commands
US10409667B2 (en) * 2017-06-15 2019-09-10 Salesforce.Com, Inc. Error assignment for computer programs
US11157194B2 (en) * 2018-01-12 2021-10-26 International Business Machines Corporation Automated predictive tiered storage system
US11321612B2 (en) 2018-01-30 2022-05-03 D5Ai Llc Self-organizing partially ordered networks and soft-tying learned parameters, such as connection weights
CN108459922A (en) * 2018-03-12 2018-08-28 Beijing Institute of Technology Discontinuous computation method for a parallel program in detonation numerical simulation
CN108388748A (en) * 2018-03-12 2018-08-10 Beijing Institute of Technology Discontinuous computation method for a serial program in detonation numerical simulation
US11412041B2 (en) 2018-06-25 2022-08-09 International Business Machines Corporation Automatic intervention of global coordinator
CN110659069B (en) * 2018-06-28 2022-08-19 Xilinx, Inc. Instruction scheduling method for performing neural network computation and corresponding computing system
CN109034413A (en) * 2018-07-11 2018-12-18 Guangdong Renli Intelligent Engineering Co., Ltd. Fault prediction method and system for intelligent manufacturing equipment based on a neural network model
US10606479B2 (en) 2018-08-07 2020-03-31 International Business Machines Corporation Catastrophic data loss prevention by global coordinator
CN109344036A (en) * 2018-10-08 2019-02-15 Zhengzhou Yunhai Information Technology Co., Ltd. Alarm display method and system applied to a storage system
CN111124793A (en) * 2018-11-01 2020-05-08 China Mobile Group Zhejiang Co., Ltd. Method and system for detecting performance anomalies of a disk array controller
KR20200053886A (en) * 2018-11-09 2020-05-19 삼성전자주식회사 Neural processing unit, neural processing system, and application system
US10970149B2 (en) * 2019-01-03 2021-04-06 International Business Machines Corporation Automatic node hardware configuration in a distributed storage system
US11023307B2 (en) 2019-01-03 2021-06-01 International Business Machines Corporation Automatic remediation of distributed storage system node components through visualization
CN109739213A (en) * 2019-01-07 2019-05-10 Dongguan Baihong Industrial Co., Ltd. Failure prediction system and prediction method
US11275672B2 (en) * 2019-01-29 2022-03-15 EMC IP Holding Company LLC Run-time determination of application performance with low overhead impact on system performance
US10880377B2 (en) * 2019-04-05 2020-12-29 Netapp, Inc. Methods and systems for prioritizing events associated with resources of a networked storage system
US20200327025A1 (en) * 2019-04-10 2020-10-15 Alibaba Group Holding Limited Methods, systems, and non-transitory computer readable media for operating a data storage system
CN110058820B (en) * 2019-04-23 2022-05-17 Wuhan Huidisen Information Technology Co., Ltd. Method and device for securely writing, deleting, and reading data based on a solid-state disk array
CN110162923B (en) * 2019-06-03 2020-04-03 Beijing Institute of Spacecraft Environment Engineering Flexible cable process digital prototype construction system and method for spacecraft assembly
FI129028B (en) * 2019-06-19 2021-05-31 Elisa Oyj Maintenance priority in communication network
US11205319B2 (en) 2019-06-21 2021-12-21 Sg Gaming, Inc. System and method for synthetic image training of a neural network associated with a casino table game monitoring system
US10691528B1 (en) * 2019-07-23 2020-06-23 Core Scientific, Inc. Automatic repair of computing devices in a data center
SG10201906806XA (en) * 2019-07-23 2021-02-25 Mastercard International Inc Methods and computing devices for auto-submission of user authentication credential
WO2021040764A1 (en) * 2019-08-23 2021-03-04 Landmark Graphics Corporation Ai/ml based drilling and production platform
GB2600574B (en) * 2019-08-23 2023-05-31 Landmark Graphics Corp AI/ML based drilling and production platform
JP2021118370A (en) * 2020-01-22 2021-08-10 Kioxia Corporation Memory system, information processing device, and information processing system
JP7428016B2 (en) * 2020-03-05 2024-02-06 Kyocera Document Solutions Inc. File sending device
CN112053726B (en) * 2020-09-09 2022-04-12 Harbin Institute of Technology Method for recovering mistakenly erased flash memory data based on Er-state threshold voltage distribution
EP4226573A1 (en) 2020-10-05 2023-08-16 Redcom Laboratories, Inc. Zkmfa: zero-knowledge based multi-factor authentication system
CN112468494B (en) * 2020-11-26 2022-05-17 Hubei Aerospace Information Technology Co., Ltd. Method and device for internet data transmission between intranet and extranet
CN112764677B (en) * 2021-01-14 2022-12-23 Hangzhou Dianzi University Method for enhancing data migration security in cloud storage
EP4092963B1 (en) * 2021-05-20 2024-05-08 Ovh Method and system for datacenter network device maintenance
US11722146B1 (en) * 2022-01-21 2023-08-08 Nxp B.V. Correction of sigma-delta analog-to-digital converters (ADCs) using neural networks
CN114785484B (en) * 2022-04-06 2023-05-09 Chongqing Kuilin Information Technology Co., Ltd. Method and system for secure big data transmission

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6434715B1 (en) * 1999-06-14 2002-08-13 General Electric Company Method of detecting systemic fault conditions in an intelligent electronic device
US20110219175A1 (en) * 2004-08-27 2011-09-08 Lexar Media, Inc. Storage capacity status
US8190588B1 (en) * 2005-09-19 2012-05-29 Amazon Technologies, Inc. Providing a distributed transaction information storage service
US20130134891A1 (en) * 2011-07-26 2013-05-30 Hunter Industries, Inc. Systems and methods for providing power and data to lighting devices
US20140101298A1 (en) * 2012-10-05 2014-04-10 Microsoft Corporation Service level agreements for a configurable distributed storage system
US20140108815A9 (en) * 2010-08-25 2014-04-17 Cleversafe, Inc. Securely rebuilding an encoded data slice
US20150308074A1 (en) * 2014-04-24 2015-10-29 Topcon Positioning Systems, Inc. Semi-Automatic Control of a Joystick for Dozer Blade Control

Family Cites Families (131)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4092732A (en) 1977-05-31 1978-05-30 International Business Machines Corporation System for recovering data stored in failed memory unit
US5485474A (en) 1988-02-25 1996-01-16 The President And Fellows Of Harvard College Scheme for information dispersal and reconstruction
US5454101A (en) 1992-09-15 1995-09-26 Universal Firmware Industries, Ltd. Data storage system with set lists which contain elements associated with parents for defining a logical hierarchy and general record pointers identifying specific data sets
US5987622A (en) 1993-12-10 1999-11-16 Tm Patents, Lp Parallel computer system including parallel storage subsystem including facility for correction of data in the event of failure of a storage device in parallel storage subsystem
US6175571B1 (en) 1994-07-22 2001-01-16 Network Peripherals, Inc. Distributed memory switching hub
JP3180940B2 (en) * 1994-11-28 2001-07-03 Kyocera Mita Corporation Image forming device maintenance management device
US5848230A (en) 1995-05-25 1998-12-08 Tandem Computers Incorporated Continuously available computer memory systems
US5774643A (en) 1995-10-13 1998-06-30 Digital Equipment Corporation Enhanced raid write hole protection and recovery
US5809285A (en) 1995-12-21 1998-09-15 Compaq Computer Corporation Computer system having a virtual drive array controller
US6012159A (en) 1996-01-17 2000-01-04 Kencast, Inc. Method and system for error-free data transfer
US5802364A (en) 1996-04-15 1998-09-01 Sun Microsystems, Inc. Metadevice driver rename/exchange technique for a computer system incorporating a plurality of independent device drivers
US5890156A (en) 1996-05-02 1999-03-30 Alcatel Usa, Inc. Distributed redundant database
US6058454A (en) 1997-06-09 2000-05-02 International Business Machines Corporation Method and system for automatically configuring redundant arrays of disk memory devices
US6088330A (en) 1997-09-09 2000-07-11 Bruck; Joshua Reliable array of distributed computing nodes
US5991414A (en) 1997-09-12 1999-11-23 International Business Machines Corporation Method and apparatus for the secure distributed storage and retrieval of information
US6272658B1 (en) 1997-10-27 2001-08-07 Kencast, Inc. Method and system for reliable broadcasting of data files and streams
JPH11161505A (en) 1997-12-01 1999-06-18 Matsushita Electric Ind Co Ltd Media send-out device
JPH11167443A (en) 1997-12-02 1999-06-22 Casio Comput Co Ltd Interface device
US6374336B1 (en) 1997-12-24 2002-04-16 Avid Technology, Inc. Computer system and process for transferring multiple high bandwidth streams of data between multiple storage units and multiple applications in a scalable and reliable manner
US6415373B1 (en) 1997-12-24 2002-07-02 Avid Technology, Inc. Computer system and process for transferring multiple high bandwidth streams of data between multiple storage units and multiple applications in a scalable and reliable manner
CA2341014A1 (en) 1998-08-19 2000-03-02 Alexander Roger Deas A system and method for defining transforms of memory device addresses
US6356949B1 (en) 1999-01-29 2002-03-12 Intermec Ip Corp. Automatic data collection device that receives data output instruction from data consumer
US6578144B1 (en) 1999-03-23 2003-06-10 International Business Machines Corporation Secure hash-and-sign signatures
US6609223B1 (en) 1999-04-06 2003-08-19 Kencast, Inc. Method for packet-level fec encoding, in which on a source packet-by-source packet basis, the error correction contributions of a source packet to a plurality of wildcard packets are computed, and the source packet is transmitted thereafter
US6671824B1 (en) * 1999-04-19 2003-12-30 Lakefield Technologies Group Cable network repair control system
US6571282B1 (en) 1999-08-31 2003-05-27 Accenture Llp Block-based communication in a communication services patterns environment
US6516425B1 (en) * 1999-10-29 2003-02-04 Hewlett-Packard Co. Raid rebuild using most vulnerable data redundancy scheme first
US6826711B2 (en) 2000-02-18 2004-11-30 Avamar Technologies, Inc. System and method for data protection with multidimensional parity
US6718361B1 (en) 2000-04-07 2004-04-06 Network Appliance Inc. Method and apparatus for reliable and scalable distribution of data files in distributed networks
US20160078695A1 (en) * 2000-05-01 2016-03-17 General Electric Company Method and system for managing a fleet of remote assets and/or ascertaining a repair for an asset
US6496814B1 (en) * 2000-07-19 2002-12-17 International Business Machines Corporation Method and system for integrating spatial analysis, and scheduling to efficiently schedule and monitor infrastructure maintenance
ATE381191T1 (en) 2000-10-26 2007-12-15 Prismedia Networks Inc METHOD AND SYSTEM FOR MANAGING DISTRIBUTED CONTENT AND CORRESPONDING METADATA
US8176563B2 (en) 2000-11-13 2012-05-08 DigitalDoors, Inc. Data security system and method with editor
US7146644B2 (en) 2000-11-13 2006-12-05 Digital Doors, Inc. Data security system and method responsive to electronic attacks
US7103915B2 (en) 2000-11-13 2006-09-05 Digital Doors, Inc. Data security system and method
US7140044B2 (en) 2000-11-13 2006-11-21 Digital Doors, Inc. Data security system and method for separation of user communities
GB2369206B (en) 2000-11-18 2004-11-03 Ibm Method for rebuilding meta-data in a data storage system and a data storage system
US6785783B2 (en) 2000-11-30 2004-08-31 International Business Machines Corporation NUMA system with redundant main memory architecture
US7080101B1 (en) 2000-12-01 2006-07-18 Ncr Corp. Method and apparatus for partitioning data for storage in a database
US20020080888A1 (en) 2000-12-22 2002-06-27 Li Shu Message splitting and spatially diversified message routing for increasing transmission assurance and data security over distributed networks
US6857059B2 (en) 2001-01-11 2005-02-15 Yottayotta, Inc. Storage virtualization system and methods
US6775792B2 (en) 2001-01-29 2004-08-10 Snap Appliance, Inc. Discrete mapping of parity blocks
US20030037261A1 (en) 2001-03-26 2003-02-20 Ilumin Corporation Secured content delivery system and method
US6879596B1 (en) 2001-04-11 2005-04-12 Applied Micro Circuits Corporation System and method for systolic array sorting of information segments
US7024609B2 (en) 2001-04-20 2006-04-04 Kencast, Inc. System for protecting the transmission of live data streams, and upon reception, for reconstructing the live data streams and recording them into files
GB2377049A (en) 2001-06-30 2002-12-31 Hewlett Packard Co Billing for utilisation of a data storage array
US6944785B2 (en) 2001-07-23 2005-09-13 Network Appliance, Inc. High-availability cluster virtual server system
US7636724B2 (en) 2001-08-31 2009-12-22 Peerify Technologies LLC Data storage system and method by shredding and deshredding
US20050021359A1 (en) * 2001-11-02 2005-01-27 Mckinney Jerry L. Monitoring system and method
US7024451B2 (en) 2001-11-05 2006-04-04 Hewlett-Packard Development Company, L.P. System and method for maintaining consistent independent server-side state among collaborating servers
US7003688B1 (en) 2001-11-15 2006-02-21 Xiotech Corporation System and method for a reserved memory area shared by all redundant storage controllers
US7171493B2 (en) 2001-12-19 2007-01-30 The Charles Stark Draper Laboratory Camouflage of network traffic to resist attack
EP1547252A4 (en) 2002-07-29 2011-04-20 Robert Halford Multi-dimensional data protection and mirroring method for micro level data
US7051155B2 (en) 2002-08-05 2006-05-23 Sun Microsystems, Inc. Method and system for striping data to accommodate integrity metadata
US20040122917A1 (en) 2002-12-18 2004-06-24 Menon Jaishankar Moothedath Distributed storage system for data-sharing among client computers running different operating system types
JP2006526204A (en) 2003-03-13 2006-11-16 DRM Technologies, LLC Secure streaming container
US7185144B2 (en) 2003-11-24 2007-02-27 Network Appliance, Inc. Semi-static distribution technique
GB0308264D0 (en) 2003-04-10 2003-05-14 Ibm Recovery from failures within data processing systems
GB0308262D0 (en) 2003-04-10 2003-05-14 Ibm Recovery from failures within data processing systems
US7415115B2 (en) 2003-05-14 2008-08-19 Broadcom Corporation Method and system for disaster recovery of data from a storage device
EP1668486A2 (en) 2003-08-14 2006-06-14 Compellent Technologies Virtual disk drive system and method
US7373559B2 (en) * 2003-09-11 2008-05-13 Copan Systems, Inc. Method and system for proactive drive replacement for high availability storage systems
US7899059B2 (en) 2003-11-12 2011-03-01 Agere Systems Inc. Media delivery using quality of service differentiation within a media stream
US8332483B2 (en) 2003-12-15 2012-12-11 International Business Machines Corporation Apparatus, system, and method for autonomic control of grid system resources
US7206899B2 (en) 2003-12-29 2007-04-17 Intel Corporation Method, system, and program for managing data transfer and construction
US7222133B1 (en) 2004-02-05 2007-05-22 Unisys Corporation Method for reducing database recovery time
US7240236B2 (en) 2004-03-23 2007-07-03 Archivas, Inc. Fixed content distributed data storage using permutation ring encoding
US7231578B2 (en) 2004-04-02 2007-06-12 Hitachi Global Storage Technologies Netherlands B.V. Techniques for detecting and correcting errors using multiple interleave erasure pointers
JP2005326935A (en) * 2004-05-12 2005-11-24 Hitachi Ltd Management server for computer system equipped with virtualization storage and failure preventing/restoring method
JP4446839B2 (en) 2004-08-30 2010-04-07 Hitachi, Ltd. Storage device and storage management device
JP2006107080A (en) * 2004-10-05 2006-04-20 Hitachi Ltd Storage device system
US7680771B2 (en) 2004-12-20 2010-03-16 International Business Machines Corporation Apparatus, system, and method for database provisioning
US7386758B2 (en) 2005-01-13 2008-06-10 Hitachi, Ltd. Method and apparatus for reconstructing data in object-based storage arrays
US7305579B2 (en) * 2005-03-22 2007-12-04 Xiotech Corporation Method, apparatus and program storage device for providing intelligent rebuild order selection
US7672930B2 (en) 2005-04-05 2010-03-02 Wal-Mart Stores, Inc. System and methods for facilitating a linear grid database with data organization by dimension
US7574623B1 (en) * 2005-04-29 2009-08-11 Network Appliance, Inc. Method and system for rapidly recovering data from a “sick” disk in a RAID disk group
US7546427B2 (en) 2005-09-30 2009-06-09 Cleversafe, Inc. System for rebuilding dispersed data
US7574570B2 (en) 2005-09-30 2009-08-11 Cleversafe Inc Billing system for information dispersal system
US7953937B2 (en) 2005-09-30 2011-05-31 Cleversafe, Inc. Systems, methods, and apparatus for subdividing data for storage in a dispersed data storage grid
US7574579B2 (en) 2005-09-30 2009-08-11 Cleversafe, Inc. Metadata management system for an information dispersed storage system
US8285878B2 (en) 2007-10-09 2012-10-09 Cleversafe, Inc. Block based access to a dispersed data storage network
US7904475B2 (en) 2007-10-09 2011-03-08 Cleversafe, Inc. Virtualized data storage vaults on a dispersed data storage network
US8171101B2 (en) 2005-09-30 2012-05-01 Cleversafe, Inc. Smart access to a dispersed data storage network
EP1798934A1 (en) * 2005-12-13 2007-06-20 Deutsche Thomson-Brandt Gmbh Method and apparatus for organizing nodes in a network
US20070214285A1 (en) 2006-03-08 2007-09-13 Omneon Video Networks Gateway server
US7386827B1 (en) * 2006-06-08 2008-06-10 Xilinx, Inc. Building a simulation environment for a design block
JP2008103936A (en) * 2006-10-18 2008-05-01 Toshiba Corp Secret information management device, and secret information management system
US9697171B2 (en) * 2007-10-09 2017-07-04 International Business Machines Corporation Multi-writer revision synchronization in a dispersed storage network
US9084937B2 (en) * 2008-11-18 2015-07-21 Gtech Canada Ulc Faults and performance issue prediction
US8260750B1 (en) * 2009-03-16 2012-09-04 Quest Software, Inc. Intelligent backup escalation system
US10104045B2 (en) 2009-04-20 2018-10-16 International Business Machines Corporation Verifying data security in a dispersed storage network
US9256560B2 (en) * 2009-07-29 2016-02-09 Solarflare Communications, Inc. Controller integration
US9661356B2 (en) * 2009-10-29 2017-05-23 International Business Machines Corporation Distribution of unique copies of broadcast data utilizing fault-tolerant retrieval from dispersed storage
US8458233B2 (en) * 2009-11-25 2013-06-04 Cleversafe, Inc. Data de-duplication in a dispersed storage network utilizing data characterization
US9152489B2 (en) * 2009-12-29 2015-10-06 Cleversafe, Inc. Revision synchronization of a dispersed storage network
US8990585B2 (en) * 2009-12-29 2015-03-24 Cleversafe, Inc. Time based dispersed storage access
US8959366B2 (en) * 2010-01-28 2015-02-17 Cleversafe, Inc. De-sequencing encoded data slices
US8954667B2 (en) * 2010-01-28 2015-02-10 Cleversafe, Inc. Data migration in a dispersed storage network
US9898373B2 (en) * 2010-04-26 2018-02-20 International Business Machines Corporation Prioritizing rebuilding of stored data in a dispersed storage network
US10447767B2 (en) * 2010-04-26 2019-10-15 Pure Storage, Inc. Resolving a performance issue within a dispersed storage network
US9092386B2 (en) * 2010-04-26 2015-07-28 Cleversafe, Inc. Indicating an error within a dispersed storage network
US8959597B2 (en) * 2010-05-19 2015-02-17 Cleversafe, Inc. Entity registration in multiple dispersed storage networks
US9311615B2 (en) * 2010-11-24 2016-04-12 International Business Machines Corporation Infrastructure asset management
AU2012206295B2 (en) * 2011-01-10 2016-07-07 Storone Ltd. Large scale storage system
US10042709B2 (en) * 2011-06-06 2018-08-07 International Business Machines Corporation Rebuild prioritization during a plurality of concurrent data object write operations
US9135098B2 (en) * 2011-07-27 2015-09-15 Cleversafe, Inc. Modifying dispersed storage network event records
US8549518B1 (en) * 2011-08-10 2013-10-01 Nutanix, Inc. Method and system for implementing a maintenance service for managing I/O and storage for virtualization environment
US9274864B2 (en) * 2011-10-04 2016-03-01 International Business Machines Corporation Accessing large amounts of data in a dispersed storage network
US8898542B2 (en) * 2011-12-12 2014-11-25 Cleversafe, Inc. Executing partial tasks in a distributed storage and task network
US9146810B2 (en) * 2012-01-31 2015-09-29 Cleversafe, Inc. Identifying a potentially compromised encoded data slice
US8719320B1 (en) * 2012-03-29 2014-05-06 Amazon Technologies, Inc. Server-side, variable drive health determination
US9380032B2 (en) * 2012-04-25 2016-06-28 International Business Machines Corporation Encrypting data for storage in a dispersed storage network
US9164841B2 (en) * 2012-06-05 2015-10-20 Cleversafe, Inc. Resolution of a storage error in a dispersed storage network
US9761229B2 (en) * 2012-07-20 2017-09-12 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for audio object clustering
US9811533B2 (en) * 2012-12-05 2017-11-07 International Business Machines Corporation Accessing distributed computing functions in a distributed computing system
US10055441B2 (en) * 2013-02-05 2018-08-21 International Business Machines Corporation Updating shared group information in a dispersed storage network
US9626125B2 (en) * 2013-07-31 2017-04-18 International Business Machines Corporation Accounting for data that needs to be rebuilt or deleted
US9720758B2 (en) * 2013-09-11 2017-08-01 Dell Products, Lp Diagnostic analysis tool for disk storage engineering and technical support
US9264494B2 (en) * 2013-10-21 2016-02-16 International Business Machines Corporation Automated data recovery from remote data object replicas
US9900316B2 (en) * 2013-12-04 2018-02-20 International Business Machines Corporation Accessing storage units of a dispersed storage network
US9075773B1 (en) * 2014-05-07 2015-07-07 Igneous Systems, Inc. Prioritized repair of data storage failures
US20150356305A1 (en) * 2014-06-05 2015-12-10 Cleversafe, Inc. Secure data access in a dispersed storage network
US20160028419A1 (en) * 2014-07-22 2016-01-28 Lsi Corporation Systems and Methods for Rank Independent Cyclic Data Encoding
US10120739B2 (en) * 2014-12-02 2018-11-06 International Business Machines Corporation Prioritized data rebuilding in a dispersed storage network
US10078472B2 (en) * 2015-02-27 2018-09-18 International Business Machines Corporation Rebuilding encoded data slices in a dispersed storage network
US10079887B2 (en) * 2015-03-31 2018-09-18 International Business Machines Corporation Expanding storage capacity of a set of storage units in a distributed storage network
US10601658B2 (en) * 2015-04-08 2020-03-24 Cisco Technology, Inc. Maintenance of consumable physical components of a network
US10067998B2 (en) * 2015-04-30 2018-09-04 International Business Machines Corporation Distributed sync list
US10528540B2 (en) * 2015-05-11 2020-01-07 AtScale, Inc. Dynamic aggregate generation and updating for high performance querying of large datasets
US10410135B2 (en) * 2015-05-21 2019-09-10 Software Ag Usa, Inc. Systems and/or methods for dynamic anomaly detection in machine sensor data
JP7316283B2 (en) * 2018-01-16 2023-07-27 nChain Licensing AG Computer-implemented method and system for obtaining digitally signed data

Also Published As

Publication number Publication date
US20170249205A1 (en) 2017-08-31
US20170250809A1 (en) 2017-08-31
US10248505B2 (en) 2019-04-02
US10326740B2 (en) 2019-06-18
US20170249551A1 (en) 2017-08-31
US20170249212A1 (en) 2017-08-31
US20230315557A1 (en) 2023-10-05
US10089178B2 (en) 2018-10-02
US20170249084A1 (en) 2017-08-31
US20170249086A1 (en) 2017-08-31
US11204822B1 (en) 2021-12-21
US20170250965A1 (en) 2017-08-31
US20170249203A1 (en) 2017-08-31
US10673828B2 (en) 2020-06-02
CN108701197A (en) 2018-10-23
US11704184B2 (en) 2023-07-18
DE112017000220T5 (en) 2018-08-09
US10678622B2 (en) 2020-06-09
US10120757B2 (en) 2018-11-06
US20180307561A1 (en) 2018-10-25
US10476849B2 (en) 2019-11-12
WO2017149410A1 (en) 2017-09-08
US10824495B2 (en) 2020-11-03
US20220083415A1 (en) 2022-03-17

Similar Documents

Publication Publication Date Title
US20170249228A1 (en) Persistent device fault indicators
US10656871B2 (en) Expanding slice count in response to low-level failures
US10387080B2 (en) Rebuilding slices in a dispersed storage network
US10489070B2 (en) Proxying read requests when performance or availability failure is anticipated
US10372506B2 (en) Compute architecture in a memory device of distributed computing system
US10496308B2 (en) Using pseudo DSN memory units to handle data in motion within a DSN memory
US10169123B2 (en) Distributed data rebuilding
US10296404B2 (en) Determining slices used in a reconstruction
US10642489B2 (en) Determining when to initiate an intra-distributed storage unit rebuild vs. an inter-distributed storage unit rebuild
US11556435B1 (en) Modifying storage of encoded data slices based on changing storage parameters
US10621021B2 (en) Using dispersed data structures to point to slice or data source replicas
US10310763B2 (en) Forming a distributed storage network memory without namespace aware distributed storage units
US20190026041A1 (en) Shutting down storage units or drives when below threshold in a distributed storage system
US10509577B2 (en) Reliable storage in a dispersed storage network
US10394476B2 (en) Multi-level stage locality selection on a large system
US10523241B2 (en) Object fan out write operation
US10459792B2 (en) Using an eventually consistent dispersed memory to implement storage tiers
US9891995B2 (en) Cooperative decentralized rebuild scanning
US10114698B2 (en) Detecting and responding to data loss events in a dispersed storage network
US20190056995A1 (en) Managing migration of encoded data slices in a dispersed storage network
US20190056996A1 (en) Managing unavailable storage in a dispersed storage network
US20170322734A1 (en) Using locks to prevent multiple rebuilds of the same source

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ATTARD, RYAN J.;DESHMUKH, OMKAR;HENDRICKSON, DUSTIN M.;AND OTHERS;SIGNING DATES FROM 20170104 TO 20170105;REEL/FRAME:040901/0158

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: PURE STORAGE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:050451/0549

Effective date: 20190906

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION