US20210271583A1 - Hyper-converged infrastructure (hci) log system - Google Patents


Info

Publication number
US20210271583A1
US20210271583A1
Authority
US
United States
Prior art keywords
log
database
subset
log information
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US17/326,152
Other versions
US11836067B2 (en)
Inventor
Edward Ding
Drake Yuan Qiu
Lewei Ji
Muzhar S. Khokhar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dell Products LP
Original Assignee
Dell Products LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dell Products LP filed Critical Dell Products LP
Priority to US17/326,152
Publication of US20210271583A1
Assigned to DELL PRODUCTS L.P. reassignment DELL PRODUCTS L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DING, EDWARD, JI, LEWEI, KHOKHAR, MUZHAR S, QIU, DRAKE YUAN
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH SECURITY AGREEMENT Assignors: DELL PRODUCTS, L.P., EMC IP Holding Company LLC
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DELL PRODUCTS L.P., EMC IP Holding Company LLC
Assigned to EMC IP Holding Company LLC, DELL PRODUCTS L.P. reassignment EMC IP Holding Company LLC RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (057931/0392) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Assigned to DELL PRODUCTS L.P., EMC IP Holding Company LLC reassignment DELL PRODUCTS L.P. RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (057758/0286) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Assigned to DELL PRODUCTS L.P., EMC IP Holding Company LLC reassignment DELL PRODUCTS L.P. RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (058014/0560) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Publication of US11836067B2
Application granted
Legal status: Active
Adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/40 Data acquisition and logging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3466 Performance evaluation by tracing or monitoring
    • G06F11/3476 Data logging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/16 File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G06F16/162 Delete operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/17 Details of further file system functions
    • G06F16/172 Caching, prefetching or hoarding of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608 Saving storage space on storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0652 Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • the present disclosure relates generally to information handling systems, and more particularly to log systems for information handling systems that provide a Hyper-Converged Infrastructure (HCl).
  • An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information.
  • information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated.
  • the variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications.
  • information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
  • HCl systems provide a software-defined Information Technology (IT) infrastructure that virtualizes the elements of conventional hardware-defined systems, including virtualized computing (e.g., via a hypervisor), virtualized storage (e.g., via a software-defined Storage Area Network (SAN)), virtualized networking (e.g., via software-defined networking), and/or other HCl components known in the art.
  • Log collection is an important feature in server devices that can be used by administrators to diagnose errors, identify inefficiencies, and/or provide other log uses that would be apparent to one of skill in the art.
  • Conventional log collection systems for conventional server devices may operate to manage structured and unified log files.
  • HCl clusters may be provided that include a plurality of HCl systems that may be provided by server devices that are built from heterogeneous hardware and software components, and when such HCl systems are treated as a single system, log collections may be unstructured and may contain various log outputs from the heterogeneous hardware and software components.
  • traditional log collection from HCl systems in an HCl cluster may result in unrestricted incremental growth of the log files, which may use up the HCl system storage or dedicated log space, and/or may result in log files being deleted before they can be reviewed.
  • an Information Handling System includes a storage system that provides at least a portion of a log database; a plurality of log generating components; a processing system coupled to the storage system and the plurality of log generating components; and a memory system that is coupled to the processing system and that includes instructions that, when executed by the processing system cause the processing system to provide a log management engine that is configured to: receive a request to store a first log bundle, wherein the first log bundle includes log files generated by each of the plurality of log generating components; determine whether at least one second log bundle that is stored in the log database is at least a size threshold; perform, in response to determining that the at least one second log bundle is at least the size threshold, a log database clean operation on the at least one second log bundle; determine whether the log database clean operation on the at least one second log bundle has provided an available storage capacity in the log database that is sufficient to store the first log bundle; and store, in response to determining that the log database clean operation on the at least one second log bundle has provided an available storage capacity in the log database that is sufficient to store the first log bundle, the first log bundle in the log database.
  • FIG. 1 is a schematic view illustrating an embodiment of an information handling system.
  • FIG. 2 is a schematic view illustrating an embodiment of a network including a Hyper-Converged Infrastructure (HCl) log system that operates according to the teachings of the present disclosure.
  • FIG. 3 is a flow chart illustrating an embodiment of a method for storing log bundles in an HCl system.
  • FIG. 4 is a flow chart illustrating an embodiment of a method for cleaning a log database in an HCl system.
  • an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes.
  • an information handling system may be a personal computer (e.g., desktop or laptop), tablet computer, mobile device (e.g., personal digital assistant (PDA) or smart phone), server (e.g., blade server or rack server), a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price.
  • the information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, touchscreen and/or a video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
  • IHS 100 includes a processor 102 , which is connected to a bus 104 .
  • Bus 104 serves as a connection between processor 102 and other components of IHS 100 .
  • An input device 106 is coupled to processor 102 to provide input to processor 102 .
  • Examples of input devices may include keyboards, touchscreens, pointing devices such as mouses, trackballs, and trackpads, and/or a variety of other input devices known in the art.
  • Programs and data are stored on a mass storage device 108, which is coupled to processor 102. Examples of mass storage devices may include hard discs, optical disks, magneto-optical discs, solid-state storage devices, and/or a variety of other mass storage devices known in the art.
  • IHS 100 further includes a display 110 , which is coupled to processor 102 by a video controller 112 .
  • a system memory 114 is coupled to processor 102 to provide the processor with fast storage to facilitate execution of computer programs by processor 102 .
  • Examples of system memory may include random access memory (RAM) devices such as dynamic RAM (DRAM), synchronous DRAM (SDRAM), solid state memory devices, and/or a variety of other memory devices known in the art.
  • a chassis 116 houses some or all of the components of IHS 100 . It should be understood that other buses and intermediate circuits can be deployed between the components described above and processor 102 to facilitate interconnection between the components and the processor 102 .
  • the network 200 includes a Hyper-Converged Infrastructure (HCl) system that is provided by a server device 202 , and that may include an HCl log system according to the teachings of the present disclosure.
  • the server device 202 may be provided by the IHS 100 discussed above with reference to FIG. 1 , and/or may include some or all of the components of the IHS 100 .
  • the server device of the present disclosure is utilized to provide an HCl system that includes a software-defined Information Technology (IT) infrastructure that virtualizes the elements of conventional hardware-defined systems, including virtualized computing (e.g., via a hypervisor), virtualized storage (e.g., via a software-defined Storage Area Network (SAN)), virtualized networking (e.g., via software-defined networking), and/or other HCl components known in the art.
  • server devices may provide respective HCl systems (similar to the HCl system provided by the server device 202 ) that are each part of an HCl cluster.
  • log files may be generated by each of the HCl components, and those log files may then be stored in a log database included in an HCl storage system.
  • the HCl systems may be provided by a variety of computing devices while remaining within the scope of the present disclosure as well.
  • networks may include HCl clusters having multiple HCl systems provided by respective server devices, and networks may also include multiple HCl clusters while remaining within the scope of the present disclosure as well.
  • the server device 202 includes a chassis 204 that houses the components of the server device 202 , only some of which are illustrated in FIG. 2 .
  • the chassis 204 may house a processing system (not illustrated, but which may include the processor 102 discussed above with reference to FIG. 1 ) and a memory system (not illustrated, but which may include the memory 114 discussed above with reference to FIG. 1 ) that is coupled to the processing system and that include instructions that, when executed by the processing system, cause the processing system to provide an HCl engine 206 that is configured to perform the functions of the HCl engines and server devices discussed below.
  • the HCl engine 206 may include a log management engine 206 a that is configured to perform the functions of the log management engines and server devices discussed below.
  • the chassis 204 may also house a networking system 208 that is coupled to the HCl engine 206 (e.g., via a coupling between the networking system 208 and the processing system) and that may include a Network Interface Controller (NIC), a wireless communication system (e.g., a BLUETOOTH® communication system, a Near Field Communication (NFC) system, a WiFi communication system, etc.), and/or other communication components that would be apparent to one of skill in the art in possession of the present disclosure.
  • the chassis 204 may also house a storage system 210 that is coupled to the HCl engine 206 (e.g., via a coupling between the storage system 210 and the processing system) and that may include direct-attached storage device(s) such as Hard Disk Drive(s) (HDD(s)), a Solid State Drive(s) (SSD(s)), and/or other direct-attached storage devices that would be apparent to one of skill in the art in possession of the present disclosure.
  • the networking system 208 and the storage system 210 may be configured to generate log files that may be grouped, by the log management engine 206 a , as log bundles that may be stored in a log database 210 a provided by the storage system 210 , as discussed in further detail below.
  • the chassis 204 may also house other log generating components 212 (e.g., besides the networking system 208 and the storage system 210 ) that are also configured to generate log files that may be grouped into log bundles by the log management engine 206 a and stored in the log database 210 a provided by the storage system 210 as well.
  • a chassis 202 may house a remote access controller (e.g., an integrated DELL® Remote Access Controller (iDRAC) available from DELL® Inc. of Round Rock, Tex., United States, a Baseboard Management Controller (BMC), and/or other components with similar functionality while remaining within the scope of the present disclosure as well), an operating system, Virtualization software stack, virtualization host operating system, a hardware operating service, and/or other log generating hardware or software components that would be apparent to one of skill in the art in possession of the present disclosure.
  • the HCl engine 206 may also be configured to virtualize the elements of conventional hardware-defined systems as discussed above, including virtualized computing (e.g., via a hypervisor using the processing system/memory system), virtualized storage (e.g., via a software-defined Storage Area Network (SAN) using the storage system 210 ), virtualized networking (e.g., via software-defined networking using the networking system 208 ), and/or other HCl components known in the art.
  • server devices may include a variety of components other than those illustrated in order to provide conventional server device functionality, as well as the functionality discussed below, while remaining within the scope of the present disclosure.
  • the server device 202 is coupled via the networking system 208 to a network 214 that may be provided by a Local Area Network (LAN), the Internet, and/or a variety of other networks that would be apparent to one of skill in the art in possession of the present disclosure.
  • a management system 216 may be coupled to the network 214 as well, and may be provided by a server device, a client device, and/or other components that are configured to manage and gather log data generated by components of a server device 202 as described below.
  • the server device, the client device, and/or any of the other components that provide the management system 216 may be provided by the IHS 100 discussed above with reference to FIG. 1, and/or may include some or all of the components of the IHS 100.
  • HCl log systems may include a variety of different components and/or component configurations while remaining within the scope of the present disclosure as well.
  • FIG. 3 an embodiment of a method 300 for storing log bundles in HCl systems is illustrated.
  • the systems and methods of the present disclosure prevent unrestricted incremental growth of log files, and maintain historical log data, for HCl systems that produce unstructured logs from the operation of the heterogeneous hardware and software that are used to provide those HCl systems. This is achieved, at least in part, by providing for the log database capacity checks and log database clean operations described below. For example, the log database capacity checks determine whether at least one log bundle that is stored in the log storage system is above a size threshold when a request from a management system to store a first log bundle is received.
  • in response to determining that the at least one log bundle is above the size threshold, a log database clean operation on the at least one log bundle is performed in an attempt to free up space in the log database.
  • in response to determining that the log database clean operation on the at least one second log bundle has provided an available storage capacity in the log database that is sufficient to store the first log bundle, the first log bundle is stored in the log database.
  • the method 300 begins at block 302 where a request to store a first log bundle is received.
  • the management system 216 may issue a request to store a first log bundle.
  • the management system 216 may provide the request over the network 214 , and the request may be received by the log management engine 206 a .
  • the log management engine 206 a may be configured to group log files generated by each of the HCl log generating components in the server device 202 such as, for example, the networking system 208 , the storage system 210 , and/or any of the log generating components 212 included in the server device 202 .
  • Each log file may include log data of events occurring in a respective log generating component.
  • the log data included in a log file of a hardware operating service may include a hardware operating service log, a system log, an update log, a transaction log, a debug log, a virtual machine kernel log, and/or any other log file that may be apparent to one of skill in the art in possession of the present disclosure.
  • the log management engine 206 a may then group the log files into the first log bundle in response to receiving the request from the management system 216 .
  • the log management engine 206 a may group the log files into the first log bundle in response to other events that are detected by the log management engine 206 a while remaining within the scope of the present disclosure as well.
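  • As one concrete illustration of the grouping performed at block 302, the sketch below packages per-component log files into a single compressed bundle. The tar-and-gzip format, directory layout, and file paths are assumptions made for illustration, not a packaging format specified by the disclosure.

```python
import tarfile
import time
from pathlib import Path

def create_log_bundle(component_log_files, bundle_dir):
    """Group per-component log files into one compressed log bundle (illustrative)."""
    bundle_dir = Path(bundle_dir)
    bundle_dir.mkdir(parents=True, exist_ok=True)
    bundle_path = bundle_dir / f"log-bundle-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(bundle_path, "w:gz") as bundle:
        for component, log_path in component_log_files.items():
            # Keep each component's logs under its own directory inside the bundle
            # so the unstructured outputs remain attributable to their source.
            bundle.add(log_path, arcname=f"{component}/{Path(log_path).name}")
    return bundle_path

# Hypothetical usage (the component names and paths are placeholders):
# create_log_bundle({"networking": "/var/log/nic.log",
#                    "storage": "/var/log/storage.log"},
#                   "/var/log/log-bundles")
```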
  • the method 300 then proceeds to decision block 304 where it is determined whether at least one second log bundle that is stored in a log database is at least a size threshold.
  • the log management engine 206 a may determine whether at least one second bundle that is stored in the log database 210 a of the storage system 210 is at least a size threshold.
  • the log management engine 206 a may make the determination of whether the at least one second bundle is at least the size threshold in response to receiving the request to store the first log bundle, and before grouping the log files of the log generating components 208, 210, and/or 212 into the first log bundle.
  • the size threshold may be a predefined size threshold set by an administrator.
  • the size threshold may be calculated based on a predetermined size coefficient that is multiplied by a full capacity of the log database 210 a .
  • the predetermined size coefficient may be used to control the size of the log database 210 a.
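  • A minimal sketch of that threshold check follows; the 0.8 size coefficient is an arbitrary illustrative value rather than one given by the disclosure.

```python
def size_threshold(log_db_full_capacity, size_coefficient=0.8):
    """Size threshold for stored log bundles: a predetermined coefficient
    multiplied by the full capacity of the log database."""
    return size_coefficient * log_db_full_capacity

def clean_needed(stored_bundle_size, log_db_full_capacity, size_coefficient=0.8):
    """Decision block 304: is the stored (second) log bundle at least the threshold?"""
    return stored_bundle_size >= size_threshold(log_db_full_capacity, size_coefficient)

# For example, 45 GB of stored bundles in a 50 GB log database is over the 40 GB
# threshold, so a clean operation would be attempted before storing a new bundle:
# clean_needed(45e9, 50e9)  # -> True
```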
  • the method 300 then proceeds to block 306 where a log database clean operation is performed on the at least one second log bundle.
  • the log management engine 206 a may perform a log database clean operation on the log database 210 a .
  • the log database clean operation may be performed by the log management engine 206 a to attempt to remove at least a portion of the log files included in the at least one second log bundle that is stored in the log database 210 a such that there is sufficient storage space in the log database 210 a to store the first log bundle.
  • a method 400 for performing a log database clean operation in an HCl system (e.g., at block 306 of the method 300 ) is illustrated.
  • the method 400 begins at block 402 where a deleted log file size of at least one log file, which is included in the at least one second log bundle and which is to be deleted, is calculated.
  • the log management engine 206 a may calculate the deleted log file size of the at least one log file that is included in the at least one second log bundle in the log database 210 a , and that is to be deleted from the log database 210 a .
  • the log management engine 206 a may determine the deleted log file size to be the larger of (1) an available storage capacity of the log database that is sufficient to store the first log bundle, and (2) the difference of a log database size of the at least one second log bundle and the size threshold.
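  • Read literally, block 402 sizes the deletion as the larger of the space needed for the incoming bundle and the amount by which the stored bundle(s) exceed the threshold. A sketch under that reading, with variable names chosen here for illustration:

```python
def deleted_log_file_size(capacity_needed_for_first_bundle,
                          second_bundle_size, size_threshold):
    """Block 402: bytes that must be removed from the stored (second) log bundle(s).

    capacity_needed_for_first_bundle: storage needed to hold the incoming bundle.
    second_bundle_size: total size of the bundle(s) already in the log database.
    size_threshold: the configured log database size threshold.
    """
    overflow = max(0, second_bundle_size - size_threshold)
    return max(capacity_needed_for_first_bundle, overflow)

# e.g. needing 2 GB for the new bundle while stored bundles exceed the threshold by
# 5 GB targets 5 GB of deletions: deleted_log_file_size(2e9, 45e9, 40e9) -> 5e9
```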
  • the method 400 may then proceed to block 404 where a factor weight for each at least one log file, which is included in the at least one second log bundle, is identified.
  • the log management engine 206 a may identify the factor weight of each at least one log file that is included in the at least one second log bundle stored in the log database 210 a .
  • the factor weight may be used to determine which log files included in the at least one second log bundle may be deleted to satisfy the deleted log file size.
  • the factor weight may be based on at least one of a log component priority, a log generated time tag, a log size, a container running status, and/or any other log file conditions that would be apparent to one of skill in the art in possession of the present disclosure.
  • the log component priority may influence the factor weight based on an HCl component's potential for failure and/or HCl component's potential for disrupting the HCl system if that HCl component were to fail (e.g., log files for HCl components that are critical and/or that have a higher potential of failure may be stored longer).
  • the log generated time tag may influence the factor weight based on when a log file is generated (e.g., older log files may be deleted before newer log files).
  • Sizes of log files may also influence the factor weight (e.g., log files that are larger may be deleted before smaller log files).
  • Container running status may contribute to the factor weight (e.g., log files associated with containers that have high container I/O or network payload are given higher priority).
  • each log file condition may disproportionately affect the factor weight.
  • the log component priority may influence the factor weight more than the container running status, the log generated time tag, and the log size. While specific examples of calculating factor weight of a log file are discussed, one of skill in the art in possession of the present disclosure will recognize that log conditions that affect the factor weight, and how each of those log conditions affect the factor weight, may vary while remaining within the scope of the present disclosure.
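  • The disclosure does not give a formula for the factor weight, so the following is only one plausible weighted-sum scoring in which component priority dominates; the weight values, normalizations, and field names are assumptions.

```python
import time

def factor_weight(log_file, now=None, weights=(0.5, 0.2, 0.2, 0.1)):
    """Score a stored log file for deletion; a higher score means the file is a
    better deletion candidate under this illustrative scheme."""
    now = time.time() if now is None else now
    w_priority, w_age, w_size, w_container = weights
    # Logs from critical or failure-prone components (priority near 1.0) are kept longer.
    priority_score = 1.0 - log_file["component_priority"]
    # Older logs are better candidates; age is capped at 30 days for normalization.
    age_score = min((now - log_file["generated_time"]) / (30 * 86400), 1.0)
    # Larger logs are better candidates; size is capped at 1 GB for normalization.
    size_score = min(log_file["size_bytes"] / 1e9, 1.0)
    # Logs tied to containers with heavy I/O or network payload are kept (low score).
    container_score = 0.0 if log_file["container_busy"] else 1.0
    return (w_priority * priority_score + w_age * age_score
            + w_size * size_score + w_container * container_score)
```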
  • the method 400 then proceeds to block 406 where log files included in the at least one second log bundle are added to a delete list, based on the factor weight for each log file, until a delete list size of the delete list is at least the deleted log file size.
  • the log management engine 206 a may add the log files included in the at least one second log bundle to a delete list based on the factor weight for each log file until a delete list size of the delete list is at least the deleted log file size.
  • the log files that have higher factor weights may be added to the delete list over the log files with the lower factor weights.
  • in other embodiments, the factor weight may be calculated such that the log files with the lower factor weights may be added to the delete list over the log files with the higher factor weights.
  • the log management engine 206 a may add, to the delete list, the log file size of each of the log files as each log file is identified for the delete list, and compare the delete list size of the delete list to the deleted log file size to determine whether the delete list size of the delete list is at least the deleted log file size.
  • log management engine 206 a may delete the log files on the delete list when the delete list size of the delete list is at least the deleted log file size. In various embodiments of block 408 , the log management engine 206 a may delete the log files as the log files are added to the delete list until the delete list size of the delete list is at least the deleted log file size.
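  • A greedy sketch of blocks 406 and 408 follows, assuming each log file record already carries a precomputed factor weight (for example, from the scoring sketch above) and that higher weights mark better deletion candidates; delete_fn stands in for whatever removal mechanism the log database actually uses.

```python
def build_delete_list(stored_log_files, deleted_log_file_size):
    """Block 406: add log files to the delete list, best candidates first, until the
    delete list size is at least the deleted log file size."""
    delete_list, delete_list_size = [], 0
    for log_file in sorted(stored_log_files,
                           key=lambda f: f["factor_weight"], reverse=True):
        if delete_list_size >= deleted_log_file_size:
            break
        delete_list.append(log_file)
        delete_list_size += log_file["size_bytes"]
    return delete_list, delete_list_size

def clean_log_database(stored_log_files, deleted_log_file_size, delete_fn):
    """Block 408: delete the listed log files from the log database."""
    delete_list, _ = build_delete_list(stored_log_files, deleted_log_file_size)
    for log_file in delete_list:
        delete_fn(log_file)
    return delete_list
```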
  • the method 300 may then proceed to decision block 308 where it is determined whether the log database clean operation on the at least one second log bundle has resulted in the at least one second log bundle being at least the size threshold.
  • the log management engine 206 a may determine, similarly as discussed above with respect to decision block 304, whether the log database clean operation on the at least one second log bundle has resulted in the at least one second log bundle being at least the size threshold. If the log database clean operation on the at least one second log bundle has resulted in the at least one second log bundle still being at least the size threshold, the method 300 then proceeds to block 310 where an insufficient capacity notification is provided.
  • the log management engine 206 a may provide an insufficient capacity notification through the network 214 to the management system 216 , and that insufficient capacity notification may then be displayed to an administrator of the management system 216 via a graphical user interface provided on a display device that is coupled to the management system 216 .
  • the insufficient capacity notification may indicate that the server device 202 needs additional storage in the storage system 210 , or that the predetermined values used during method 300 may need to be adjusted.
  • the method 300 then proceeds to decision block 312 where it is determined whether the log database clean operation on the at least one second log bundle has provided an available storage capacity in the log database that is sufficient to store the first log bundle.
  • the log management engine 206 a may determine that an available storage capacity in the log database 210 a is sufficient to store the first log bundle. For example, the available storage capacity in the log database 210 a may need to be at least the size of the first log bundle.
  • the size of the first log bundle may be estimated based on historical data such as, for example, the size of any of the at least one second log bundles.
  • the size of the first log bundle may be a predetermined factor multiplied by the estimated first log bundle size to ensure enough storage capacity is available for the first log bundle.
  • the size of the first log bundle may be the estimated first log bundle size multiplied by 1.1, 1.2, 2, 3, 4, and/or any other factor that would be apparent to one of skill in the art in possession of the present disclosure.
  • the size of the first log bundle may be the estimated first log bundle size multiplied by 2, as each log file may be stored and then grouped and compressed together as the first log bundle.
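  • A sketch of that estimate, assuming the historical bundle sizes are simply averaged before the safety factor is applied (the disclosure leaves the exact estimator open):

```python
def estimated_first_bundle_size(previous_bundle_sizes, safety_factor=2.0):
    """Estimate the capacity needed for the incoming bundle from historical bundle
    sizes, padded by a predetermined factor (2x here, reflecting that log files may
    be staged individually and then grouped and compressed into the bundle)."""
    if not previous_bundle_sizes:
        return 0.0
    average = sum(previous_bundle_sizes) / len(previous_bundle_sizes)
    return safety_factor * average

def capacity_is_sufficient(available_capacity, previous_bundle_sizes, safety_factor=2.0):
    """Decision blocks 312/316: can the log database hold the estimated first bundle?"""
    return available_capacity >= estimated_first_bundle_size(
        previous_bundle_sizes, safety_factor)
```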
  • the method 300 then proceeds to block 310 where an insufficient capacity notification is returned as discussed above and the method 300 may end.
  • the method 300 then proceeds to block 314 where the first log bundle is stored in the log database.
  • the log management engine 206 a may group the log files from the various HCl log generating components (e.g., the networking system 208 , the storage system 210 , and the log generating components 212 ) into the first log bundle and store the first log bundle in the log database 210 a and the method 300 may end.
  • the method 300 then proceeds to decision block 316 where it is determined whether the available storage capacity in the log database is sufficient to store the first log bundle.
  • the log management engine 206 a may determine that an available storage capacity in the log database 210 a is sufficient to store the first log bundle in a manner that is similar to that described above for decision block 312 .
  • the available storage capacity in the log database 210 a may need to be at least the size of the first log bundle.
  • the size of the first log bundle may be estimated based on historical data such as, for example, the size of any of the at least one second log bundles.
  • the size of the first log bundle may be a predetermined factor multiplied by the estimated first log bundle size to ensure enough storage capacity is available for the first log bundle.
  • the size of the first log bundle may be the estimated first log bundle size multiplied by 1.1, 1.2, 2, 3, 4, and/or any other factor that would be apparent to one of skill in the art in possession of the present disclosure. If, at decision block 316 , it is determined that the available storage capacity in the log database 210 a is sufficient to store the first log bundle, the method 300 then proceeds to block 314 where the first log bundle is stored in the log database and the method 300 ends.
  • the method 300 then proceeds to block 318 where it is determined whether the storage system supports the first log bundle.
  • the log management engine 206 a may determine whether the storage system 210 can support the first log bundle. For example, the log management engine 206 a may calculate the available size of the storage system 210 , which may include the unused storage space in the storage system 210 . The log management engine 206 a may then also calculate a reserved capacity, which may include the capacity of the storage system 210 that is allocated for use by the server device 202 .
  • the log management engine 206 a may then determine the remaining capacity of the storage system 210 , which may include the available size of the storage system 210 minus the reserved capacity. The log management engine 206 a may then add the remaining capacity of the storage system 210 to the log database size of the at least one second log bundle, and if the sum of the remaining capacity of the storage system 210 and the log database size is sufficient to store the first log bundle, the method 300 then proceeds to block 306 where the log database clean operation is performed on the at least one second log bundle, similarly as discussed above. However, if the sum of the remaining capacity of the storage system 210 and the log database size is insufficient to store the first log bundle, the method 300 then proceeds to block 310 where the insufficient capacity notification is provided and the method 300 may end.
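  • The block 318 check reduces to a few lines; the parameter names below are illustrative, with unused_storage standing for the storage system's available size and reserved_capacity for the portion allocated to the server device.

```python
def storage_system_supports_bundle(unused_storage, reserved_capacity,
                                   log_database_size, estimated_bundle_size):
    """Block 318: the bundle can be supported if the storage system's remaining
    capacity (available size minus reserved capacity) plus the space already used
    by the log database is enough to hold the estimated first log bundle."""
    remaining_capacity = unused_storage - reserved_capacity
    return remaining_capacity + log_database_size >= estimated_bundle_size

# If this returns True the method proceeds to the clean operation (block 306);
# otherwise an insufficient capacity notification is provided (block 310).
```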
  • the systems and methods of the present disclosure provide a log database clean operation that removes enough log files from stored log bundles so that a new log bundle may be stored in the log database. As such, the systems and methods of the present disclosure prevent unrestricted incremental growth of log files and maintain historical log data for HCl systems that produce unstructured logs from heterogeneous hardware and software components.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Debugging And Monitoring (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A Hyper-Converged Infrastructure (HCl) system includes a plurality of HCl log generating components and an HCl storage system that provides at least a portion of a log database. The HCl system receives a request from a management system to store a first log bundle from the plurality of HCl log generating components and determines that at least one second log bundle that is stored in the log database is at least a size threshold. The HCl system performs a log database clean operation on the at least one second log bundle and determines that the log database clean operation on the at least one second log bundle has provided an available storage capacity in the log database that is sufficient to store the first log bundle. The HCl system then stores the first log bundle in the log database.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 16/179,499, attorney docket no. 16356.1961US01, filed on Nov. 2, 2018, the disclosure of which is incorporated by reference herein in its entirety.
  • BACKGROUND
  • The present disclosure relates generally to information handling systems, and more particularly to log systems for information handling systems that provide a Hyper-Converged Infrastructure (HCl).
  • As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
  • Information handling systems such as, for example, server devices, are sometimes utilized to provide Hyper-Converged Infrastructure (HCl) systems. HCl systems provide a software-defined Information Technology (IT) infrastructure that virtualizes the elements of conventional hardware-defined systems, including virtualized computing (e.g., via a hypervisor), virtualized storage (e.g., via a software-defined Storage Area Network (SAN)), virtualized networking (e.g., via software-defined networking), and/or other HCl components known in the art. Log collection is an important feature in server devices that can be used by administrators to diagnose errors, identify inefficiencies, and/or provide other log uses that would be apparent to one of skill in the art. Conventional log collection systems for conventional server devices may operate to manage structured and unified log files. However, HCl clusters may be provided that include a plurality of HCl systems that may be provided by server devices that are built from heterogeneous hardware and software components, and when such HCl systems are treated as a single system, log collections may be unstructured and may contain various log outputs from the heterogeneous hardware and software components. As such, traditional log collection from HCl systems in an HCl cluster may result in unrestricted incremental growth of the log files, which may use up the HCl system storage or dedicated log space, and/or may result in log files being deleted before they can be reviewed.
  • Accordingly, it would be desirable to provide an improved HCl log system.
  • SUMMARY
  • According to one embodiment, an Information Handling System (IHS) includes a storage system that provides at least a portion of a log database; a plurality of log generating components; a processing system coupled to the storage system and the plurality of log generating components; and a memory system that is coupled to the processing system and that includes instructions that, when executed by the processing system cause the processing system to provide a log management engine that is configured to: receive a request to store a first log bundle, wherein the first log bundle includes log files generated by each of the plurality of log generating components; determine whether at least one second log bundle that is stored in the log database is at least a size threshold; perform, in response to determining that the at least one second log bundle is at least the size threshold, a log database clean operation on the at least one second log bundle; determine whether the log database clean operation on the at least one second log bundle has provided an available storage capacity in the log database that is sufficient to store the first log bundle; and store, in response to determining that the log database clean operation on the at least one second log bundle has provided an available storage capacity in the log database that is sufficient to store the first log bundle, the first log bundle in the log database.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic view illustrating an embodiment of an information handling system.
  • FIG. 2 is a schematic view illustrating an embodiment of a network including a Hyper-Converged Infrastructure (HCl) log system that operates according to the teachings of the present disclosure.
  • FIG. 3 is a flow chart illustrating an embodiment of a method for storing log bundles in an HCl system.
  • FIG. 4 is a flow chart illustrating an embodiment of a method for cleaning a log database in an HCl system.
  • DETAILED DESCRIPTION
  • For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an information handling system may be a personal computer (e.g., desktop or laptop), tablet computer, mobile device (e.g., personal digital assistant (PDA) or smart phone), server (e.g., blade server or rack server), a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, touchscreen and/or a video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
  • In one embodiment, IHS 100, FIG. 1, includes a processor 102, which is connected to a bus 104. Bus 104 serves as a connection between processor 102 and other components of IHS 100. An input device 106 is coupled to processor 102 to provide input to processor 102. Examples of input devices may include keyboards, touchscreens, pointing devices such as mouses, trackballs, and trackpads, and/or a variety of other input devices known in the art. Programs and data are stored on a mass storage device 108, which is coupled to processor 102. Examples of mass storage devices may include hard discs, optical disks, magneto-optical discs, solid-state storage devices, and/or a variety other mass storage devices known in the art. IHS 100 further includes a display 110, which is coupled to processor 102 by a video controller 112. A system memory 114 is coupled to processor 102 to provide the processor with fast storage to facilitate execution of computer programs by processor 102. Examples of system memory may include random access memory (RAM) devices such as dynamic RAM (DRAM), synchronous DRAM (SDRAM), solid state memory devices, and/or a variety of other memory devices known in the art. In an embodiment, a chassis 116 houses some or all of the components of IHS 100. It should be understood that other buses and intermediate circuits can be deployed between the components described above and processor 102 to facilitate interconnection between the components and the processor 102.
  • Referring now to FIG. 2, an embodiment of a network 200 is illustrated that operates according to the teachings of the present disclosure. In the illustrated embodiment, the network 200 includes a Hyper-Converged Infrastructure (HCl) system that is provided by a server device 202, and that may include an HCl log system according to the teachings of the present disclosure. In many embodiments, the server device 202 may be provided by the IHS 100 discussed above with reference to FIG. 1, and/or may include some or all of the components of the IHS 100. As discussed above, the server device of the present disclosure is utilized to provide an HCl system that includes a software-defined Information Technology (IT) infrastructure that virtualizes the elements of conventional hardware-defined systems, including virtualized computing (e.g., via a hypervisor), virtualized storage (e.g., via a software-defined Storage Area Network (SAN)), virtualized networking (e.g., via software-defined networking), and/or other HCl components known in the art. Furthermore, while not illustrated, multiple server devices (similar to the server device 202) may provide respective HCl systems (similar to the HCl system provided by the server device 202) that are each part of an HCl cluster. During operation of the HCl system provided by the server device 202, log files may be generated by each of the HCl components, and those log files may then be stored in a log database included in an HCl storage system. However, while illustrated and described as being provided by server devices, the HCl systems may be provided by a variety of computing devices while remaining within the scope of the present disclosure as well. Furthermore, while a single HCl system is illustrated and described in the examples below, as discussed above networks may include HCl clusters having multiple HCl systems provided by respective server devices, and networks may also include multiple HCl clusters while remaining within the scope of the present disclosure as well.
  • In the illustrated embodiment, the server device 202 includes a chassis 204 that houses the components of the server device 202, only some of which are illustrated in FIG. 2. For example, the chassis 204 may house a processing system (not illustrated, but which may include the processor 102 discussed above with reference to FIG. 1) and a memory system (not illustrated, but which may include the memory 114 discussed above with reference to FIG. 1) that is coupled to the processing system and that include instructions that, when executed by the processing system, cause the processing system to provide an HCl engine 206 that is configured to perform the functions of the HCl engines and server devices discussed below. In various embodiments, the HCl engine 206 may include a log management engine 206 a that is configured to perform the functions of the log management engines and server devices discussed below. The chassis 204 may also house a networking system 208 that is coupled to the HCl engine 206 (e.g., via a coupling between the networking system 208 and the processing system) and that may include a Network Interface Controller (NIC), a wireless communication system (e.g., a BLUETOOTH® communication system, a Near Field Communication (NFC) system, a WiFi communication system, etc.), and/or other communication components that would be apparent to one of skill in the art in possession of the present disclosure.
  • The chassis 204 may also house a storage system 210 that is coupled to the HCl engine 206 (e.g., via a coupling between the storage system 210 and the processing system) and that may include direct-attached storage device(s) such as Hard Disk Drive(s) (HDD(s)), a Solid State Drive(s) (SSD(s)), and/or other direct-attached storage devices that would be apparent to one of skill in the art in possession of the present disclosure. In various embodiments, the networking system 208 and the storage system 210 may be configured to generate log files that may be grouped, by the log management engine 206 a, as log bundles that may be stored in a log database 210 a provided by the storage system 210, as discussed in further detail below. The chassis 204 may also house other log generating components 212 (e.g., besides the networking system 208 and the storage system 210) that are also configured to generate log files that may be grouped into log bundles by the log management engine 206 a and stored in the log database 210 a provided by the storage system 210 as well. For example, a chassis 202 may house a remote access controller (e.g., an integrated DELL® Remote Access Controller (iDRAC) available from DELL® Inc. of Round Rock, Tex., United States, a Baseboard Management Controller (BMC), and/or other components with similar functionality while remaining within the scope of the present disclosure as well), an operating system, Virtualization software stack, virtualization host operating system, a hardware operating service, and/or other log generating hardware or software components that would be apparent to one of skill in the art in possession of the present disclosure.
  • In a particular example, the HCl engine 206 may also be configured to virtualize the elements of conventional hardware-defined systems as discussed above, including virtualized computing (e.g., via a hypervisor using the processing system/memory system), virtualized storage (e.g., via a software-defined Storage Area Network (SAN) using the storage system 210), virtualized networking (e.g., via software-defined networking using the networking system 208), and/or other HCl components known in the art. While a specific server device 202 has been illustrated and described, one of skill in the art in possession of the present disclosure will recognize that server devices may include a variety of components other than those illustrated in order to provide conventional server device functionality, as well as the functionality discussed below, while remaining within the scope of the present disclosure.
  • In the illustrated embodiment, the server device 202 is coupled via the networking system 208 to a network 214 that may be provided by a Local Area Network (LAN), the Internet, and/or a variety of other networks that would be apparent to one of skill in the art in possession of the present disclosure. Furthermore, a management system 216 may be coupled to the network 214 as well, and may be provided by a server device, a client device, and/or other components that are configured to manage and gather log data generated by components of a server device 202 as described below. In many embodiments, the server device, the client device, and/or any of the other components that provide the management system 216 may be provided by the IHS 100 discussed above with reference to FIG. 1, and/or may include some or all of the components of the IHS 100. While a specific network 200 implementing the HCl log system of the present disclosure has been illustrated and described, one of skill in the art in possession of the present disclosure will recognize that HCl log systems provided according to the teachings of the present disclosure may include a variety of different components and/or component configurations while remaining within the scope of the present disclosure as well.
  • Referring now to FIG. 3, an embodiment of a method 300 for storing log bundles in HCl systems is illustrated. As discussed below, the systems and methods of the present disclosure prevent unrestricted incremental growth of log files, and maintain historical log data, for HCl systems that produce unstructured logs from the operation of the heterogenous hardware and software that are used to provide those HCl systems. This is achieved, at least in part, by providing for the log database capacity checks and log database clean operations described below. For example, the log database capacity checks determine whether at least one log bundle that is stored in the log storage system is above a size threshold when a request from a management system to store a first log bundle is received. In response to determining that the at least one log bundle is above the size threshold, a log database clean operation on the at least one log bundle is performed in an attempt to free up space in the log database. In response to determining that the log database clean operation on the at least one second log bundle has provided an available storage capacity in the log database that is sufficient to store the first log bundle, the first log bundle is stored in the log database. As such, historical log bundles and/or other log files of importance may be stored in the log database while limiting the amount of storage space that is dedicated to log bundles.
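  • The overall control flow of method 300 (blocks 304 through 318, described in detail below) can be summarized as a short sketch. All sizes are in bytes, and clean_fn, store_fn, and notify_fn are placeholder callables for the clean operation, the bundle store, and the insufficient capacity notification rather than interfaces named by the disclosure; clean_fn is assumed to return the updated stored bundle size and available capacity.

```python
def store_log_bundle_flow(stored_bundle_size, size_threshold, available_capacity,
                          storage_remaining_capacity, log_database_size,
                          needed_capacity, clean_fn, store_fn, notify_fn):
    """Sketch of the method 300 decision flow for storing a first log bundle."""
    if stored_bundle_size < size_threshold:                            # decision block 304: "no"
        if available_capacity >= needed_capacity:                      # decision block 316
            return store_fn()                                          # block 314
        if storage_remaining_capacity + log_database_size < needed_capacity:  # block 318
            return notify_fn()                                         # block 310
    # Stored bundles are at the threshold, or space can be reclaimed from them:
    # run the log database clean operation (block 306 / method 400).
    stored_bundle_size, available_capacity = clean_fn(needed_capacity)
    if stored_bundle_size >= size_threshold:                           # decision block 308
        return notify_fn()                                             # block 310
    if available_capacity < needed_capacity:                           # decision block 312
        return notify_fn()                                             # block 310
    return store_fn()                                                  # block 314
```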
  • The method 300 begins at block 302 where a request to store a first log bundle is received. In an embodiment of block 302, the management system 216 may issue a request to store a first log bundle. For example, the management system 216 may provide the request over the network 214, and the request may be received by the log management engine 206 a. The log management engine 206 a may be configured to group log files generated by each of the HCI log generating components in the server device 202 such as, for example, the networking system 208, the storage system 210, and/or any of the log generating components 212 included in the server device 202. Each log file may include log data of events occurring in a respective log generating component. In a particular example, the log data included in a log file of a hardware operating service may include a hardware operating service log, a system log, an update log, a transaction log, a debug log, a virtual machine kernel log, and/or any other log file that may be apparent to one of skill in the art in possession of the present disclosure. The log management engine 206 a may then group the log files into the first log bundle in response to receiving the request from the management system 216. However, one of skill in the art in possession of the present disclosure would recognize that the log management engine 206 a may group the log files into the first log bundle in response to other events that are detected by the log management engine 206 a while remaining within the scope of the present disclosure as well.
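As an illustration of the grouping performed at block 302, the following Python sketch collects the log files produced by each log generating component into a single compressed bundle. The use of the standard-library tarfile module, the function name, and the bundle file name are assumptions for illustration only, not details taken from the present disclosure.

    import tarfile

    def group_into_bundle(component_log_paths, bundle_path="first_log_bundle.tar.gz"):
        # Collect the log files produced by each log generating component
        # into one compressed bundle (block 302 of the method 300).
        with tarfile.open(bundle_path, "w:gz") as bundle:
            for path in component_log_paths:
                bundle.add(path)
        return bundle_path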
  • The method 300 then proceeds to decision block 304 where it is determined whether at least one second log bundle that is stored in a log database is at least a size threshold. In an embodiment of decision block 304, the log management engine 206 a may determine whether at least one second log bundle that is stored in the log database 210 a of the storage system 210 is at least a size threshold. For example, the log management engine 206 a may make the determination of whether the at least one second log bundle is at least the size threshold in response to receiving the request to store the first log bundle, and before grouping the log files of the log generating components 208, 210, and/or 212 into the first log bundle. In some embodiments, the size threshold may be a predefined size threshold set by an administrator. In a particular example, the size threshold may be calculated based on a predetermined size coefficient that is multiplied by a full capacity of the log database 210 a. As such, the predetermined size coefficient may be used to control the size of the log database 210 a.
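The size threshold check at decision block 304 may be sketched in Python as follows; the function name and the example size coefficient of 0.8 are assumptions, not values specified by the present disclosure.

    def is_at_size_threshold(stored_bundle_bytes, log_database_capacity_bytes,
                             size_coefficient=0.8):
        # The size threshold is the predetermined size coefficient multiplied
        # by the full capacity of the log database (0.8 is an assumed example).
        size_threshold = size_coefficient * log_database_capacity_bytes
        return stored_bundle_bytes >= size_threshold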
  • If, at decision block 304, it is determined that the at least one second log bundle that is stored in the log database is at least the size threshold, the method 300 then proceeds to block 306 where a log database clean operation is performed on the at least one second log bundle. In an embodiment of block 306, the log management engine 206 a may perform a log database clean operation on the log database 210 a. For example, the log database clean operation may be performed by the log management engine 206 a to attempt to remove at least a portion of the log files included in the at least one second log bundle that is stored in the log database 210 a such that there is sufficient storage space in the log database 210 a to store the first log bundle.
  • In a particular example, and with reference to FIG. 4, a method 400 for performing a log database clean operation in an HCI system (e.g., at block 306 of the method 300) is illustrated. The method 400 begins at block 402 where a deleted log file size of at least one log file, which is included in the at least one second log bundle and which is to be deleted, is calculated. In an embodiment of block 402, the log management engine 206 a may calculate the deleted log file size of the at least one log file that is included in the at least one second log bundle in the log database 210 a, and that is to be deleted from the log database 210 a. In a specific example, the log management engine 206 a may determine the deleted log file size to be the larger of (1) an available storage capacity of the log database that is sufficient to store the first log bundle, and (2) the difference of a log database size of the at least one second log bundle and the size threshold.
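The block 402 calculation may be sketched in Python as follows, treating the first input as the storage capacity that must be made available for the first log bundle; the function and parameter names are assumed for illustration.

    def deleted_log_file_size(space_needed_for_first_bundle, second_bundle_bytes,
                              size_threshold_bytes):
        # Block 402: delete the larger of (1) the space needed to store the
        # first log bundle and (2) the amount by which the stored second log
        # bundle(s) exceed the size threshold.
        return max(space_needed_for_first_bundle,
                   second_bundle_bytes - size_threshold_bytes)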
  • The method 400 may then proceed to block 404 where a factor weight for each log file that is included in the at least one second log bundle is identified. In an embodiment of block 404, the log management engine 206 a may identify the factor weight of each log file that is included in the at least one second log bundle stored in the log database 210 a. For example, the factor weight may be used to determine which log files included in the at least one second log bundle may be deleted to satisfy the deleted log file size.
  • The factor weight may be based on at least one of a log component priority, a log generated time tag, a log size, a container running status, and/or any other log file conditions that would be apparent to one of skill in the art in possession of the present disclosure. The log component priority may influence the factor weight based on an HCI component's potential for failure and/or its potential for disrupting the HCI system if that HCI component were to fail (e.g., log files for HCI components that are critical and/or that have a higher potential of failure may be stored longer). The log generated time tag may influence the factor weight based on when a log file is generated (e.g., older log files may be deleted before newer log files). The log size may also influence the factor weight (e.g., larger log files may be deleted before smaller log files). The container running status may contribute to the factor weight as well (e.g., log files associated with containers that have high container I/O or network payload may be given higher priority). In various embodiments, each log file condition may affect the factor weight disproportionately. For example, the log component priority may influence the factor weight more than the container running status, the log generated time tag, and the log size. While specific examples of calculating the factor weight of a log file have been discussed, one of skill in the art in possession of the present disclosure will recognize that the log file conditions that affect the factor weight, and how each of those conditions affects the factor weight, may vary while remaining within the scope of the present disclosure.
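One possible way to combine these conditions into a factor weight is a weighted sum in which each condition has been normalized to the range [0, 1] and the log component priority carries the largest weight, as in the following Python sketch; the specific weights, the normalization, and the scoring scale are illustrative assumptions rather than values from the present disclosure.

    def factor_weight(component_priority, age, size, container_activity,
                      w_priority=0.5, w_age=0.2, w_size=0.2, w_container=0.1):
        # Each condition is assumed normalized to [0, 1]; a higher result marks
        # a better deletion candidate: low-priority, old, large log files from
        # relatively idle containers score highest in this sketch.
        return (w_priority * (1.0 - component_priority)
                + w_age * age
                + w_size * size
                + w_container * (1.0 - container_activity))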
  • The method 400 then proceeds to block 406 where log files included in the at least one second log bundle are added to a delete list, based on the factor weight for each log file, until a delete list size of the delete list is at least the deleted log file size. In an embodiment of block 406, the log management engine 206 a may add the log files included in the at least one second log bundle to a delete list based on the factor weight for each log file until a delete list size of the delete list is at least the deleted log file size. In various embodiments, the log files that have higher factor weights may be added to the delete list over the log files with the lower factor weights. However, one of skill in the art would recognize that the factor weight may be calculated such that the log files with the lower factor weights may be added to the delete list over the log files with the higher factor weights. The log management engine 206 a may add the log file size of each log file to the delete list size as that log file is identified for the delete list, and compare the delete list size of the delete list to the deleted log file size to determine whether the delete list size of the delete list is at least the deleted log file size.
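The delete list construction at block 406 may be sketched in Python as follows; the log_file objects, their factor_weight and size attributes, and the assumption that higher factor weights mark better deletion candidates are illustrative.

    def build_delete_list(second_bundle_log_files, deleted_log_file_size):
        # Block 406: add log files to the delete list in factor-weight order
        # until the accumulated delete list size reaches the deleted log file size.
        delete_list, delete_list_size = [], 0
        for log_file in sorted(second_bundle_log_files,
                               key=lambda f: f.factor_weight, reverse=True):
            if delete_list_size >= deleted_log_file_size:
                break
            delete_list.append(log_file)
            delete_list_size += log_file.size
        return delete_list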
  • The method 400 then proceeds to block 408 where the log files included on the delete list are deleted. In an embodiment of block 408, log management engine 206 a may delete the log files on the delete list when the delete list size of the delete list is at least the deleted log file size. In various embodiments of block 408, the log management engine 206 a may delete the log files as the log files are added to the delete list until the delete list size of the delete list is at least the deleted log file size.
  • Referring back to FIG. 3, following the performance of the log database clean operation at block 306, the method 300 may then proceed to decision block 308 where it is determined whether the log database clean operation on the at least one second log bundle has resulted in the at least one second log bundle being at least the size threshold. In an embodiment of decision block 308, the log management engine 206 a may determine, similarly as discussed above with respect to decision block 304, whether the log database clean operation on the at least one second log bundle has resulted in the at least one second log bundle being at least the size threshold. If the log database clean operation on the at least one second log bundle has resulted in the at least one second log bundle being at least the size threshold, the method 300 then proceeds to block 310 where an insufficient capacity notification is provided. For example, the log management engine 206 a may provide an insufficient capacity notification through the network 214 to the management system 216, and that insufficient capacity notification may then be displayed to an administrator of the management system 216 via a graphical user interface provided on a display device that is coupled to the management system 216. The insufficient capacity notification may indicate that the server device 202 needs additional storage in the storage system 210, or that the predetermined values used during the method 300 may need to be adjusted.
  • If, at decision block 308, it is determined that the log database clean operation on the at least one second log bundle has resulted in the at least one second log bundle being below the size threshold, the method 300 then proceeds to decision block 312 where it is determined whether the log database clean operation on the at least one second log bundle has provided an available storage capacity in the log database that is sufficient to store the first log bundle. In an embodiment of decision block 312, the log management engine 206 a may determine whether an available storage capacity in the log database 210 a is sufficient to store the first log bundle. For example, the available storage capacity in the log database 210 a may need to be at least the size of the first log bundle. In some examples, the size of the first log bundle may be estimated based on historical data such as, for example, the size of any of the at least one second log bundles. In a specific example, the size of the first log bundle may be a predetermined factor multiplied by the estimated first log bundle size to ensure enough storage capacity is available for the first log bundle. For example, the size of the first log bundle may be the estimated first log bundle size multiplied by 1.1, 1.2, 2, 3, 4, and/or any other factor that would be apparent to one of skill in the art in possession of the present disclosure. In a specific example, the size of the first log bundle may be the estimated first log bundle size multiplied by 2, as each log file may be stored and then grouped and compressed together as the first log bundle. As such, there may be two instances of the log file data until the log files are deleted after the first log bundle is created. If, at decision block 312, it is determined that the log database clean operation on the at least one second log bundle has provided an available storage capacity in the log database that is insufficient to store the first log bundle, the method 300 then proceeds to block 310 where the insufficient capacity notification is provided as discussed above, and the method 300 may end.
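The capacity check at decision block 312 may be sketched in Python as follows; estimating the first log bundle size as an average of historical bundle sizes, and the default safety factor of 2.0, are illustrative assumptions based on the examples above.

    def capacity_is_sufficient(available_capacity, historical_bundle_sizes,
                               safety_factor=2.0):
        # Decision block 312: estimate the first log bundle size from historical
        # bundle sizes and apply a predetermined factor (2 in the example above,
        # since the log files and the compressed bundle briefly coexist).
        estimated_size = sum(historical_bundle_sizes) / len(historical_bundle_sizes)
        return available_capacity >= safety_factor * estimated_size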
  • If, at decision block 312, it is determined that the log database clean operation on the at least one second log bundle has provided an available storage capacity in the log database that is sufficient to store the first log bundle, the method 300 then proceeds to block 314 where the first log bundle is stored in the log database. In an embodiment of block 314, the log management engine 206 a may group the log files from the various HCI log generating components (e.g., the networking system 208, the storage system 210, and the log generating components 212) into the first log bundle and store the first log bundle in the log database 210 a, and the method 300 may then end.
  • Returning to decision block 304, if the at least one second log bundle that is stored in the log database is below the size threshold, the method 300 then proceeds to decision block 316 where it is determined whether the available storage capacity in the log database is sufficient to store the first log bundle. In an embodiment of decision block 316, the log management engine 206 a may determine whether an available storage capacity in the log database 210 a is sufficient to store the first log bundle in a manner that is similar to that described above for decision block 312. For example, the available storage capacity in the log database 210 a may need to be at least the size of the first log bundle. In some examples, the size of the first log bundle may be estimated based on historical data such as, for example, the size of any of the at least one second log bundles. In a specific example, the size of the first log bundle may be a predetermined factor multiplied by the estimated first log bundle size to ensure enough storage capacity is available for the first log bundle. For example, the size of the first log bundle may be the estimated first log bundle size multiplied by 1.1, 1.2, 2, 3, 4, and/or any other factor that would be apparent to one of skill in the art in possession of the present disclosure. If, at decision block 316, it is determined that the available storage capacity in the log database 210 a is sufficient to store the first log bundle, the method 300 then proceeds to block 314 where the first log bundle is stored in the log database, and the method 300 ends.
  • However, if at decision block 316 it is determined that the available storage capacity in the log database is insufficient to store the first log bundle, the method 300 then proceeds to block 318 where it is determined whether the storage system supports the first log bundle. In an embodiment of block 318, the log management engine 206 a may determine whether the storage system 210 can support the first log bundle. For example, the log management engine 206 a may calculate the available size of the storage system 210, which may include the unused storage space in the storage system 210. The log management engine 206 a may then also calculate a reserved capacity, which may include the capacity of the storage system 210 that is allocated for use by the server device 202. The log management engine 206 a may then determine the remaining capacity of the storage system 210, which may include the available size of the storage system 210 minus the reserved capacity. The log management engine 206 a may then add the remaining capacity of the storage system 210 to the log database size of the at least one second log bundle, and if the sum of the remaining capacity of the storage system 210 and the log database size is sufficient to store the first log bundle, the method 300 then proceeds to block 306 where the log database clean operation is performed on the at least one second log bundle, similarly as discussed above. However, if the sum of the remaining capacity of the storage system 210 and the log database size is insufficient to store the first log bundle, the method 300 then proceeds to block 310 where the insufficient capacity notification is provided, and the method 300 may end.
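The storage system support check at block 318 may be sketched in Python as follows; the function and parameter names are assumed for illustration.

    def storage_system_supports_bundle(storage_available_size, reserved_capacity,
                                       log_database_size, first_bundle_size):
        # Block 318: the remaining capacity (available size minus the capacity
        # reserved for the server device) is added to the log database size; if
        # the sum can hold the first log bundle, a clean operation can free the
        # needed space.
        remaining_capacity = storage_available_size - reserved_capacity
        return remaining_capacity + log_database_size >= first_bundle_size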
  • Thus, systems and methods have been described that provide for the storing of log bundles generated by HCI systems provided by heterogeneous server devices and/or other hardware and software. By performing a log database capacity check and a log database clean operation, unstructured log files from the heterogeneous hardware and software components providing the HCI system may be bundled and stored in a log database without some of the issues associated with performing conventional log retrieval operations on HCI systems. The log database capacity check determines whether there is available capacity before storing a log bundle and, if not, determines whether the storage system can support the storage of the log bundle. If there is not enough available capacity in the log database but the storage system can support the storage of the log bundle, the systems and methods of the present disclosure perform a log database clean operation to remove enough of the log files included in stored log bundles so that the log bundle may be stored in the log database. As such, the systems and methods of the present disclosure prevent unrestricted incremental growth of log files and maintain historical log data for HCI systems that produce unstructured logs from heterogeneous hardware and software components.
  • Although illustrative embodiments have been shown and described, a wide range of modification, change and substitution is contemplated in the foregoing disclosure and in some instances, some features of the embodiments may be employed without a corresponding use of other features. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the scope of the embodiments disclosed herein.

Claims (20)

What is claimed is:
1. A log system, comprising:
a management system; and
a computing system that includes:
a plurality of log generating components; and
a storage system that provides at least a portion of a log database, wherein the computing system is configured to:
receive a request from the management system to store first log information generated by each of the plurality of the log generating components;
determine that second log information that is stored in the log database is at least a size threshold;
perform, in response to determining that the second log information that is stored in the log database is at least the size threshold, a log database clean operation on the second log information that includes deleting a subset of a plurality of second log files that are included in the second log information based on respective factor weights of the subset of the plurality of the second log files until an available storage capacity in the log database is sufficient to store the first log information; and
store, in response to the log database clean operation on the second log information providing the available storage capacity in the log database that is sufficient to store the first log information, the first log information in the log database.
2. The system of claim 1, wherein the respective factor weights of the subset of the plurality of second log files are based on at least one of a log component priority, a log generated time tag, a log size, and a container running status.
3. The system of claim 1, wherein the performing the log database clean operation on the second log information includes:
calculating a deleted log file size of the subset of the plurality of second log files.
4. The system of claim 3, wherein the deleted log file size of the subset of the plurality of second log files is the larger of (1) the available storage capacity in the log database that is sufficient to store the first log information, and (2) a difference of a log file size of the second log information and the size threshold.
5. The system of claim 1, wherein the performing the log database clean operation on the second log information includes:
identifying the respective factor weight for each of the subset of the plurality of second log files.
6. The system of claim 1, wherein the performing the log database clean operation on the second log information includes:
adding the subset of the plurality of second log files to a delete list based on the respective factor weight for the subset of the plurality of second log files until a delete list size of the delete list is at least a deleted log file size that is the larger of (1) the available storage capacity in the log database that is sufficient to store the first log information, and (2) a difference of a log file size of the second log information and the size threshold.
7. The system of claim 1, wherein the computing system is a Hyper-Converged Infrastructure (HCI) system.
8. An Information Handling System (IHS), comprising:
a processing system; and
a memory system that is coupled to the processing system and that includes instructions that, when executed by the processing system cause the processing system to provide a log management engine that is configured to:
receive a request to store first log information generated by each of a plurality of the log generating components;
determine that second log information that is stored in a log database is at least a size threshold;
perform, in response to determining that the second log information that is stored in the log database is at least the size threshold, a log database clean operation on the second log information that includes deleting a subset of a plurality of second log files that are included in the second log information based on respective factor weights of the subset of the plurality of the second log files until an available storage capacity in the log database is sufficient to store the first log information; and
store, in response to the log database clean operation on the second log information providing the available storage capacity in the log database that is sufficient to store the first log information, the first log information in the log database.
9. The IHS of claim 8, wherein the respective factor weights of the subset of the plurality of second log files are based on at least one of a log component priority, a log generated time tag, a log size, and a container running status.
10. The IHS of claim 8, wherein the performing the log database clean operation on the second log information includes:
calculating a deleted log file size of the subset of the plurality of second log files.
11. The IHS of claim 10, wherein the deleted log file size of the subset of the plurality of second log files is the larger of (1) the available storage capacity in the log database that is sufficient to store the first log information, and (2) a difference of a log file size of the second log information and the size threshold.
12. The IHS of claim 8, wherein the performing the log database clean operation on the second log information includes:
identifying the respective factor weight for each of the subset of the plurality of second log files.
13. The IHS of claim 8, wherein the performing the log database clean operation on the second log information includes:
adding the subset of the plurality of second log files to a delete list based on the respective factor weight for the subset of the plurality of second log files until a delete list size of the delete list is at least a deleted log file size that is the larger of (1) the available storage capacity in the log database that is sufficient to store the first log information, and (2) a difference of a log file size of the second log information and the size threshold.
14. A method for storing log files in a Hyper-Converged Infrastructure (HCI) system, comprising:
receiving, by a computing system, a request to store first log information generated by each of a plurality of the log generating components;
determining, by the computing system, that second log information that is stored in a log database is at least a size threshold;
performing, by the computing system in response to determining that the second log information that is stored in the log database is at least the size threshold, a log database clean operation on the second log information that includes deleting a subset of a plurality of second log files that are included in the second log information based on respective factor weights of the subset of the plurality of the second log files until an available storage capacity in the log database is sufficient to store the first log information; and
storing, by the computing system in response to the log database clean operation on the second log information providing the available storage capacity in the log database that is sufficient to store the first log information, the first log information in the log database.
15. The method of claim 14, wherein the respective factor weights of the subset of the plurality of second log files are based on at least one of a log component priority, a log generated time tag, a log size, and a container running status.
16. The method of claim 14, wherein the performing the log database clean operation on the second log information includes:
calculating, by the computing system, a deleted log file size of the subset of the plurality of second log files.
17. The method of claim 16, wherein the deleted log file size of the subset of the plurality of second log files is the larger of (1) the available storage capacity in the log database that is sufficient to store the first log information, and (2) a difference of a log file size of the second log information and the size threshold.
18. The method of claim 14, wherein the performing the log database clean operation on the second log information includes:
identifying, by the computing system, the respective factor weight for each of the subset of the plurality of second log files.
19. The method of claim 15, wherein the performing the log database clean operation on the second log information includes:
adding, by the computing system, the subset of the plurality of second log files to a delete list based on the respective factor weight for the subset of the plurality of second log files until a delete list size of the delete list is at least a deleted log file size that is the larger of (1) the available storage capacity in the log database that is sufficient to store the first log information, and (2) a difference of a log file size of the second log information and the size threshold.
20. The method of claim 14, wherein the computing system is a Hyper-Converged Infrastructure (HCI) system.
US17/326,152 2018-11-02 2021-05-20 Hyper-converged infrastructure (HCI) log system Active 2039-05-31 US11836067B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/326,152 US11836067B2 (en) 2018-11-02 2021-05-20 Hyper-converged infrastructure (HCI) log system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/179,499 US11023354B2 (en) 2018-11-02 2018-11-02 Hyper-converged infrastructure (HCI) log system
US17/326,152 US11836067B2 (en) 2018-11-02 2021-05-20 Hyper-converged infrastructure (HCI) log system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/179,499 Continuation US11023354B2 (en) 2018-11-02 2018-11-02 Hyper-converged infrastructure (HCI) log system

Publications (2)

Publication Number Publication Date
US20210271583A1 true US20210271583A1 (en) 2021-09-02
US11836067B2 US11836067B2 (en) 2023-12-05

Family

ID=70458606

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/179,499 Active 2039-10-10 US11023354B2 (en) 2018-11-02 2018-11-02 Hyper-converged infrastructure (HCI) log system
US17/326,152 Active 2039-05-31 US11836067B2 (en) 2018-11-02 2021-05-20 Hyper-converged infrastructure (HCI) log system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/179,499 Active 2039-10-10 US11023354B2 (en) 2018-11-02 2018-11-02 Hyper-converged infrastructure (HCI) log system

Country Status (1)

Country Link
US (2) US11023354B2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11625370B2 (en) * 2020-04-07 2023-04-11 Vmware, Inc. Techniques for reducing data log recovery time and metadata write amplification
US11467746B2 (en) 2020-04-07 2022-10-11 Vmware, Inc. Issuing efficient writes to erasure coded objects in a distributed storage system via adaptive logging
US11474719B1 (en) 2021-05-13 2022-10-18 Vmware, Inc. Combining the metadata and data address spaces of a distributed storage object via a composite object configuration tree

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160077745A1 (en) * 2014-09-12 2016-03-17 Netapp, Inc. Rate matching technique for balancing segment cleaning and i/o workload
US9747057B1 (en) * 2015-09-13 2017-08-29 Amazon Technologies, Inc. Storage auto delete
US20170344311A1 (en) * 2016-05-24 2017-11-30 Samsung Electronics Co., Ltd. Method of operating a memory device
US20200034459A1 (en) * 2018-07-30 2020-01-30 Hewlett Packard Enterprise Development Lp Centralized configuration database cache
US20200311132A1 (en) * 2019-03-27 2020-10-01 Western Digital Technologies, Inc. Key value store using change values for data properties
US20210271645A1 (en) * 2020-02-27 2021-09-02 EMC IP Holding Company LLC Log-Based Storage Space Management for Geographically Diverse Storage

Also Published As

Publication number Publication date
US11023354B2 (en) 2021-06-01
US20200142803A1 (en) 2020-05-07
US11836067B2 (en) 2023-12-05

Similar Documents

Publication Publication Date Title
US11614893B2 (en) Optimizing storage device access based on latency
CN113302584B (en) Storage management for cloud-based storage systems
CN111868676B (en) Servicing I/O operations in a cloud-based storage system
US10885033B2 (en) Query plan management associated with a shared pool of configurable computing resources
US11934260B2 (en) Problem signature-based corrective measure deployment
US20210182190A1 (en) Intelligent die aware storage device scheduler
US11836067B2 (en) Hyper-converged infrastructure (HCI) log system
US10642704B2 (en) Storage controller failover system
US9229819B2 (en) Enhanced reliability in deduplication technology over storage clouds
US9781020B2 (en) Deploying applications in a networked computing environment
US8762583B1 (en) Application aware intelligent storage system
US9501313B2 (en) Resource management and allocation using history information stored in application's commit signature log
US11275509B1 (en) Intelligently sizing high latency I/O requests in a storage environment
US10776218B2 (en) Availability-driven data recovery in cloud storage systems
US20210132812A1 (en) Parallel upgrade of nodes in a storage system
US20150332280A1 (en) Compliant auditing architecture
US20210011779A1 (en) Performance-based workload/storage allocation system
US11188235B2 (en) Reducing data replications among storage locations
US9619153B2 (en) Increase memory scalability using table-specific memory cleanup
US11157322B2 (en) Hyper-converged infrastructure (HCI) ephemeral workload/data provisioning system
US20200371849A1 (en) Systems and methods for efficient management of advanced functions in software defined storage systems
US10452445B2 (en) Dynamically configurable storage clusters
US7613805B1 (en) Data store wrapper enhancements

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DING, EDWARD;QIU, DRAKE YUAN;JI, LEWEI;AND OTHERS;REEL/FRAME:057469/0641

Effective date: 20181030

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNORS:DELL PRODUCTS, L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:057682/0830

Effective date: 20211001

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS

Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:058014/0560

Effective date: 20210908

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS

Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:057931/0392

Effective date: 20210908

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS

Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:057758/0286

Effective date: 20210908

AS Assignment

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (058014/0560);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:062022/0473

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (058014/0560);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:062022/0473

Effective date: 20220329

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (057931/0392);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:062022/0382

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (057931/0392);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:062022/0382

Effective date: 20220329

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (057758/0286);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061654/0064

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (057758/0286);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061654/0064

Effective date: 20220329

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP, ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE