CN113342276B - Log saving method, system, device and medium - Google Patents

Log saving method, system, device and medium

Info

Publication number
CN113342276B
CN113342276B (application CN202110657626.XA)
Authority
CN
China
Prior art keywords: cache, logs, subspace, log, address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110657626.XA
Other languages
Chinese (zh)
Other versions
CN113342276A (en)
Inventor
Zhang Bin (张彬)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202110657626.XA
Publication of CN113342276A
Application granted
Publication of CN113342276B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a log saving method, comprising the following steps: in response to multiple cores generating logs, determining a cache space for caching the logs according to a current cache address; caching the logs generated by each core into the subspace corresponding to that core in the cache space; updating, for each subspace, a first flag bit used to identify the number of currently cached logs, and comparing the first flag bit with a threshold; and in response to detecting that a subspace whose first flag bit reaches the threshold exists, switching the current cache address to another cache address, taking the other cache address as the new current cache address, and returning to the step of determining the cache space for caching the logs according to the current cache address. The invention also discloses a system, a computer device and a readable storage medium. The scheme provided by the invention rotates between different cache addresses while the solid state disk is running, which effectively prevents key logs from being overwritten and lost, as could happen if only one cache address were used.

Description

Log saving method, system, device and medium
Technical Field
The invention relates to the field of storage, and in particular to a log saving method, system, device and storage medium.
Background
With the continuous improvement of informatization in modern society, more and more data is generated and stored, and society has moved from the internet era into the big data era. Solid state disks are widely used as high-performance, large-capacity storage media. While a solid state disk is in use, the firmware generates logs at runtime that contain important information about the firmware's operation; if an error occurs during use of the solid state disk, developers can locate the cause of the error based on these logs. In view of this, it is necessary to study techniques for the long-term saving and export of solid state disk logs.
In the existing log saving method, after the solid state disk is powered on, a portion of space is generally allocated from the cache for saving the logs generated at runtime. If the space becomes full, the original logs are overwritten starting from the beginning of the space and new logs are saved, and each time the solid state disk is powered off, the content of the space is written to a different pca of the NAND Flash. When logs are exported, the logs saved before each power-off can be exported from the different pca. With this method, logs are only saved at each power-off of the solid state disk; if the solid state disk stays powered on for a long time, earlier logs are overwritten, and the overwritten logs may contain key information needed to locate a problem.
Disclosure of Invention
In view of this, in order to overcome at least one aspect of the above problem, an embodiment of the present invention provides a log saving method, comprising the following steps:
in response to multiple cores generating logs, determining a cache space for caching the logs according to a current cache address;
caching the logs generated by each core into the subspace corresponding to that core in the cache space;
updating, for each subspace, a first flag bit used to identify the number of currently cached logs, and comparing the first flag bit with a threshold;
and in response to detecting that a subspace whose first flag bit reaches the threshold exists, switching the current cache address to another cache address, taking the other cache address as the new current cache address, and returning to the step of determining the cache space for caching the logs according to the current cache address.
In some embodiments, the method further comprises:
setting a second flag bit for identifying the cache space currently used for caching the logs;
in response to detecting that a subspace whose first flag bit reaches the threshold exists, acquiring the logs to be flushed from the cache space corresponding to the second flag bit;
and flushing the acquired logs to be flushed to a memory.
In some embodiments, flushing the acquired logs to be flushed to the memory further comprises:
setting a plurality of blocks for storing the logs to be flushed;
and using the blocks cyclically, in sequence, to store the logs to be flushed.
In some embodiments, the method further comprises:
in response to detecting that a subspace whose first flag bit reaches the threshold exists, clearing the first flag bit corresponding to each subspace.
In some embodiments, the method further comprises:
in response to receiving a power-off instruction, saving the first flag bit corresponding to each subspace to a first preset position and clearing the first flag bit corresponding to each subspace;
flushing all logs in the cache space currently used for caching the logs to the memory, and acquiring the corresponding flush address;
saving the flush address to a second preset position;
and flushing the first flag bits stored at the first preset position and the flush address stored at the second preset position to a storage space.
In some embodiments, the method further comprises:
in response to the multiple cores still generating logs after the power-off instruction is received, caching the logs generated by the multiple cores into the corresponding subspaces in the cache space determined from the other cache address;
updating the first preset position according to the first flag bits corresponding to the subspaces in the cache space determined from the other cache address;
flushing all logs in the cache space determined from the other cache address to the memory, and acquiring the corresponding flush address again;
updating the flush address at the second preset position based on the newly acquired flush address;
and flushing the updated first flag bits at the first preset position and the updated flush address at the second preset position to the storage space.
In some embodiments, the method further comprises:
in response to receiving a power-on instruction, acquiring from the memory the values of the first preset position and the second preset position that were flushed before power-down;
assigning the value of the first preset position to the first flag bits, and acquiring from the memory the logs flushed before power-down according to the flush address stored at the second preset position;
caching the acquired logs flushed before power-down into the subspace corresponding to each core in the cache space determined according to the current cache address;
and determining, based on the first flag bit corresponding to each subspace, the starting position at which logs continue to be cached in that subspace.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a log saving system, comprising:
a determining module configured to, in response to multiple cores generating logs, determine a cache space for caching the logs according to a current cache address;
a caching module configured to cache the logs generated by each core into the subspace corresponding to that core in the cache space;
an updating module configured to update, for each subspace, a first flag bit used to identify the number of currently cached logs, and to compare the first flag bit with a threshold;
and a switching cycle module configured to, in response to detecting that a subspace whose first flag bit reaches the threshold exists, switch the current cache address to another cache address, take the other cache address as the new current cache address, and return to the step of determining the cache space for caching the logs according to the current cache address.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer device, comprising:
at least one processor; and
a memory storing a computer program executable on the processor, wherein the processor, when executing the program, performs the steps of any of the log saving methods described above.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of any of the log saving methods described above.
One beneficial technical effect of the invention is as follows: the scheme provided by the invention rotates between different cache addresses while the solid state disk is running, which effectively prevents key logs from being overwritten and lost, as could happen if only one cache address were used.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other embodiments from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a log saving method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a log saving system according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share the same name but are not the same. "First" and "second" are used merely for convenience of description and should not be construed as limiting the embodiments of the present invention, and this is not repeated in the following embodiments.
In the embodiments of the invention, NAND Flash is a type of flash memory; a pca is a physical page cache address, the physical address of the smallest storage unit in the NAND Flash; a page is the smallest unit for storing data in the NAND Flash, and one pca corresponds to one page; a block is a storage space in the NAND Flash consisting of a plurality of pages.
According to an aspect of the present invention, an embodiment of the present invention provides a log saving method, as shown in Fig. 1, which may include the following steps:
S1, in response to multiple cores generating logs, determining a cache space for caching the logs according to a current cache address;
S2, caching the logs generated by each core into the subspace corresponding to that core in the cache space;
S3, updating, for each subspace, a first flag bit used to identify the number of currently cached logs, and comparing the first flag bit with a threshold;
S4, in response to detecting that a subspace whose first flag bit reaches the threshold exists, switching the current cache address to another cache address, taking the other cache address as the new current cache address, and returning to the step of determining the cache space for caching the logs according to the current cache address.
The scheme provided by the invention rotates between different cache addresses while the solid state disk is running, which effectively prevents key logs from being overwritten and lost, as could happen if only one cache address were used.
In some embodiments, the main controller of the solid state disk generally includes a plurality of cores, each core runs a segment of the firmware, and the cores cooperate to implement the functions of the solid state disk. Each core therefore generates logs at runtime, and the frequency and volume of the logs vary from core to core. The scheme provided by the invention stores only the key logs generated while the solid state disk is running, so the code run by each core needs to be combed through to screen and mark the key logs, and the key log content is ultimately flushed to the NAND Flash.
When the solid state disk is powered on, two cache spaces of the same size can be allocated from the DDR cache area for caching logs, denoted buffer1 and buffer2. Both buffer1 and buffer2 have corresponding cache addresses, and when logs need to be stored in buffer1 or buffer2, the corresponding cache space can be located by passing the corresponding cache address to the relevant function.
The size of buffer1 and buffer2 is denoted N, where N corresponds to the size of several pages in the NAND Flash. Each space is divided equally by the number of cores in the solid state disk's main controller: with M cores, each core owns a subspace of size N/M and writes its key logs into that subspace without affecting the other cores. At power-on, a space log_position holding the first flag bits that identify the number of currently cached logs is also allocated from the DDR cache area according to the value of M; log_position indicates, for each of the M cores, the position or number of its current logs in buffer1 or buffer2.
For example, if the solid state disk has 16 cores, two 64K spaces can be allocated from the DDR cache area at power-on and denoted buffer1 and buffer2; each buffer is then partitioned into 16 subspaces of 4K, and each core has a 4K space into which its key logs can be written. A 32-bit log_position is then allocated from the DDR cache area, where every two bits of log_position indicate the position or number of the key logs written by one core into its 4K space (that is, every two bits form one first flag bit). In this embodiment the value ranges from 0 to 127, and when the value reaches 127 (that is, the threshold is reached), it indicates that some core has filled its subspace and the cache address needs to be switched.
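For illustration only, a minimal C sketch of the bookkeeping described above follows. All identifiers, sizes and types here (CORE_NUM, BUFFER_SIZE, the 32-byte entry size, and the use of a per-core counter array instead of a packed bit field) are assumptions made for the sketch, not the firmware interfaces of this embodiment.
    #include <stdint.h>
    #include <string.h>

    #define CORE_NUM    16                        /* assumed number of cores M          */
    #define BUFFER_SIZE (64 * 1024)               /* assumed size N of each cache space */
    #define SUB_SIZE    (BUFFER_SIZE / CORE_NUM)  /* per-core subspace, 4K here         */
    #define ENTRY_SIZE  32                        /* assumed bytes per key-log entry    */
    #define ENTRY_NUM   (SUB_SIZE / ENTRY_SIZE)   /* entries per subspace, 128 here     */

    static uint8_t  buffer1[BUFFER_SIZE];         /* stands in for DDR buffer1          */
    static uint8_t  buffer2[BUFFER_SIZE];         /* stands in for DDR buffer2          */
    static uint8_t *cur_buffer = buffer1;         /* current cache address              */
    static uint32_t log_position[CORE_NUM];       /* first flag bit of each subspace    */
    static uint32_t full_flag;                    /* low bits: "subspace full" marks    */

    /* Called by a core to append one key-log entry to its own subspace. */
    static void core_log_write(int core, const void *entry)
    {
        if (log_position[core] >= ENTRY_NUM)
            return;                               /* subspace full: wait for the switch */
        uint8_t *slot = cur_buffer + (size_t)core * SUB_SIZE
                        + (size_t)log_position[core] * ENTRY_SIZE;
        memcpy(slot, entry, ENTRY_SIZE);
        if (++log_position[core] == ENTRY_NUM)
            full_flag |= 1u << core;              /* mark: this core filled its subspace */
    }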
Specifically, after the solid state disk is powered on and initialization is completed, the 16 cores start generating key logs. Each core first writes its key logs into its corresponding 4K subspace in buffer1 and, as logs are written into each subspace, updates the corresponding flag bit in log_position, that is, the value of the two bits corresponding to that core, which increments from 0. When the value reaches 127, it indicates that some core has filled its subspace; regardless of whether the other cores have filled their own subspaces, log_position is cleared, writing to buffer1 stops, and writing to buffer2 begins. Likewise, when some core fills its 4K space in buffer2, log_position is cleared, writing to buffer2 stops, and writing to buffer1 begins again.
In some embodiments, the method further comprises:
setting a second flag bit for identifying the cache space currently used for caching the logs;
in response to detecting that a subspace whose first flag bit reaches the threshold exists, acquiring the logs to be flushed from the cache space corresponding to the second flag bit;
and flushing the acquired logs to be flushed to a memory.
In some embodiments, the method further comprises:
in response to detecting that a subspace whose first flag bit reaches the threshold exists, clearing the first flag bit corresponding to each subspace.
Specifically, a full_flag space may be allocated from the DDR cache area according to the value of M. full_flag indicates, for each of the M cores, whether that core has filled its subspace in buffer1 or buffer2, and also identifies which cache space is currently used for caching the logs. For example, with 16 cores a 32-bit full_flag can be allocated: each of the lower 16 bits indicates whether the corresponding core's 4K space is full (1 if full, otherwise 0), and the upper 16 bits identify the cache space currently used for caching the logs. When one of the lower 16 bits is set to 1, the function each core calls when caching a log can switch the cache address, thereby changing the cache space into which each core caches its logs.
When each core of the main controller first writes key logs into buffer1, the value of the bits corresponding to that core in log_position keeps increasing as key logs are written. When the value reaches its maximum, the corresponding bit in full_flag is set to 1, indicating that the core has filled its subspace with key logs. Considering that the frequency and number of logs generated by each core differ, full_flag can be polled by a designated core (assumed to be core T). When full_flag is non-zero, log_position is cleared and each core starts writing key logs into buffer2; at the same time full_flag is cleared, the content of buffer1 is flushed to the corresponding pca of the memory, and buffer1 is then cleared. Likewise, when full_flag becomes non-zero again, writing switches back to buffer1, the content of buffer2 is flushed to the corresponding pca of the memory, and buffer2 is cleared, so that buffer1 and buffer2 are used cyclically.
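A corresponding sketch of the polling performed by the designated core T follows, again for illustration only and building on the definitions in the sketch above; nand_flush() is a hypothetical helper assumed to write one buffer to a pca of the NAND Flash and return that pca.
    /* Hypothetical helper: writes len bytes to the NAND Flash and returns the pca used. */
    extern uint32_t nand_flush(const uint8_t *buf, size_t len);

    static uint32_t current_pca;                  /* pca of the most recently flushed buffer */

    /* Polled by core T: when any subspace is full, switch buffers and flush the full one. */
    static void poll_and_switch(void)
    {
        if (full_flag == 0)
            return;                               /* no core has filled its subspace yet */

        uint8_t *full_buf = cur_buffer;           /* buffer to be flushed                */
        cur_buffer = (cur_buffer == buffer1) ? buffer2 : buffer1;  /* ping-pong switch   */
        memset(log_position, 0, sizeof(log_position));             /* clear first flag bits */
        full_flag = 0;                                             /* clear full marks      */

        current_pca = nand_flush(full_buf, BUFFER_SIZE);           /* flush to the memory   */
        memset(full_buf, 0, BUFFER_SIZE);                          /* clear flushed buffer  */
    }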
In some embodiments, flushing the acquired logs to be flushed to the memory further comprises:
setting a plurality of blocks for storing the logs to be flushed;
and using the blocks cyclically, in sequence, to store the logs to be flushed.
Specifically, the blocks that may be used can be preset and a table of the starting pca of each available block is maintained. When all available blocks have been used up, the first available block is erased and new key logs are stored into it, so that all available blocks are reused in a cycle.
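A sketch of cycling through the reserved blocks is given below, for illustration only and reusing the earlier definitions; the block count, pages per block, the starting-pca table and the helpers nand_erase_block() / nand_program_page() are all assumptions of the sketch.
    #define LOG_BLOCK_NUM   4                     /* assumed number of blocks reserved for logs */
    #define PAGES_PER_BLOCK 256                   /* assumed pages per block                    */

    extern void nand_erase_block(uint32_t start_pca);             /* hypothetical helpers */
    extern void nand_program_page(uint32_t pca, const void *buf);

    static const uint32_t log_block_pca[LOG_BLOCK_NUM] = {        /* maintained starting-pca table */
        0x1000, 0x1100, 0x1200, 0x1300
    };
    static uint32_t log_page_index;               /* running page index over all log blocks */

    /* Stores one flushed log buffer (treated here as one page) and returns its pca. */
    static uint32_t log_block_store(const void *buf)
    {
        uint32_t blk  = log_page_index / PAGES_PER_BLOCK;
        uint32_t page = log_page_index % PAGES_PER_BLOCK;
        if (page == 0)
            nand_erase_block(log_block_pca[blk]); /* reuse a block: erase before its first page */
        uint32_t pca = log_block_pca[blk] + page;
        nand_program_page(pca, buf);
        log_page_index = (log_page_index + 1) % (LOG_BLOCK_NUM * PAGES_PER_BLOCK); /* wrap around */
        return pca;
    }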
It should be noted that the size of the cache space allocated from the DDR cache area and the number of blocks used for storing logs in the NAND Flash need to be planned. The size of the cache space is determined by considering the capacity of the solid state disk's backup capacitors, the time spent on the other necessary operations when the solid state disk loses power abnormally, and the frequency at which logs are flushed to the NAND Flash. If the space is too large, the backup capacitors may not hold enough energy at an abnormal power loss to cover both the other necessary operations and the total time required to flush the content of the space to the NAND Flash; if the space is too small, then because the logs are flushed to the NAND Flash by polling and the flush may not be triggered in time, there is a risk of key logs being overwritten.
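As a purely illustrative sizing check (the figures below are assumptions, not measurements of any particular drive): if the backup capacitors can sustain the controller for roughly 30 ms after an abnormal power loss, the other mandatory power-loss operations take roughly 20 ms, and flushing one 64K cache space to the NAND Flash takes on the order of 1 ms, then two 64K cache spaces fit comfortably within the remaining budget, whereas cache spaces of several megabytes would not; conversely, a subspace much smaller than the volume of key logs a core can emit between two polling passes would risk overwriting.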
In some embodiments, when the stored key logs need to be exported, the most recently stored key logs are read from the corresponding pca of the NAND Flash into the DDR cache area, read out of the DDR cache area, and converted from binary to ASCII; the converted content is the readable key log. Exporting any or all key logs saved in history is also supported, for analyzing solid state disk errors that occurred in the past.
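For illustration, a sketch of the export-side conversion is shown below, reusing the earlier definitions; the 32-byte entry layout (a 32-bit event id followed by a 32-bit timestamp) is an assumption of the sketch, since the actual key-log format is defined by the firmware.
    #include <stdio.h>

    /* Converts one exported buffer from binary to readable ASCII text. */
    static void export_log_ascii(const uint8_t *ddr_buf, size_t len, FILE *out)
    {
        for (size_t off = 0; off + ENTRY_SIZE <= len; off += ENTRY_SIZE) {
            uint32_t id, ts;
            memcpy(&id, ddr_buf + off, sizeof(id));               /* assumed: event id  */
            memcpy(&ts, ddr_buf + off + sizeof(id), sizeof(ts));  /* assumed: timestamp */
            if (id == 0)
                continue;                                         /* skip unused slots  */
            fprintf(out, "core %u entry %u: id=0x%08x ts=%u\n",
                    (unsigned)(off / SUB_SIZE),
                    (unsigned)((off % SUB_SIZE) / ENTRY_SIZE),
                    (unsigned)id, (unsigned)ts);
        }
    }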
In some embodiments, the method further comprises:
in response to receiving a power-off instruction, saving the first flag bit corresponding to each subspace to a first preset position and clearing the first flag bit corresponding to each subspace;
flushing all logs in the cache space currently used for caching the logs to the memory, and acquiring the corresponding flush address;
saving the flush address to a second preset position;
and flushing the first flag bits stored at the first preset position and the flush address stored at the second preset position to a storage space.
Specifically, a first preset position current_position, used to store each first flag bit in log_position, and a second preset position current_pca, used to store the current pca address in the NAND Flash at which the logs are stored, may be allocated.
Thus, when the solid state disk is powered off and logs are currently being written into buffer1, if full_flag is 0, the value of log_position is stored to current_position and log_position is then cleared. Core T writes the content of buffer1 to the corresponding pca and stores the value of that pca to current_pca. After the write completes, if log_position is still 0, no further logs were written during the power-off process, so current_pca and current_position are flushed directly to the NAND Flash and the power-off is complete.
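The power-down path can be sketched as follows, for illustration only and building on the earlier definitions; nand_save_meta() is a hypothetical helper standing in for flushing current_position and current_pca themselves to the NAND Flash.
    extern void nand_save_meta(const uint32_t *positions, size_t n, uint32_t pca);  /* hypothetical */

    static uint32_t current_position[CORE_NUM];   /* first preset position */

    static int log_position_is_zero(void)
    {
        for (int i = 0; i < CORE_NUM; i++)
            if (log_position[i] != 0)
                return 0;
        return 1;
    }

    static void power_down_save(void)
    {
        /* save the first flag bits to the first preset position, then clear them */
        memcpy(current_position, log_position, sizeof(log_position));
        memset(log_position, 0, sizeof(log_position));

        /* flush the buffer currently used for caching and record its flush address */
        current_pca = nand_flush(cur_buffer, BUFFER_SIZE);

        /* if no further logs were written while flushing, persist the two preset positions */
        if (log_position_is_zero())
            nand_save_meta(current_position, CORE_NUM, current_pca);
    }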
In some embodiments, the method further comprises:
in response to the multiple cores still generating logs after the power-off instruction is received, caching the logs generated by the multiple cores into the corresponding subspaces in the cache space determined from the other cache address;
updating the first preset position according to the first flag bits corresponding to the subspaces in the cache space determined from the other cache address;
flushing all logs in the cache space determined from the other cache address to the memory, and acquiring the corresponding flush address again;
updating the flush address at the second preset position based on the newly acquired flush address;
and flushing the updated first flag bits at the first preset position and the updated flush address at the second preset position to the storage space.
Specifically, when the solid state disk is powered off and logs were being written into buffer1, if the 16 cores continue to write key logs, the key logs are written into buffer2, and log_position is then no longer 0. In that case the value of log_position is updated into current_position, the content of buffer2 is flushed to the corresponding pca, the value of that pca is updated into current_pca, and current_pca and current_position are then flushed to the NAND Flash, completing the power-off.
In some embodiments, the method further comprises:
in response to receiving a power-on instruction, acquiring from the memory the values of the first preset position and the second preset position that were flushed before power-down;
assigning the value of the first preset position to the first flag bits, and acquiring from the memory the logs flushed before power-down according to the flush address stored at the second preset position;
caching the acquired logs flushed before power-down into the subspace corresponding to each core in the cache space determined according to the current cache address;
and determining, based on the first flag bit corresponding to each subspace, the starting position at which logs continue to be cached in that subspace.
Specifically, at each subsequent power-on of the solid state disk, the values of current_pca and current_position are recovered from the NAND Flash, the value of current_position is assigned to log_position, and the key logs saved before the last power-down are then recovered from current_pca into buffer1. At this point the number of logs cached in each subspace of buffer1 matches the corresponding flag bit in log_position, so each core can determine, from its corresponding two bits in log_position, the starting position at which it should continue caching logs in its subspace.
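The power-on recovery path can be sketched as follows, for illustration only and reusing the earlier definitions; nand_load_meta() and nand_read() are hypothetical helpers that restore the two preset positions saved at the last power-down and read the flushed buffer back from that flush address.
    extern int  nand_load_meta(uint32_t *positions, size_t n, uint32_t *pca);  /* hypothetical */
    extern void nand_read(uint32_t pca, uint8_t *buf, size_t len);             /* hypothetical */

    static void power_on_restore(void)
    {
        uint32_t saved_pca;
        if (nand_load_meta(current_position, CORE_NUM, &saved_pca) != 0)
            return;                        /* first power-on: nothing to restore */

        /* each core resumes caching at the offset recorded in the first flag bits */
        memcpy(log_position, current_position, sizeof(log_position));

        /* read the logs flushed before the last power-down back into buffer1 */
        nand_read(saved_pca, buffer1, BUFFER_SIZE);
        cur_buffer = buffer1;
    }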
It should be noted that at the very first power-on the corresponding data does not need to be acquired. The first power-on is the first time the disk is powered on after the firmware has been burned into the solid state disk; at that point the NAND Flash contains no data, so neither the preset positions nor the logs need to be recovered from it. The first power-on performs the necessary initialization, which in this embodiment of the invention may include allocating the cache space addresses, deciding which blocks are used, allocating the flag bits, and so on.
In the scheme provided by the invention, key logs are not overwritten: they are flushed to the NAND Flash for long-term storage both while the solid state disk is running and when it is powered off. To prevent key logs from being overwritten and lost, as could happen with only one buffer, two buffers are used in rotation according to the ping-pong principle. At every power-on, the key logs saved at the last power-off are recovered and new logs continue to be written after them, so that a fully written log is saved to the NAND Flash whenever possible; this avoids the problem that, after many frequent power-offs, the most recently exported log contains too few entries to conveniently locate a fault.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a log saving system 400, as shown in Fig. 2, comprising:
a determining module 401 configured to, in response to multiple cores generating logs, determine a cache space for caching the logs according to a current cache address;
a caching module 402 configured to cache the logs generated by each core into the subspace corresponding to that core in the cache space;
an updating module 403 configured to update, for each subspace, a first flag bit used to identify the number of currently cached logs, and to compare the first flag bit with a threshold;
and a switching cycle module 404 configured to, in response to detecting that a subspace whose first flag bit reaches the threshold exists, switch the current cache address to another cache address, take the other cache address as the new current cache address, and return to the step of determining the cache space for caching the logs according to the current cache address.
In some embodiments, the system further comprises a detection module configured to:
set a second flag bit for identifying the cache space currently used for caching the logs;
in response to detecting that a subspace whose first flag bit reaches the threshold exists, acquire the logs to be flushed from the cache space corresponding to the second flag bit;
and flush the acquired logs to be flushed to a memory.
In some embodiments, the detection module is further configured to:
set a plurality of blocks for storing the logs to be flushed;
and use the blocks cyclically, in sequence, to store the logs to be flushed.
In some embodiments, the system further comprises a zeroing module configured to:
in response to detecting that a subspace whose first flag bit reaches the threshold exists, clear the first flag bit corresponding to each subspace.
In some embodiments, the system further comprises a power-down module configured to:
in response to receiving a power-off instruction, save the first flag bit corresponding to each subspace to a first preset position and clear the first flag bit corresponding to each subspace;
flush all logs in the cache space currently used for caching the logs to the memory, and acquire the corresponding flush address;
save the flush address to a second preset position;
and flush the first flag bits stored at the first preset position and the flush address stored at the second preset position to a storage space.
In some embodiments, the power-down module is further configured to:
in response to the multiple cores still generating logs after the power-off instruction is received, cache the logs generated by the multiple cores into the corresponding subspaces in the cache space determined from the other cache address;
update the first preset position according to the first flag bits corresponding to the subspaces in the cache space determined from the other cache address;
flush all logs in the cache space determined from the other cache address to the memory, and acquire the corresponding flush address again;
update the flush address at the second preset position based on the newly acquired flush address;
and flush the updated first flag bits at the first preset position and the updated flush address at the second preset position to the storage space.
In some embodiments, the system further comprises a power-on module configured to:
in response to receiving a power-on instruction, acquire from the memory the values of the first preset position and the second preset position that were flushed before power-down;
assign the value of the first preset position to the first flag bits, and acquire from the memory the logs flushed before power-down according to the flush address stored at the second preset position;
cache the acquired logs flushed before power-down into the subspace corresponding to each core in the cache space determined according to the current cache address;
and determine, based on the first flag bit corresponding to each subspace, the starting position at which logs continue to be cached in that subspace.
Based on the same inventive concept, according to another aspect of the present invention, as shown in Fig. 3, an embodiment of the present invention further provides a computer device 501, comprising:
at least one processor 520; and
a memory 510 storing a computer program 511 executable on the processor, wherein the processor 520 executes the program to perform the following steps:
S1, in response to multiple cores generating logs, determining a cache space for caching the logs according to a current cache address;
S2, caching the logs generated by each core into the subspace corresponding to that core in the cache space;
S3, updating, for each subspace, a first flag bit used to identify the number of currently cached logs, and comparing the first flag bit with a threshold;
S4, in response to detecting that a subspace whose first flag bit reaches the threshold exists, switching the current cache address to another cache address, taking the other cache address as the new current cache address, and returning to the step of determining the cache space for caching the logs according to the current cache address.
In some embodiments, the steps further comprise:
setting a second flag bit for identifying the cache space currently used for caching the logs;
in response to detecting that a subspace whose first flag bit reaches the threshold exists, acquiring the logs to be flushed from the cache space corresponding to the second flag bit;
and flushing the acquired logs to be flushed to a memory.
In some embodiments, flushing the acquired logs to be flushed to the memory further comprises:
setting a plurality of blocks for storing the logs to be flushed;
and using the blocks cyclically, in sequence, to store the logs to be flushed.
In some embodiments, the steps further comprise:
in response to detecting that a subspace whose first flag bit reaches the threshold exists, clearing the first flag bit corresponding to each subspace.
In some embodiments, the steps further comprise:
in response to receiving a power-off instruction, saving the first flag bit corresponding to each subspace to a first preset position and clearing the first flag bit corresponding to each subspace;
flushing all logs in the cache space currently used for caching the logs to the memory, and acquiring the corresponding flush address;
saving the flush address to a second preset position;
and flushing the first flag bits stored at the first preset position and the flush address stored at the second preset position to a storage space.
In some embodiments, the steps further comprise:
in response to the multiple cores still generating logs after the power-off instruction is received, caching the logs generated by the multiple cores into the corresponding subspaces in the cache space determined from the other cache address;
updating the first preset position according to the first flag bits corresponding to the subspaces in the cache space determined from the other cache address;
flushing all logs in the cache space determined from the other cache address to the memory, and acquiring the corresponding flush address again;
updating the flush address at the second preset position based on the newly acquired flush address;
and flushing the updated first flag bits at the first preset position and the updated flush address at the second preset position to the storage space.
In some embodiments, the steps further comprise:
in response to receiving a power-on instruction, acquiring from the memory the values of the first preset position and the second preset position that were flushed before power-down;
assigning the value of the first preset position to the first flag bits, and acquiring from the memory the logs flushed before power-down according to the flush address stored at the second preset position;
caching the acquired logs flushed before power-down into the subspace corresponding to each core in the cache space determined according to the current cache address;
and determining, based on the first flag bit corresponding to each subspace, the starting position at which logs continue to be cached in that subspace.
Based on the same inventive concept, according to another aspect of the present invention, as shown in Fig. 4, an embodiment of the present invention further provides a computer-readable storage medium 601, where the computer-readable storage medium 601 stores computer program instructions 610 which, when executed by a processor, perform the following steps:
S1, in response to multiple cores generating logs, determining a cache space for caching the logs according to a current cache address;
S2, caching the logs generated by each core into the subspace corresponding to that core in the cache space;
S3, updating, for each subspace, a first flag bit used to identify the number of currently cached logs, and comparing the first flag bit with a threshold;
S4, in response to detecting that a subspace whose first flag bit reaches the threshold exists, switching the current cache address to another cache address, taking the other cache address as the new current cache address, and returning to the step of determining the cache space for caching the logs according to the current cache address.
In some embodiments, the steps further comprise:
setting a second flag bit for identifying the cache space currently used for caching the logs;
in response to detecting that a subspace whose first flag bit reaches the threshold exists, acquiring the logs to be flushed from the cache space corresponding to the second flag bit;
and flushing the acquired logs to be flushed to a memory.
In some embodiments, flushing the acquired logs to be flushed to the memory further comprises:
setting a plurality of blocks for storing the logs to be flushed;
and using the blocks cyclically, in sequence, to store the logs to be flushed.
In some embodiments, the steps further comprise:
in response to detecting that a subspace whose first flag bit reaches the threshold exists, clearing the first flag bit corresponding to each subspace.
In some embodiments, the steps further comprise:
in response to receiving a power-off instruction, saving the first flag bit corresponding to each subspace to a first preset position and clearing the first flag bit corresponding to each subspace;
flushing all logs in the cache space currently used for caching the logs to the memory, and acquiring the corresponding flush address;
saving the flush address to a second preset position;
and flushing the first flag bits stored at the first preset position and the flush address stored at the second preset position to a storage space.
In some embodiments, the steps further comprise:
in response to the multiple cores still generating logs after the power-off instruction is received, caching the logs generated by the multiple cores into the corresponding subspaces in the cache space determined from the other cache address;
updating the first preset position according to the first flag bits corresponding to the subspaces in the cache space determined from the other cache address;
flushing all logs in the cache space determined from the other cache address to the memory, and acquiring the corresponding flush address again;
updating the flush address at the second preset position based on the newly acquired flush address;
and flushing the updated first flag bits at the first preset position and the updated flush address at the second preset position to the storage space.
In some embodiments, the steps further comprise:
in response to receiving a power-on instruction, acquiring from the memory the values of the first preset position and the second preset position that were flushed before power-down;
assigning the value of the first preset position to the first flag bits, and acquiring from the memory the logs flushed before power-down according to the flush address stored at the second preset position;
caching the acquired logs flushed before power-down into the subspace corresponding to each core in the cache space determined according to the current cache address;
and determining, based on the first flag bit corresponding to each subspace, the starting position at which logs continue to be cached in that subspace.
Finally, it should be noted that, as understood by those skilled in the art, all or part of the processes in the methods of the embodiments described above may be implemented by instructing relevant hardware by a computer program, and the program may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above.
Further, it should be appreciated that the computer-readable storage media (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the above embodiments of the present invention are merely for description, and do not represent the advantages or disadvantages of the embodiments.
It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples; within the spirit of the embodiments of the invention, technical features of the above embodiments or of different embodiments may also be combined, and many other variations of the different aspects of the embodiments exist as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like made within the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (10)

1. A log saving method, comprising the following steps:
in response to multiple cores generating logs, determining a cache space for caching the logs according to a current cache address;
caching the logs generated by each core into the subspace corresponding to that core in the cache space;
updating, for each subspace, a first flag bit used to identify the number of currently cached logs, and comparing the first flag bit with a threshold;
and in response to detecting that a subspace whose first flag bit reaches the threshold exists, switching the current cache address to another cache address, taking the other cache address as the new current cache address, and returning to the step of determining the cache space for caching the logs according to the current cache address.
2. The method of claim 1, further comprising:
setting a second flag bit for identifying the cache space currently used for caching the logs;
in response to detecting that a subspace whose first flag bit reaches the threshold exists, acquiring the logs to be flushed from the cache space corresponding to the second flag bit;
and flushing the acquired logs to be flushed to a memory.
3. The method of claim 2, wherein flushing the acquired logs to be flushed to the memory further comprises:
setting a plurality of blocks for storing the logs to be flushed;
and using the blocks cyclically, in sequence, to store the logs to be flushed.
4. The method of claim 1, further comprising:
in response to detecting that a subspace whose first flag bit reaches the threshold exists, clearing the first flag bit corresponding to each subspace.
5. The method of claim 1, further comprising:
in response to receiving a power-off instruction, saving the first flag bit corresponding to each subspace to a first preset position and clearing the first flag bit corresponding to each subspace;
flushing all logs in the cache space currently used for caching the logs to the memory, and acquiring the corresponding flush address;
saving the flush address to a second preset position;
and flushing the first flag bits stored at the first preset position and the flush address stored at the second preset position to a storage space.
6. The method of claim 5, further comprising:
in response to the multiple cores still generating logs after the power-off instruction is received, caching the logs generated by the multiple cores into the corresponding subspaces in the cache space determined from the other cache address;
updating the first preset position according to the first flag bits corresponding to the subspaces in the cache space determined from the other cache address;
flushing all logs in the cache space determined from the other cache address to the memory, and acquiring the corresponding flush address again;
updating the flush address at the second preset position based on the newly acquired flush address;
and flushing the updated first flag bits at the first preset position and the updated flush address at the second preset position to the storage space.
7. The method of claim 5 or 6, further comprising:
in response to receiving a power-on instruction, acquiring from the memory the values of the first preset position and the second preset position that were flushed before power-down;
assigning the value of the first preset position to the first flag bits, and acquiring from the memory the logs flushed before power-down according to the flush address stored at the second preset position;
caching the acquired logs flushed before power-down into the subspace corresponding to each core in the cache space determined according to the current cache address;
and determining, based on the first flag bit corresponding to each subspace, the starting position at which logs continue to be cached in that subspace.
8. A log saving system, comprising:
a determining module configured to, in response to multiple cores generating logs, determine a cache space for caching the logs according to a current cache address;
a caching module configured to cache the logs generated by each core into the subspace corresponding to that core in the cache space;
an updating module configured to update, for each subspace, a first flag bit used to identify the number of currently cached logs, and to compare the first flag bit with a threshold;
and a switching cycle module configured to, in response to detecting that a subspace whose first flag bit reaches the threshold exists, switch the current cache address to another cache address, take the other cache address as the new current cache address, and return to the step of determining the cache space for caching the logs according to the current cache address.
9. A computer device, comprising:
at least one processor; and
memory storing a computer program operable on the processor, wherein the processor executes the program to perform the steps of the method according to any of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, is adapted to carry out the steps of the method according to any one of claims 1 to 7.
CN202110657626.XA 2021-06-13 2021-06-13 Log saving method, system, device and medium Active CN113342276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110657626.XA CN113342276B (en) 2021-06-13 2021-06-13 Log saving method, system, device and medium

Publications (2)

Publication Number Publication Date
CN113342276A CN113342276A (en) 2021-09-03
CN113342276B (en) 2023-01-06

Family

ID=77476862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110657626.XA Active CN113342276B (en) 2021-06-13 2021-06-13 Log saving method, system, device and medium

Country Status (1)

Country Link
CN (1) CN113342276B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114020710B (en) * 2021-10-09 2024-01-16 苏州浪潮智能科技有限公司 Log storage method and device and electronic equipment
CN116719485B (en) * 2023-08-09 2023-11-03 苏州浪潮智能科技有限公司 FPGA-based data reading and writing method, reading and writing unit and FPGA

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108959526A (en) * 2018-06-28 2018-12-07 郑州云海信息技术有限公司 Blog management method and log management apparatus
CN109491974A (en) * 2018-10-12 2019-03-19 上海金大师网络科技有限公司 Asynchronous blog management method and system and computer readable storage medium
CN111858531A (en) * 2020-07-14 2020-10-30 苏州浪潮智能科技有限公司 Log storage method and system based on multi-core hard disk and related components

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant