CN109976905B - Memory management method and device and electronic equipment


Info

Publication number
CN109976905B
Authority
CN
China
Prior art keywords: data, memory, module, writing, frequency
Prior art date
Legal status
Active
Application number
CN201910159040.3A
Other languages
Chinese (zh)
Other versions
CN109976905A (en)
Inventor
于连宇
李栋
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201910159040.3A
Publication of CN109976905A
Application granted
Publication of CN109976905B
Active legal status (current)
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 - Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016 - Allocation of resources to service a request, the resource being the memory
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 - Indexing scheme relating to G06F9/00
    • G06F 2209/50 - Indexing scheme relating to G06F9/50
    • G06F 2209/5019 - Workload prediction

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present disclosure provides a memory management method, which includes obtaining data characteristics of a plurality of sets of data, predicting a usage frequency of the plurality of sets of data based on the data characteristics, and writing part of the data in the plurality of sets of data into a memory based on the usage frequency of the plurality of sets of data. The disclosure also provides a memory management device, a server cluster and a computer readable storage medium.

Description

Memory management method and device and electronic equipment
Technical Field
The disclosure relates to a memory management method and device and an electronic device.
Background
Query speed is a core concern for computing systems; in data query, memory access improves performance by orders of magnitude compared with disk reads and writes. Currently, memory management generally adopts either a runtime loading policy or a static pre-caching policy. With runtime loading, memory is requested on demand when a query arrives, and when memory is insufficient data must spill to disk, so the speed of this reactive approach cannot be guaranteed. The static pre-caching policy writes specific data into memory in advance, which is relatively wasteful of memory and unsuitable for applications with tight memory or large data volumes.
Disclosure of Invention
An aspect of the present disclosure provides a memory management method, including obtaining data characteristics of a plurality of sets of data, predicting a frequency of use of the plurality of sets of data based on the data characteristics, and writing a part of the plurality of sets of data into a memory based on the frequency of use of the plurality of sets of data.
Optionally, the method further includes training a machine learning model based on data features and query behavior, and predicting the frequency of use of the plurality of sets of data based on the data features includes: predicting, based on the machine learning model, a frequency of use of a plurality of sets of data corresponding to the data features.
Optionally, the method further includes determining, based on the usage frequency of the multiple sets of data, a portion of data to be written into the memory as a set of data to be cached, and removing data that does not belong to the set of data to be cached from the memory.
Optionally, the writing, based on the usage frequency of the multiple sets of data, of part of the data in the multiple sets of data into a memory includes writing into the memory the part of the data in the data set to be cached that is not already stored in the memory.
Optionally, the writing, based on the usage frequency of the multiple sets of data, of part of the data in the multiple sets of data into a memory includes obtaining a planned capacity, determining, based on the usage frequency of the multiple sets of data and the planned capacity, part of the data to be written into the memory as a data set to be cached, and writing part or all of the data in the data set to be cached into the memory.
Optionally, in a case that the data includes data in a data table, the partial data includes data in a data slice of the data table.
Another aspect of the disclosure provides a memory management apparatus including an obtaining module, a prediction module, and a writing module. The obtaining module is configured to obtain data characteristics of multiple sets of data. The prediction module is configured to predict the usage frequency of the multiple sets of data based on the data characteristics. The writing module is configured to write part of the data in the multiple sets of data into a memory based on the usage frequency of the multiple sets of data.
Optionally, the apparatus further comprises a determining module and a removing module. The determining module is configured to determine, based on the usage frequency of the multiple sets of data, part of the data to be written into the memory as a data set to be cached. The removing module is configured to remove data that does not belong to the data set to be cached from the memory.
Optionally, the writing module is configured to write a part of data, which is not stored in the memory, in the data set to be cached into the memory.
Optionally, the apparatus further comprises a training module for training the machine learning model based on the data features and the query behavior. The prediction module is configured to predict a frequency of use of a plurality of sets of data corresponding to the data features based on the machine learning model.
Optionally, the writing module includes an obtaining submodule, a determining submodule, and a writing submodule. The obtaining submodule is configured to obtain a planned capacity. The determining submodule is configured to determine, based on the usage frequency of the multiple sets of data and the planned capacity, part of the data to be written into the memory as a data set to be cached. The writing submodule is configured to write part or all of the data in the data set to be cached into the memory.
Optionally, in a case that the data includes data in a data table, the partial data includes data in a data slice of the data table.
Another aspect of the disclosure provides a server cluster including at least one processor and at least one memory. The memory has stored thereon a computer program which, when executed by the processor, causes the processor to perform the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of a memory management method according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a memory management method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a memory management method according to another embodiment of the present disclosure;
FIG. 4 is a flow chart schematically illustrating writing a portion of the plurality of sets of data to memory based on a frequency of use of the plurality of sets of data, in accordance with an embodiment of the present disclosure;
fig. 5 schematically shows a block diagram of a memory management device according to an embodiment of the present disclosure;
fig. 6 schematically shows a block diagram of a memory management device according to another embodiment of the present disclosure;
FIG. 7 schematically illustrates a block diagram of a write module according to an embodiment of the disclosure; and
FIG. 8 schematically shows a block diagram of a server cluster according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
The embodiment of the disclosure provides a memory management method, which includes obtaining data characteristics of multiple groups of data, predicting the use frequency of the multiple groups of data based on the data characteristics, and writing part of the data in the multiple groups of data into a memory based on the use frequency of the multiple groups of data.
Fig. 1 schematically illustrates an application scenario of a memory management method according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a scenario in which the embodiments of the present disclosure may be applied, provided to help those skilled in the art understand the technical content of the present disclosure; it does not mean that the embodiments of the present disclosure cannot be applied to other devices, systems, environments, or scenarios.
As shown in fig. 1, the distributed system 100 may include a plurality of nodes 110 and a network 120. Network 120 serves as a medium for providing communication links between multiple nodes 110. Network 120 may include various connection types, such as wired, wireless communication links, and so forth. The node 110 may be, for example, a dedicated computing or storage device, or a terminal device of multiple users, etc. The distributed system 100 may implement the memory management method of the embodiments of the present disclosure.
In the distributed system 100, commonly used data is stored in distributed memory, and in-memory reads and writes can effectively improve computational efficiency. The disclosed embodiments focus on how to use memory reasonably and speed up data queries. A distributed system generally plans a data cache region, which occupies memory of a considerable scale.
Fig. 2 schematically shows a flow chart of a memory management method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S230.
In operation S210, data characteristics of a plurality of sets of data are obtained.
According to embodiments of the present disclosure, data is typically stored in a database in the form of tables. The plurality of sets of data may be a plurality of data tables. The data characteristics may be, for example, a data field repetition rate, a null rate in the data table (a ratio of null data to all data in the table), a range interval data rate, and the like.
The above may be referred to as static characteristics of the data. The embodiments of the present disclosure may also obtain dynamic characteristics of data, for example, by processing the query log, query time, query subject, query condition, and the like of each data query may be obtained. The disclosed embodiments may integrate the static and dynamic characteristics described above to predict the frequency of use of the data.
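As a concrete illustration (not part of the claimed subject matter), the following is a minimal sketch of extracting such static and dynamic characteristics, assuming each data table and the parsed query log are held as pandas DataFrames; the helper names and log columns (table, subject) are illustrative assumptions.

import pandas as pd

def static_features(table: pd.DataFrame) -> dict:
    # Static characteristics of one data table, per the examples above.
    n_cells = max(table.size, 1)
    n_rows = max(len(table), 1)
    repetition = [1 - table[c].nunique() / n_rows for c in table.columns]
    return {
        # average share of repeated values per field (field repetition rate)
        "field_repetition_rate": sum(repetition) / max(len(repetition), 1),
        # ratio of null cells to all cells in the table (null rate)
        "null_rate": float(table.isna().sum().sum()) / n_cells,
    }

def dynamic_features(query_log: pd.DataFrame, table_name: str) -> dict:
    # Dynamic characteristics aggregated from the query log for one table.
    hits = query_log[query_log["table"] == table_name]
    return {"query_count": len(hits),
            "distinct_subjects": hits["subject"].nunique()}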
In operation S220, a frequency of use of the plurality of sets of data is predicted based on the data characteristics. The frequency of use of the data is also referred to as the hotness or coldness of the data.
According to the embodiment of the disclosure, a machine learning model can be trained based on data features and query behavior, and the usage frequency of multiple sets of data corresponding to the data features can be predicted based on the machine learning model. The machine learning model may include, for example, a decision tree, a support vector machine, an artificial neural network, and so forth. By collecting data over a certain period of time, processing it into training data, taking the data features as input and the query behavior as the output label, and training the machine learning model, the usage frequency of the data over a subsequent period of time can be predicted once new data features are obtained.
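As a minimal sketch of this train-then-predict loop, the snippet below uses scikit-learn's decision tree regressor (one of the model families mentioned above); the feature vectors and frequency labels are made-up placeholders standing in for the collected training data.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Each row: features of one data set (e.g. repetition rate, null rate,
# recent query count); each label: observed usage frequency in the window.
X_train = np.array([[0.10, 0.02, 120], [0.80, 0.30, 3], [0.25, 0.05, 45]])
y_train = np.array([118.0, 2.0, 40.0])

model = DecisionTreeRegressor(max_depth=4)
model.fit(X_train, y_train)

# With fresh data features, predict the usage frequency for the next period.
X_new = np.array([[0.15, 0.03, 90]])
predicted_frequency = model.predict(X_new)[0]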
In operation S230, a portion of the data in the plurality of sets of data is written into a memory based on the frequency of use of the plurality of sets of data. For example, a threshold value may be set, and when the frequency of use is greater than the threshold value, the data may be written in the memory in advance. Alternatively, all the use frequencies may be clustered, and the data in a set with a higher use frequency may be written in the memory in advance. Or, all the data can be sorted according to the use frequency and sequentially written into the memory until the space occupied by the written data reaches a preset value.
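The first strategy above can be sketched as follows; the threshold value and the dictionary shapes are assumptions for illustration only.

def select_by_threshold(predicted_freq: dict, threshold: float) -> list:
    # Pre-write any data set whose predicted usage frequency exceeds
    # the configured threshold.
    return [name for name, freq in predicted_freq.items() if freq > threshold]

# e.g. select_by_threshold({"orders": 118.0, "audit_log": 2.0}, 50.0) -> ["orders"]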
According to an embodiment of the present disclosure, in a case where the data includes data in a data table, the partial data includes data in a data slice of the data table. For example, for a data table, a portion of the data table with a higher frequency of use may be further or simultaneously determined, and only the portion of the data may be written to the memory instead of the entire data table.
Fig. 3 schematically shows a flow chart of a memory management method according to another embodiment of the present disclosure.
As shown in fig. 3, the method further includes operations S310 and S320 on the basis of the foregoing embodiment.
In operation S310, based on the usage frequency of the multiple sets of data, a portion of data to be written into the memory is determined as a set of data to be cached.
In operation S320, data not belonging to the data set to be cached is removed from the memory.
In this way, once the predicted usage frequency is obtained, data that is not commonly used is removed from the memory, the memory space is managed dynamically, and the utilization of system resources is improved.
According to the embodiment of the present disclosure, as shown in fig. 3, operation S230 may be specifically implemented as operation S330, in which the part of the data in the data set to be cached that is not yet stored in the memory is written into the memory. At the time of writing, it may be checked whether the data to be stored already exists in the memory; if so, the writing is not repeated.
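Operations S310 to S330 together amount to a reconcile step; a minimal sketch, assuming the cache and the target set are keyed dictionaries, might look like this:

def refresh_cache(cache: dict, to_cache: dict) -> None:
    # S320: remove data that does not belong to the data set to be cached.
    for key in list(cache):
        if key not in to_cache:
            del cache[key]
    # S330: write only the part of the set not already stored in memory,
    # checking first so that writes are never repeated.
    for key, value in to_cache.items():
        if key not in cache:
            cache[key] = value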
Fig. 4 schematically shows a flowchart of writing part of the data in the plurality of data groups into the memory based on the usage frequency of the plurality of data groups according to an embodiment of the present disclosure.
As shown in fig. 4, the method includes operations S410 to S430.
In operation S410, a planned capacity is obtained. According to the embodiment of the present disclosure, the planned capacity is the memory space reserved for the memory management method of the embodiments of the present disclosure. It may be the whole memory space or a part of it, and may be defined as a proportion or as a fixed size.
According to an embodiment of the present disclosure, the planned capacity may be determined dynamically according to the load of the server. For example, the planned capacity may be increased when the server load is high, to cache more data with a higher usage frequency, and decreased when the server load is low, to leave the server sufficient available memory when handling various tasks that are difficult to predict.
In operation S420, part of the data to be written into the memory is determined as a data set to be cached, based on the usage frequency of the multiple sets of data and the planned capacity. According to the embodiment of the disclosure, all data can be sorted by usage frequency and written into the memory in order until the space occupied by the written data reaches the planned capacity.
In operation S430, part or all of the data in the data set to be cached is written into a memory.
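A sketch of operations S410 to S430 under the assumptions above; the load thresholds and memory shares are illustrative rules of thumb, and write_to_memory is a hypothetical helper, not an API of the disclosure.

def planned_capacity(total_bytes: int, load: float) -> int:
    # S410: derive the planned capacity; per the text, reserve a larger
    # share of memory when server load is high (assumed rule of thumb).
    share = 0.5 if load > 0.7 else 0.2
    return int(total_bytes * share)

def plan_and_write(freq: dict, size_of: dict, total_bytes: int, load: float,
                   write_to_memory) -> list:
    capacity = planned_capacity(total_bytes, load)
    # S420: sort by usage frequency and fill until the planned capacity.
    to_cache, used = [], 0
    for name in sorted(freq, key=freq.get, reverse=True):
        if used + size_of[name] <= capacity:
            to_cache.append(name)
            used += size_of[name]
    # S430: write the selected data into memory.
    for name in to_cache:
        write_to_memory(name)
    return to_cache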
The method of the embodiment of the disclosure can be periodically executed to realize dynamic management of the memory.
In this method, data is preloaded by predicting its usage frequency, which accelerates queries and reduces the possibility of memory waste or overflow.
Based on the same inventive concept, the present disclosure also provides a memory management device, and the memory management device according to the embodiment of the present disclosure is described below with reference to fig. 5 to 7.
Fig. 5 schematically shows a block diagram of a memory management device 500 according to an embodiment of the present disclosure.
As shown in fig. 5, the memory management device 500 includes an obtaining module 510, a predicting module 520, and a writing module 530. The memory management device 500 may perform the various methods described above.
The obtaining module 510, for example, performs the operation S210 described with reference to fig. 2 above, for obtaining data characteristics of multiple sets of data.
The prediction module 520, for example, performs operation S220 described with reference to fig. 2 above, for predicting the frequency of use of the plurality of sets of data based on the data characteristics.
The writing module 530, for example, performs the operation S230 described with reference to fig. 2 above, and is configured to write a part of the data in the plurality of sets of data into the memory based on the usage frequency of the plurality of sets of data.
According to an embodiment of the present disclosure, in a case where the data includes data in a data table, the partial data includes data in a data slice of the data table.
Fig. 6 schematically shows a block diagram of a memory management device 600 according to another embodiment of the present disclosure.
As shown in fig. 6, the memory management apparatus 600 further includes a determining module 610 and a removing module 620 based on the illustration in fig. 5.
The determining module 610, for example, performs operation S310 described with reference to fig. 3 above, and is configured to determine, based on the usage frequency of the multiple sets of data, a portion of data to be written into the memory as a set of data to be cached.
The removing module 620, for example, executes the operation S320 described with reference to fig. 3 above, to remove the data not belonging to the data set to be cached from the memory.
According to an embodiment of the present disclosure, the writing module 530, for example, performs the operation S330 described with reference to fig. 3 above, to write the part of the data, which is not stored in the memory, in the data set to be cached into the memory.
According to an embodiment of the present disclosure, the apparatus 500 or 600 may further include a training module for training the machine learning model based on the data features and the query behavior. The prediction module 520 is configured to predict a frequency of use of a plurality of sets of data corresponding to the data features based on the machine learning model.
FIG. 7 schematically shows a block diagram of a write module 530 according to an embodiment of the disclosure.
As shown in fig. 7, the write module 530 includes an obtaining sub-module 710, a determining sub-module 720, and a writing sub-module 730.
The obtaining sub-module 710, for example, performs the operation S410 described with reference to fig. 4 above, for obtaining the planned capacity.
The determining submodule 720, for example, executes the operation S420 described with reference to fig. 4 above, and is configured to determine, based on the usage frequency of the multiple sets of data and the planning capacity, a portion of data to be written into the memory as a set of data to be cached.
The writing sub-module 730, for example, executes the operation S430 described with reference to fig. 4 above, and is configured to write part or all of the data in the data set to be cached into the memory.
In this way, data is preloaded by predicting its usage frequency, which accelerates queries and reduces the possibility of memory waste or overflow.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the obtaining module 510, the predicting module 520, the writing module 530, the determining module 610, the removing module 620, the training module, the obtaining sub-module 710, the determining sub-module 720, and the writing sub-module 730 may be combined in one module to be implemented, or any one of the modules may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the disclosure, at least one of the obtaining module 510, the predicting module 520, the writing module 530, the determining module 610, the removing module 620, the training module, the obtaining sub-module 710, the determining sub-module 720, and the writing sub-module 730 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware by any other reasonable manner of integrating or packaging a circuit, or may be implemented in any one of three implementations of software, hardware, and firmware, or in a suitable combination of any of them. Alternatively, at least one of the obtaining module 510, the predicting module 520, the writing module 530, the determining module 610, the removing module 620, the training module, the obtaining sub-module 710, the determining sub-module 720 and the writing sub-module 730 may be at least partially implemented as a computer program module which, when executed, may perform a corresponding function.
Fig. 8 schematically shows a block diagram of a server cluster 800 according to an embodiment of the disclosure. The computer system illustrated in FIG. 8 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 8, a server cluster 800 includes at least one processor 810 and at least one memory, including a computer-readable storage medium 820. The server cluster 800 may perform a method according to an embodiment of the disclosure.
In particular, processor 810 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 810 may also include on-board memory for caching purposes. Processor 810 may be a single processing unit or a plurality of processing units for performing different actions of a method flow according to embodiments of the disclosure.
Computer-readable storage medium 820, for example, may be a non-volatile computer-readable storage medium, specific examples including, but not limited to: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and so on.
The computer-readable storage medium 820 may include a computer program 821, which computer program 821 may include code/computer-executable instructions that, when executed by the processor 810, cause the processor 810 to perform a method according to an embodiment of the present disclosure, or any variation thereof.
The computer program 821 may be configured with, for example, computer program code comprising computer program modules. For example, in an example embodiment, code in the computer program 821 may include one or more program modules, including, for example, module 821A, module 821B, and so on. It should be noted that the division and number of modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation, and when these program modules are executed by the processor 810, the processor 810 may execute the method according to the embodiments of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the obtaining module 510, the predicting module 520, the writing module 530, the determining module 610, the removing module 620, the training module, the obtaining sub-module 710, the determining sub-module 720 and the writing sub-module 730 may be implemented as a computer program module as described with reference to fig. 8, which, when executed by the processor 810, may implement the corresponding operations described above.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (9)

1. A memory management method includes:
acquiring data characteristics and query behaviors of a plurality of groups of data;
taking the data characteristics as input and the query behavior as an output label, and training a machine learning model;
predicting, based on the machine learning model, a frequency of use of a plurality of sets of data corresponding to the data features;
and writing partial data in the multiple groups of data into a memory based on the use frequency of the multiple groups of data.
2. The method of claim 1, further comprising:
determining partial data to be written into the memory as a data set to be cached based on the use frequency of the multiple groups of data;
and removing the data which do not belong to the data set to be cached from the memory.
3. The method of claim 2, wherein the writing a portion of the plurality of sets of data to memory based on the frequency of use of the plurality of sets of data comprises:
and writing part of data which is not stored in the memory in the data set to be cached into the memory.
4. The method of claim 1, wherein the writing partial data of the plurality of sets of data to memory based on the frequency of use of the plurality of sets of data comprises:
obtaining a planned capacity;
determining partial data to be written into the memory as a data set to be cached based on the use frequency of the multiple groups of data and the planning capacity;
and writing part or all of the data in the data set to be cached into a memory.
5. The method of claim 1, wherein, in a case where the data comprises data in a data table, the partial data comprises data in a data slice of the data table.
6. A memory management device, comprising:
the acquisition module is used for acquiring data characteristics and query behaviors of a plurality of groups of data;
the prediction module is used for training a machine learning model by taking the data characteristics as input and the query behavior as output labels; predicting, based on the machine learning model, a frequency of use of a plurality of sets of data corresponding to the data features;
and the writing module is used for writing part of data in the multiple groups of data into a memory based on the use frequency of the multiple groups of data.
7. The apparatus of claim 6, further comprising:
the determining module is used for determining partial data to be written into the memory as a data set to be cached based on the use frequency of the multiple groups of data;
a removing module, configured to remove, from the memory, data that does not belong to the data set to be cached,
the writing module is used for writing part of data which is not stored in the memory in the data set to be cached into the memory.
8. A cluster of servers, comprising:
at least one processor;
at least one memory having stored thereon a computer program which, when executed by the processor, causes the processor to:
acquiring data characteristics and query behaviors of a plurality of groups of data;
taking the data characteristics as input and the query behavior as an output label, and training a machine learning model;
predicting, based on the machine learning model, a frequency of use of a plurality of sets of data corresponding to the data features;
and writing partial data in the multiple groups of data into a memory based on the use frequency of the multiple groups of data.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to:
acquiring data characteristics and query behaviors of a plurality of groups of data;
taking the data characteristics as input and the query behavior as an output label, and training a machine learning model;
predicting, based on the machine learning model, a frequency of use of a plurality of sets of data corresponding to the data features;
and writing partial data in the multiple groups of data into a memory based on the use frequency of the multiple groups of data.
CN201910159040.3A 2019-03-01 2019-03-01 Memory management method and device and electronic equipment Active CN109976905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910159040.3A CN109976905B (en) 2019-03-01 2019-03-01 Memory management method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN109976905A CN109976905A (en) 2019-07-05
CN109976905B true CN109976905B (en) 2021-10-22

Family

ID=67077784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910159040.3A Active CN109976905B (en) 2019-03-01 2019-03-01 Memory management method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109976905B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110764714B (en) * 2019-11-06 2021-07-27 深圳大普微电子科技有限公司 Data processing method, device and equipment and readable storage medium
CN113127515A (en) * 2021-04-12 2021-07-16 中国电力科学研究院有限公司 Power grid-oriented regulation and control data caching method and device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1831824A (en) * 2006-04-04 2006-09-13 浙江大学 Buffer data base data organization method
CN102388374A (en) * 2011-09-28 2012-03-21 华为技术有限公司 Method and device for data storage
CN103995869A (en) * 2014-05-20 2014-08-20 东北大学 Data-caching method based on Apriori algorithm
CN104834675A (en) * 2015-04-02 2015-08-12 浪潮集团有限公司 Query performance optimization method based on user behavior analysis
CN108241725A (en) * 2017-05-24 2018-07-03 新华三大数据技术有限公司 A kind of data hot statistics system and method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8832371B2 (en) * 2011-04-04 2014-09-09 Hitachi, Ltd. Storage system with multiple flash memory packages and data control method therefor
CN105512184B (en) * 2015-11-25 2019-06-21 国云科技股份有限公司 A method of improving space and time efficiency of the application system in relational database
CN108241690A (en) * 2016-12-26 2018-07-03 北京搜狗信息服务有限公司 A kind of data processing method and device, a kind of device for data processing
CN106844032A (en) * 2017-01-23 2017-06-13 努比亚技术有限公司 The storage processing method and device of a kind of terminal applies
CN107239570A (en) * 2017-06-27 2017-10-10 联想(北京)有限公司 Data processing method and server cluster
CN107563514A (en) * 2017-09-25 2018-01-09 郑州云海信息技术有限公司 A kind of method and device of prediction data access frequency
CN107783801B (en) * 2017-11-06 2021-03-12 Oppo广东移动通信有限公司 Application program prediction model establishing and preloading method, device, medium and terminal
CN108241583A (en) * 2017-11-17 2018-07-03 平安科技(深圳)有限公司 Data processing method, application server and the computer readable storage medium that wages calculate
CN107797871A (en) * 2017-11-30 2018-03-13 努比亚技术有限公司 EMS memory occupation method for releasing resource, mobile terminal and computer-readable recording medium


Also Published As

Publication number Publication date
CN109976905A (en) 2019-07-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant