CN114564506A - Cache scheme updating method, device, equipment and storage medium - Google Patents

Cache scheme updating method, device, equipment and storage medium

Info

Publication number
CN114564506A
CN114564506A (application CN202210233075.9A)
Authority
CN
China
Prior art keywords
cache
strategy
system state
cache strategy
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210233075.9A
Other languages
Chinese (zh)
Inventor
胡停雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Puhui Enterprise Management Co Ltd
Original Assignee
Ping An Puhui Enterprise Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Puhui Enterprise Management Co Ltd filed Critical Ping An Puhui Enterprise Management Co Ltd
Priority to CN202210233075.9A priority Critical patent/CN114564506A/en
Publication of CN114564506A publication Critical patent/CN114564506A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/547Remote procedure calls [RPC]; Web services
    • G06F9/548Object oriented; Remote method invocation [RMI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application relates to the technical field of software development, and discloses a method, a device, equipment and a storage medium for updating a cache scheme. The system state of a system executing a cache strategy is queried, and the system state feature set corresponding to that state is input into a cache strategy updating model to obtain a predicted updated cache strategy, so that a corresponding cache strategy can be formulated according to the system state. When the updated cache strategy is the same as the initial cache strategy being executed, the initial cache strategy is kept; when the updated cache strategy differs from the initial cache strategy being executed, the updated cache strategy is executed instead. The cache strategy is thus updated flexibly according to the state of the system, improving system stability.

Description

Cache scheme updating method, device, equipment and storage medium
Technical Field
The present application relates to the field of software development technologies, and in particular, to a method and an apparatus for updating a cache scheme, a computer device, and a storage medium.
Background
The caching scheme is one of the main factors affecting the stability of a project in project design; it is used to mitigate problems such as highly concurrent database access, thereby improving the project's access rate. Common caching schemes generally use a multi-level caching mode and prevent cache breakdown by setting a fixed, peak-staggered validity period. This improves the access rate, but such schemes occupy more memory, and a fixed caching period cannot respond in time to the various service scenarios the system state faces, which reduces the stability of the system.
Disclosure of Invention
The application provides a cache scheme updating method and apparatus, computer equipment, and a storage medium, addressing the prior-art problem that the cache validity period in a caching scheme cannot be changed in time when the system state spans multiple service scenarios, which reduces the stability of the system.
In a first aspect, an embodiment of the present application provides a method for updating a cache scheme, including:
inquiring the system state of the cache policy system to obtain a system state feature set;
carrying out cache strategy prediction on a plurality of system state characteristics in the system state characteristic set through a classification algorithm in a trained cache strategy updating model to obtain an updated cache strategy;
if the updated cache strategy is different from the pre-stored initial cache strategy, modifying the executed initial cache strategy into the updated cache strategy;
and if the updated cache strategy is the same as the pre-stored initial cache strategy, the initial cache strategy is kept to be executed.
In a second aspect, an embodiment of the present application further provides an updating apparatus for a cache scheme, including:
the system state acquisition module is used for inquiring the system state of the cache strategy system to obtain a system state feature set;
the cache strategy prediction module is used for carrying out cache strategy prediction on a plurality of system state characteristics in the system state characteristic set through a classification algorithm in a trained cache strategy updating model to obtain an updated cache strategy;
the cache strategy execution module is used for modifying the executed initial cache strategy into the updated cache strategy if the updated cache strategy is different from the pre-stored initial cache strategy; and if the updated cache strategy is the same as the pre-stored initial cache strategy, the initial cache strategy is kept to be executed.
In a third aspect, an embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method for updating the cache scheme when executing the computer program.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps of the method for updating the cache scheme are implemented.
According to the cache scheme updating method and apparatus, computer equipment, and storage medium described above, the system state of a system executing a cache strategy is queried, and the system state feature set corresponding to that state is input into a cache strategy updating model to obtain a predicted updated cache strategy, so that a corresponding cache strategy can be formulated according to the system state. When the updated cache strategy is the same as the initial cache strategy being executed, the initial cache strategy is kept; when it differs, the updated cache strategy is executed instead, so that the cache strategy is updated flexibly according to the state of the system and the stability of the system is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments of the present application will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic application environment diagram of an update method of a caching scheme according to an embodiment of the present application;
fig. 2 is a flowchart illustrating an implementation of a method for updating a cache scheme according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating steps S51-S53 of a method for updating a cache scheme according to another embodiment of the present application;
fig. 4 is a schematic structural diagram of an updating apparatus of a caching scheme according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a computer device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The updating method of the caching scheme provided by the embodiment of the application can be applied to an application environment shown in fig. 1. As shown in fig. 1, a client (computer device) communicates with a server through a network. The client (computer device) includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, cameras, and portable wearable devices. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
The updating method of the caching scheme provided in this embodiment may be executed by the server: for example, the client sends the system state feature set to the server, the server executes the updating method based on the feature set to obtain the updated caching policy predicted by the cache strategy updating model, and finally sends the updated caching policy to the client.
In some scenarios other than fig. 1, the client may also execute the updating method of the caching scheme: it obtains the updated caching policy predicted by the cache strategy updating model by executing the method directly on its own system state feature set, and then sends the updated caching policy to the server for storage.
The application includes but is not limited to using a Redis (Remote Dictionary Server) service, which is used to describe this embodiment. Redis stores data under keys; a key may be given a validity period configuration, where the validity period is the length of time the cached data is retained on the cache server. Configuring a validity period prevents rarely used keys from existing long-term and occupying memory resources. Redis stores each key that has a validity period configured in a separate dictionary and periodically traverses this dictionary to delete expired keys.
It is understood that Redis is a key-value database in which each key corresponds to one value; the supported value types include string, list, set, zset, and hash. Redis supports various ordering modes and, to ensure efficiency, serves as a cache server by holding data in memory; it periodically writes updated data to disk or appends modification operations to a log file, and implements master-slave synchronization on this basis.
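The expiry behavior described above can be sketched in pure Python (a minimal illustration of the mechanism, not Redis itself; the class and method names are invented for this sketch):

```python
import time

class ExpiringCache:
    """Minimal sketch of Redis-style key expiry: keys with a validity period
    are tracked in a separate 'expires' dictionary and swept periodically."""

    def __init__(self):
        self.data = {}      # key -> value
        self.expires = {}   # key -> absolute expiry timestamp (only keys with a TTL)

    def set(self, key, value, ttl=None):
        self.data[key] = value
        if ttl is not None:
            self.expires[key] = time.monotonic() + ttl

    def get(self, key):
        # lazy expiry check on access
        exp = self.expires.get(key)
        if exp is not None and time.monotonic() >= exp:
            self.delete(key)
            return None
        return self.data.get(key)

    def delete(self, key):
        self.data.pop(key, None)
        self.expires.pop(key, None)

    def sweep(self):
        """Periodic pass over the expires dictionary, deleting expired keys."""
        now = time.monotonic()
        for key in [k for k, exp in self.expires.items() if exp <= now]:
            self.delete(key)

cache = ExpiringCache()
cache.set("hot_key", "value", ttl=0.05)   # expires after 50 ms
cache.set("stable_key", "value")          # no validity period: never swept
time.sleep(0.06)
cache.sweep()
print("hot_key" in cache.data)            # False: swept after expiry
```

Keys without a validity period stay out of the `expires` dictionary entirely, mirroring how Redis only traverses keys that actually have an expiry configured.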
Fig. 2 is a flowchart illustrating an implementation of a method for updating a caching scheme according to an embodiment of the present application. As shown in fig. 2, an updating method of a cache scheme is provided, which mainly includes the following steps S10-S40:
and step S10, inquiring the system state of the cache policy system to obtain a system state feature set.
In step S10, a caching policy generally decides how the cached data of the current system's caching system is handled, and the caching policy includes the validity period configuration of the cached data. For the cache policy system executing the cache policy, a system state query is performed to obtain a plurality of system states, so that the cache strategy can be adjusted dynamically according to the system's states, improving the stability and operating efficiency of the system.
The method comprises discretizing a plurality of system state items of the obtained system state into corresponding system state features and combining these features into a system state feature set; discretizing the system state in this way makes the feature set usable for decision prediction of the cache strategy.
In an embodiment, the system states include, but are not limited to, data such as the CPU usage state, memory usage state, traffic state, and access volume state; these system states are collected and converted into corresponding system state features through one-hot encoding, yielding the system state feature set used for prediction of the caching scheme.
In one embodiment, the system state items are converted one by one into corresponding discrete values through a one-hot algorithm to obtain the system state feature set. An example of converting a system state item into a system state feature is as follows. Suppose a system state item is the "disk occupancy state", whose candidate values are ["1%-20%", "21%-40%", "41%-60%", "61%-80%", "81%-100%"]. Following the principle of encoding N states with an N-bit state register, each bit has only two states, so N = 5, and the values are processed as: (1%-20% → 10000), (21%-40% → 01000), (41%-60% → 00100), (61%-80% → 00010), (81%-100% → 00001). The disk type further includes solid state disk and mechanical hard disk; again encoding N states with an N-bit register, N = 2: (solid state disk → 10), (mechanical disk → 01). When the "disk occupancy state" system state item is ["solid state disk, 81%-100%"], the complete discretization result is the system state feature [1, 0, 0, 0, 0, 0, 1], which concatenates the code for the solid state disk (10) with that for 81%-100% (00001).
In another embodiment, the system state items in the system state item set are processed into corresponding character vectors by a word segmenter, and the character vectors are taken as the corresponding system state features. The word segmenter includes, but is not limited to, Ansj, word2vec, ICTCLAS, and HanLP. In this embodiment the word2vec algorithm is used as the word segmenter; word2vec converts words into word embedding vectors. Each character or word in the state text is converted into a vector, and the character embedding vectors and word embedding vectors obtained from the conversion are spliced into a vector text that serves as the corresponding system state feature. Here the vector conversion covers converting characters into character embedding vectors and words into word embedding vectors, and the vector text is a vector array comprising the character embedding vectors and/or word embedding vectors.
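The splicing of token embeddings into a vector text can be illustrated as follows (a toy sketch: the embedding table is invented, whereas in the embodiment the vectors would come from a trained word2vec model):

```python
# Toy embedding lookup standing in for a trained word2vec model; the
# vectors below are illustrative, not learned.
embeddings = {
    "CPU":   [0.2, 0.7],
    "usage": [0.5, 0.1],
    "high":  [0.9, 0.3],
}

def embed(text, dim=2):
    """Split text into tokens and splice each token's embedding vector
    into one flat vector text."""
    vector = []
    for token in text.split():
        vector.extend(embeddings.get(token, [0.0] * dim))  # zero vector for unknown tokens
    return vector

print(embed("CPU usage high"))  # [0.2, 0.7, 0.5, 0.1, 0.9, 0.3]
```

The resulting flat array plays the role of the "vector text" described above: one system state feature assembled from per-token vectors.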
It will be appreciated that the one-hot algorithm, also known as one-hot encoding, uses an N-bit status register to encode N states; each state has its own independent register bit, and only one bit is active at any time. For each feature with m possible values, one-hot encoding produces m binary features. These features are mutually exclusive: only one is active at a time.
Step S20: perform cache strategy prediction on a plurality of system state features in the system state feature set through the classification algorithm in the trained cache strategy updating model to obtain an updated cache strategy.
In step S20, based on the current system state, the collected system state feature set is input into the pre-trained cache strategy updating model for cache prediction, so that the cache policy is adjusted according to the system state to cope with access pressure in different scenarios; using a cache strategy predicted by the model improves the accuracy and timeliness of the strategy. The cache strategy updating model uses a classification algorithm and must be trained on a large number of cache scheme samples; it performs classification prediction over the plurality of system state features in the system state feature set, and the predicted result serves as the updated cache scheme.
In one embodiment, a classification algorithm is used: the system state features are matched to their corresponding state feature weights, then input into a decision tree in the classification algorithm in order of state feature weight to predict a cache strategy, and the prediction result is taken as the updated cache strategy. The classification algorithm includes, but is not limited to, a decision tree classification algorithm, a neural network classification algorithm, a Support Vector Machine (SVM) classification algorithm, a random forest algorithm, a Logistic Regression (LR) algorithm, and an XGBoost algorithm. In this embodiment, the decision tree algorithm inputs the system state features in order of their state feature weights; each system state feature is judged according to the node transmission order of the decision tree until a leaf node is reached, and the prediction result of the initial cache strategy is obtained from the classification decision corresponding to that leaf node.
In an embodiment, the decision tree algorithm is a method for approximating a discrete-valued function: the data is first processed, an inductive algorithm generates readable rules and a decision tree, and the tree is then traversed from top to bottom, each judgment on a system state feature branching to the next node according to the feature's node category; when a node is no longer a judgment condition but a leaf node, that leaf node is the classification decision. An easy-to-understand example: suppose the retained system state features are {CPU state, disk state, memory state, serial port occupancy state}, and judging them in order of the state feature weights trained by the decision tree using information entropy yields the weight ordering {CPU occupancy state > memory occupancy state > serial port occupancy state > disk occupancy state}. First, the CPU occupancy state with the largest weight, e.g. {CPU occupancy state = 61%-80%}, is the root node, where the node categories include {CPU occupancy state ∈ "1%-20%", "21%-40%", "41%-60%", "61%-80%", "81%-100%"}. Next comes the node {memory occupancy state = 32G}, with node categories {memory occupancy state ∈ 16G, 32G, 64G}, then the node {serial port occupancy state}, the remaining system state features being judged in node transmission order (not detailed further), until a leaf node ends the traversal. The corresponding cache strategy prediction result is then obtained from the classification decision of that leaf node and serves as the updated cache strategy; for example, when memory occupancy is high and the number of access requests is large, part of the preset cache data for that situation is screened and the validity period is refreshed and extended to relieve the pressure of the larger access volume.
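The top-down walk through the tree can be sketched as follows (the tree structure, feature names, and leaf decisions are illustrative stand-ins, not a trained model):

```python
# Hand-rolled decision tree mirroring the walk-through above: branch on the
# highest-weight feature first, then the next, until a leaf is reached.
tree = {
    "feature": "cpu_occupancy",
    "branches": {
        "61%-80%": {
            "feature": "memory_size",
            "branches": {
                "32G": {"leaf": "extend validity period of hot cache data"},
                "16G": {"leaf": "keep current caching policy"},
            },
        },
        "1%-20%": {"leaf": "shorten validity period to free memory"},
    },
}

def predict(node, features):
    """Walk from the root, branching on each feature's value, until a leaf."""
    while "leaf" not in node:
        value = features[node["feature"]]
        node = node["branches"][value]
    return node["leaf"]

state = {"cpu_occupancy": "61%-80%", "memory_size": "32G"}
print(predict(tree, state))  # extend validity period of hot cache data
```

Each internal node corresponds to a judgment condition on one system state feature; the leaf reached at the end is the classification decision used as the cache strategy prediction.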
It will be appreciated that the decision tree algorithm is a typical classification method for approximating discrete-valued functions: the data is first processed, an inductive algorithm generates readable rules and a decision tree, and the decision tree is then used to analyze new data.
In another embodiment, the classification algorithm may be a random forest algorithm: a random forest is made up of many decision trees with no associations between the different trees. When a classification task is performed on an updated input sample, each decision tree in the forest judges and classifies it as in the previous embodiment, each tree yields a classification result, and the random forest takes the class that receives the most votes among these results as the final result.
It can be understood that the random forest algorithm is a supervised learning algorithm: a random forest builds a plurality of decision trees to form a forest and makes its decision by having the trees vote, which can effectively improve classification accuracy on new samples. On the basis of Bagging integration (random selection of samples) with decision trees as base learners, the random forest further introduces random attribute selection into the training of each tree. Specifically, when selecting the partition attribute, a traditional decision tree selects the optimal attribute from the attribute set of the current node; in a random forest, for each node of a base decision tree, a subset containing several attributes is randomly selected from the node's attribute set, and the optimal attribute for partitioning is then selected from that subset.
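The majority-vote step can be sketched as follows (the toy "trees" here are plain callables standing in for trained base decision trees; the class labels are invented for illustration):

```python
from collections import Counter

def forest_predict(trees, features, predict_one):
    """Each tree votes; the class with the most votes is the forest's decision."""
    votes = [predict_one(tree, features) for tree in trees]
    return Counter(votes).most_common(1)[0][0]

# Three trivial stand-in "trees", each mapping features to a class label:
trees = [
    lambda f: "extend validity period" if f["cpu"] == "high" else "keep policy",
    lambda f: "extend validity period",
    lambda f: "keep policy",
]

decision = forest_predict(trees, {"cpu": "high"}, lambda t, f: t(f))
print(decision)  # extend validity period
```

With `cpu = "high"` two of the three trees vote "extend validity period", so that class wins the vote, matching the majority rule described above.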
Step S30, if the updated caching policy is different from the pre-stored initial caching policy, modifying the executed initial caching policy into the updated caching policy.
In step S30, the system compares the updated caching policy with the pre-stored initial caching policy; if they differ, the executed initial caching policy is modified into the updated caching policy. By modifying the caching policy in time according to the system state, the cached data in the cache area can be used more reasonably and the system's energy consumption reduced.
In one embodiment, the plurality of validity period configurations in the initial caching policy are replaced by the plurality of validity period configurations in the updated caching policy, where each validity period configuration specifies how long the corresponding cache data is retained.
In an embodiment, under the caching policy, the server responds to access requests sent by a plurality of clients. Since the data accesses sent by multiple clients include accesses to the same data, when the server accesses a piece of data for the first time it stores the accessed data on its cache server, and subsequent accesses from a terminal obtain the data directly from the cache area. However, because the storage space of the cache area is limited, validity periods are configured for it according to the caching scheme, and data whose validity period has expired is cleared.
In an embodiment, described in terms of the Redis (Remote Dictionary Server) service: because the validity period configuration in the Redis service can only be applied by modifying the key value and cannot be modified directly with an alter command, the key values in the initial caching policy are replaced by modifying the corresponding key value data according to the validity period configurations in the updated caching policy. The validity period configurations are thus refreshed so that the cache data in the cache area can be adjusted flexibly.
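The comparison-and-replace logic of steps S30/S40 can be sketched as follows (a pure-Python stand-in: with a real Redis deployment each dictionary write would correspond to rewriting the key with its new validity period; the function name, keys, and TTL values are all invented for illustration):

```python
def apply_updated_policy(cache_ttls, initial_policy, updated_policy):
    """If the updated policy differs from the initial one, refresh each key's
    validity-period configuration; otherwise keep executing the initial policy."""
    if updated_policy == initial_policy:
        return initial_policy          # identical: no additional processing needed
    for key, ttl in updated_policy.items():
        cache_ttls[key] = ttl          # refresh this key's validity period
    return updated_policy

cache_ttls = {"user:1001": 300, "product:42": 600}   # seconds
initial = {"user:1001": 300, "product:42": 600}
updated = {"user:1001": 900, "product:42": 600}      # longer TTL under heavy load
active = apply_updated_policy(cache_ttls, initial, updated)
print(cache_ttls["user:1001"])  # 900
```

When the predicted policy matches the stored one, the function returns without touching the cache, mirroring step S40's "keep executing the initial policy" branch.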
Step S40: if the updated caching policy is the same as the pre-stored initial caching policy, keep executing the initial caching policy.
In step S40, the system's updated caching policy is compared with the pre-stored initial caching policy; if they are the same, the pre-stored initial caching policy is kept in execution and no additional processing is needed. This achieves dynamic adjustment of the caching strategy, improves the stability of system operation, makes more reasonable use of the cached data in the cache area, and reduces the system's energy consumption.
Fig. 3 shows another embodiment of the present application: before the step in S20 of predicting the caching policy over the plurality of system state features in the system state feature set through the classification algorithm in the trained cache strategy updating model to obtain the updated caching policy, the method further includes steps S51-S53, specifically:
step S51, training a cache strategy updating model through a plurality of cache scheme samples;
step S52, obtaining a plurality of cache scheme samples, and converting sample state items in the cache scheme samples into sample state characteristics to obtain a plurality of sample state characteristic sets;
step S53, inputting the sample state feature set into the cache strategy updating model, and the cache strategy updating model utilizes a decision tree in a decision tree algorithm to carry out judgment one by one according to a plurality of sample state features, thereby training the cache strategy prediction of the sample state feature set.
In step S51, the caching strategy updating model is trained by a plurality of caching scheme samples, where the caching scheme samples are the caching strategies adjusted according to different system states, so as to be used for training the caching strategy updating model.
In step S52, the method in step S10 of this embodiment is used to convert the sample state items in the obtained multiple cache scheme samples into sample state features, and obtain multiple sample state feature sets, so as to be used for learning the cache policy update model.
In step S53, the sample state feature set is input into the cache policy update model, and the cache policy update model performs a one-by-one judgment according to the plurality of sample state features by using a decision tree in a decision tree algorithm, so as to train the cache policy prediction on the sample state feature set.
In an embodiment, the cache strategy updating model is trained on a plurality of cache scheme samples, and the decision tree algorithm performs classification learning of service scenarios from the sample state feature sets to ensure the accuracy of cache strategy prediction. The decision tree algorithm samples the sample state feature set with replacement: each time, N samples are randomly drawn with replacement from the original N training samples, yielding a plurality of sample sets. From the candidate features, m features are randomly extracted as the candidate features of the current decision; a decision tree is constructed with the sample state feature set as training samples, the tree measures the sample state features in the plurality of sample state feature sets to obtain the weight of each sample state feature, and the trained decision tree produces predictions from the input sample state features in the feature set.
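The sampling-with-replacement and random feature selection described above can be sketched as follows (a minimal illustration using Python's random module; the data rows and feature names are invented):

```python
import random

def bootstrap_samples(dataset, n_sets):
    """Sample with replacement: each bootstrap set has len(dataset) rows
    drawn from the original N training samples."""
    return [random.choices(dataset, k=len(dataset)) for _ in range(n_sets)]

def candidate_features(all_features, m):
    """Randomly pick m features as the split candidates for one decision."""
    return random.sample(all_features, m)

random.seed(0)  # deterministic for the demonstration
data = [("cpu=high", "extend"), ("cpu=low", "keep"), ("mem=full", "extend")]
sets = bootstrap_samples(data, n_sets=5)
feats = candidate_features(["cpu", "memory", "traffic", "access"], m=2)
print(len(sets), len(sets[0]), len(feats))  # 5 3 2
```

Each bootstrap set feeds one base decision tree, and each split considers only the randomly drawn feature subset, which is what gives the forest its diversity.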
In another embodiment, a random forest algorithm performs classification learning of service scenarios from the sample state feature sets: after the required number of decision trees is obtained, the random forest method votes on the trees' outputs and takes the class with the most votes as the forest's decision, thereby determining the service scenario corresponding to the sample state feature set.
In an embodiment, an updating apparatus of a cache scheme is provided, which corresponds one-to-one to the updating method of the cache scheme in the foregoing embodiment. As shown in fig. 4, the updating apparatus of the caching scheme includes a system state obtaining module 11, a caching policy predicting module 12, and a caching policy executing module 13; each functional module is described in detail as follows:
the system state obtaining module 11 is configured to query the system state of the cache strategy system to obtain a system state feature set;
the cache strategy prediction module 12 is configured to perform cache strategy prediction on a plurality of system state features in the system state feature set through a classification algorithm in a trained cache strategy updating model to obtain an updated cache strategy;
the cache policy executing module 13 is configured to: if the updated cache strategy differs from a pre-stored initial cache strategy, change the initial cache strategy being executed to the updated cache strategy; and if the updated cache strategy is the same as the pre-stored initial cache strategy, continue executing the initial cache strategy.
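The three modules can be sketched in code as follows. All names, the state representation, and the policy labels are hypothetical illustrations, not taken from the patent; the trained model is stood in by a simple callable.

```python
# Hypothetical sketch of the three-module apparatus; the module names,
# feature keys, and policy labels are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class CacheSchemeUpdater:
    predict_policy: Callable[[Dict[str, int]], str]  # trained prediction model
    current_policy: str                              # initial cache strategy

    def query_system_state(self) -> Dict[str, int]:
        # System state obtaining module: in practice these values would be
        # read from the cache strategy system and discretized into features.
        return {"request_rate": 3, "hit_rate": 1, "memory_pressure": 0}

    def update(self) -> str:
        # Cache strategy prediction module.
        state = self.query_system_state()
        updated = self.predict_policy(state)
        # Cache strategy executing module: switch only if the prediction
        # differs from the policy currently being executed; otherwise keep it.
        if updated != self.current_policy:
            self.current_policy = updated
        return self.current_policy


updater = CacheSchemeUpdater(
    predict_policy=lambda s: "short-ttl" if s["request_rate"] > 2 else "long-ttl",
    current_policy="long-ttl",
)
new_policy = updater.update()  # → "short-ttl" for the sample state above
```

The conditional swap mirrors the executing module: an unchanged prediction leaves the running policy untouched, avoiding needless reconfiguration of the cache.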
For specific limitations of the apparatus for updating a caching scheme, reference may be made to the limitations of the method for updating a caching scheme above; details are not repeated here. All or part of the modules in the apparatus may be implemented in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a client or a server; its internal structure may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a readable storage medium and an internal memory. The readable storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and computer program in the readable storage medium. The network interface of the computer device communicates with external terminals through a network connection. The computer program, when executed by the processor, implements a method for updating a caching scheme.
In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the processor implements the method for updating the caching scheme in the above embodiments.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the updating method of the caching scheme in the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing related hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated by way of example; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be equivalently replaced; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (10)

1. A method for updating a cache scheme, comprising:
inquiring the system state of the cache policy system to obtain a system state feature set;
carrying out cache strategy prediction on a plurality of system state characteristics in the system state characteristic set through a classification algorithm in a trained cache strategy updating model to obtain an updated cache strategy;
if the updated cache strategy is different from the pre-stored initial cache strategy, modifying the executed initial cache strategy into the updated cache strategy;
and if the updated cache strategy is the same as the pre-stored initial cache strategy, continuing to execute the initial cache strategy.
2. The method for updating a caching scheme according to claim 1, wherein the querying a system state of a caching policy system to obtain a system state feature set comprises:
discretizing a plurality of system state items of the system state to obtain a plurality of corresponding system state features;
and combining the plurality of system state features to obtain the system state feature set.
3. The method for updating a caching scheme according to claim 1, wherein the predicting the caching strategy of the plurality of system state features in the system state feature set by using the classification algorithm in the trained caching strategy updating model to obtain the updated caching strategy comprises:
matching the system state features with corresponding state feature weights;
and inputting the system state characteristics into a decision tree in the classification algorithm according to the state characteristic weight to predict a cache strategy, and taking a cache strategy prediction result as the updated cache strategy.
4. The method for updating a cache scheme according to claim 3, wherein the step of inputting the system state features into the decision tree in the classification algorithm according to the state feature weights to perform cache policy prediction, and using a cache policy prediction result as the updated cache policy comprises:
sequentially inputting the system state features according to the state feature weights by using the decision tree algorithm;
evaluating each system state feature according to the node traversal order of the decision tree to obtain a leaf node;
and obtaining the cache strategy prediction result according to the classification decision corresponding to the leaf node.
5. The method according to claim 1, wherein if the updated caching policy is different from a pre-stored initial caching policy, modifying the executing initial caching policy into the updated caching policy includes:
replacing the plurality of validity period configurations in the initial caching policy with the plurality of validity period configurations in the updated caching policy;
wherein each validity period configuration maintains the cache duration of the corresponding cached data.
6. The updating method of the caching scheme as recited in claim 5, wherein the replacing the plurality of validity period configurations in the initial caching policy with the plurality of validity period configurations in the updated caching policy comprises:
and replacing the key values in the initial cache strategy according to the validity period configurations in the updated cache strategy.
7. The method for updating a caching scheme according to claim 1, wherein before the step of predicting the caching policy for the plurality of system state features in the system state feature set by using the classification algorithm in the trained caching policy update model to obtain the updated caching policy, the method further comprises:
training a caching strategy updating model through a plurality of caching scheme samples;
obtaining a plurality of cache scheme samples, and converting sample state items in the cache scheme samples into sample state characteristics to obtain a plurality of sample state characteristic sets;
and inputting the sample state feature sets into the cache strategy updating model, wherein the cache strategy updating model uses a decision tree of a decision tree algorithm to evaluate each sample state feature set feature by feature according to the plurality of sample state features, so as to train the cache strategy prediction for the sample state feature sets.
8. An apparatus for updating a caching scheme, comprising:
the system state acquisition module is used for inquiring the system state of the cache strategy system to obtain a system state feature set;
the cache strategy prediction module is used for carrying out cache strategy prediction on a plurality of system state characteristics in the system state characteristic set through a classification algorithm in a trained cache strategy updating model to obtain an updated cache strategy;
the cache strategy execution module is configured to: if the updated cache strategy is different from the pre-stored initial cache strategy, modify the executed initial cache strategy into the updated cache strategy; and if the updated cache strategy is the same as the pre-stored initial cache strategy, continue executing the initial cache strategy.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of updating the caching scheme as claimed in any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which computer program, when being executed by a processor, carries out a method for updating a caching scheme according to any one of claims 1 to 7.
CN202210233075.9A 2022-03-09 2022-03-09 Cache scheme updating method, device, equipment and storage medium Pending CN114564506A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210233075.9A CN114564506A (en) 2022-03-09 2022-03-09 Cache scheme updating method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114564506A true CN114564506A (en) 2022-05-31

Family

ID=81717255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210233075.9A Pending CN114564506A (en) 2022-03-09 2022-03-09 Cache scheme updating method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114564506A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114866462A (en) * 2022-07-06 2022-08-05 广东新宏基信息技术有限公司 Internet of things communication routing method and system for smart campus



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination