CN116974780A - Data caching method, device, software program, equipment and storage medium - Google Patents

Data caching method, device, software program, equipment and storage medium

Info

Publication number
CN116974780A
CN116974780A
Authority
CN
China
Prior art keywords
data
cache
transfer protocol
caching
hypertext transfer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310138233.7A
Other languages
Chinese (zh)
Inventor
邓志豪
周成宇
张胜利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Network Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tencent Network Information Technology Co Ltd filed Critical Shenzhen Tencent Network Information Technology Co Ltd
Priority to CN202310138233.7A priority Critical patent/CN116974780A/en
Publication of CN116974780A publication Critical patent/CN116974780A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44521Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G06F9/44526Plug-ins; Add-ons

Abstract

The invention provides a data caching method, apparatus, software program, electronic device, and storage medium. The method includes: determining a caching policy for the data caching process according to configuration information; extracting a cache key from the caching policy; determining a hit result for the hypertext transfer protocol request information; when the hit result indicates a hit, sending the cached information corresponding to the hypertext transfer protocol request information; when the hit result indicates a miss, obtaining a response status code and response data from the back-end server; and caching the data of the hypertext transfer protocol request information according to the caching policy, the response status code, and the response data. In this way, the caching policy formed from the configuration information of the extension plug-in can improve the cache hit rate, reduce the load on the back-end server, and speed up the access server's handling of user access requests, thereby increasing the throughput of the access server.

Description

Data caching method, device, software program, equipment and storage medium
Technical Field
The present invention relates to data caching technology, and in particular, to a data caching method, apparatus, system, software program, electronic device, and storage medium.
Background
In the related art, data caching is a common technical means: interaction with a cache server can be implemented in multiple programming languages to read data, set data, manage expiration times, and so on. In a multi-language development environment there are multiple code frameworks, and each framework must use its corresponding language to maintain an independent cache client, which leads to code duplication, repeated development, language-compatibility problems, and the like. Moreover, in this development mode, serial logic such as generating a cache key (CacheKey), fetching the cached value, judging whether it exists, returning it directly if it does, requesting the external data interface if it does not, and then setting the cache, is embedded in the service code, so repeated development becomes coupled with the service logic.
This repeated development and coupling with service logic lowers development efficiency, makes the cache-client code hard to manage and maintain, and easily causes language-incompatibility problems that affect the use of the cache client.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data caching method, apparatus, software program, electronic device, and storage medium. The provided data caching method is independent of any programming language; it can improve the cache hit rate, reduce the load on the back-end server, and speed up the access server's handling of user access requests, thereby increasing the throughput of the access server. At the same time, the configuration information of the extension plug-in can be flexibly adjusted and iteratively updated according to the needs of different users, which effectively improves the maintenance efficiency of the service APP and improves the reliability and extensibility of data caching.
The technical scheme of the embodiment of the invention is realized as follows:
The embodiment of the invention provides a data caching method, which includes the following steps:
a network proxy server process obtains configuration information of an extension plug-in of the data caching process;
in response to received hypertext transfer protocol request information, a caching policy for the data caching process is determined;
a cache key is extracted from the caching policy;
a hit result for the hypertext transfer protocol request information is determined according to the cache key;
when the hit result indicates that the hypertext transfer protocol request information missed, a response status code and response data are obtained from the back-end server;
and the data of the hypertext transfer protocol request information is cached according to the caching policy, the response status code, and the response data.
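The steps above (obtain the extension plug-in configuration, derive a caching policy, check for a hit, fall back to the back-end server on a miss, and cache according to the status code) can be sketched as follows. This is a minimal illustration in Python, not the patent's implementation; all names (`CacheProxy`, `key_fn`, `cacheable_codes`) are hypothetical.

```python
from typing import Callable, Tuple

# Minimal sketch of the claimed flow; a plain dict stands in for the
# memory server / remote dictionary service.
class CacheProxy:
    def __init__(self, policy: dict, backend: Callable[[str], Tuple[int, str]]):
        self.policy = policy      # caching policy built from the plug-in config
        self.backend = backend    # back-end server call: path -> (status code, data)
        self.store: dict = {}     # stand-in cache

    def handle(self, path: str) -> str:
        key = self.policy["key_fn"](path)           # extract the cache key
        if key in self.store:                       # hit: answer from the cache
            return self.store[key]
        status, body = self.backend(path)           # miss: ask the back-end server
        if status in self.policy["cacheable_codes"]:
            self.store[key] = body                  # cache per policy + status code
        return body
```

With a policy that caches only 200 responses, a second request for the same path is served from the cache and never reaches the back end, while error responses are never stored.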
The embodiment of the invention also provides a data caching apparatus, which includes:
an information transmission module, used by the network proxy server process to obtain the configuration information of the extension plug-in of the data caching process;
an information processing module, used for determining a caching policy for the data caching process in response to received hypertext transfer protocol request information;
the information processing module is used for extracting a cache key from the caching policy;
the information processing module is used for determining a hit result for the hypertext transfer protocol request information according to the cache key;
the information processing module is used for obtaining a response status code and response data from the back-end server when the hit result indicates that the hypertext transfer protocol request information missed;
and the information processing module is used for caching the data of the hypertext transfer protocol request information according to the caching policy, the response status code, and the response data.
In the above-described arrangement,
the information processing module is used by the network proxy server process to obtain the extension plug-in of the data caching process;
and the information processing module is used for parsing the configuration file of the extension plug-in to obtain the hypertext transfer protocol request path, the cache-key generation mode, the cache expiration time, the cache mode, and the encryption mode.
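As a rough sketch of parsing such a configuration file into the policy fields just named, the snippet below assumes a hypothetical JSON schema; the field names are invented for illustration and are not the patent's actual format.

```python
import json

# Hypothetical plug-in configuration: request path, cache-key generation
# mode, expiration time, cache mode, and encryption mode.
RAW_CONFIG = """
{
  "request_path": "/api/user/info",
  "key_source": "url_param",
  "key_fields": ["uid", "version"],
  "expire_seconds": 300,
  "cache_mode": "redis",
  "encrypt_mode": "none"
}
"""

def parse_plugin_config(raw: str) -> dict:
    cfg = json.loads(raw)
    required = ("request_path", "key_source", "expire_seconds",
                "cache_mode", "encrypt_mode")
    missing = [f for f in required if f not in cfg]
    if missing:
        raise ValueError("plugin config missing fields: %s" % missing)
    return cfg
```

Validating the required fields at parse time means a misconfigured plug-in fails early rather than silently disabling caching.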
In the above-described arrangement,
the information processing module is used for parsing the received hypertext transfer protocol request information to obtain its message header information, message body information, and request parameters;
and the information processing module is used for generating the caching policy of the data caching process according to the message header information, the message body information, and the request parameters.
In the above-described arrangement,
the information processing module is used for extracting corresponding fields from the uniform resource locator parameters to generate the cache key; or
the information processing module is used for extracting corresponding fields from the parameters of the message body information to generate the cache key; or
the information processing module is used for extracting corresponding fields from the parameters of the message header information to generate the cache key.
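The three alternative key sources above (uniform resource locator parameters, message-body fields, message-header fields) can be sketched as one dispatch function. The mode names and the MD5 digest are illustrative choices, not taken from the patent.

```python
import hashlib
from typing import Optional
from urllib.parse import urlparse, parse_qs

def make_cache_key(source: str, fields: list,
                   url: str = "",
                   body: Optional[dict] = None,
                   headers: Optional[dict] = None) -> str:
    """Build a cache key from the configured fields of one request part."""
    if source == "url_param":
        params = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    elif source == "body":
        params = body or {}
    elif source == "header":
        params = headers or {}
    else:
        raise ValueError("unknown key source: " + source)
    material = "&".join("%s=%s" % (f, params.get(f, "")) for f in fields)
    return hashlib.md5(material.encode()).hexdigest()  # compact, stable key
```

Requests that differ only in fields outside the configured list map to the same key, which is what makes the hit-rate improvement possible.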
In the above-described arrangement,
the information processing module is used for receiving, through the memory server, the data-interaction result sent by the remote dictionary service system;
the information processing module is used for parsing the data-interaction result through the memory server and determining the hit result of the hypertext transfer protocol request information;
the information processing module is used for forwarding the hypertext transfer protocol request information to the interface of the back-end server when the hit result indicates that the hypertext transfer protocol request information missed;
and the information processing module is used for sending the hit result of the hypertext transfer protocol request information to the initiator of the hypertext transfer protocol request information when the hypertext transfer protocol request information hits.
In the above-described arrangement,
the information processing module is used for determining, through the caching policy, a memory server and a corresponding cache storage process when the response status code and the response data meet the caching condition;
the information processing module is used for sending the data of the hypertext transfer protocol request information to the cache storage process through the memory server;
the information processing module is used for configuring the cache expiration time for the cache storage process;
and the information processing module is used for caching the data of the hypertext transfer protocol request information through the cache storage process.
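The cache-storage step above (write the response data, then configure the cache expiration time) can be sketched with a timestamped dict standing in for the memory server and the cache storage process; the class and method names are illustrative only.

```python
import time

class TTLStore:
    """Toy cache store: each entry carries its own expiration deadline."""
    def __init__(self):
        self._data = {}   # key -> (value, deadline)

    def set(self, key, value, expire_seconds):
        # write the data and configure its expiration time in one step
        self._data[key] = (value, time.monotonic() + expire_seconds)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, deadline = item
        if time.monotonic() > deadline:   # expired: evict lazily
            del self._data[key]
            return None
        return value
```

A real deployment would instead issue a Redis `SET` with an expiry option (or `SETEX`) so that eviction is handled by the remote dictionary service itself rather than by the reader.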
In the above-described arrangement,
the information processing module is used for verifying the permission information of the service object based on the service object identifier of the network proxy service process;
and for triggering adjustment of the data caching process when the service object identifier is consistent with the corresponding permission information.
The embodiment of the invention also provides electronic equipment, which comprises:
a memory for storing executable instructions;
and a processor, configured to implement the foregoing data caching method when executing the executable instructions stored in the memory.
The embodiment of the application also provides a computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the foregoing data caching method.
The embodiment of the application has the following beneficial effects:
1) The embodiment of the application triggers a data caching process in response to a received data caching request; the network proxy server process obtains configuration information of an extension plug-in of the data caching process; in response to received hypertext transfer protocol request information, a caching policy for the data caching process is determined; a cache key is extracted from the caching policy; a hit result for the hypertext transfer protocol request information is determined according to the cache key; when the hit result indicates a miss, a response status code and response data are obtained from the back-end server; and the data of the hypertext transfer protocol request information is cached according to the caching policy, the response status code, and the response data. Thus, the data caching method provided by the application is independent of any programming language, and the caching policy formed from the configuration information of the extension plug-in can improve the cache hit rate, reduce the load on the back-end server, and speed up the access server's handling of user access requests, thereby increasing the throughput of the access server.
2) According to the embodiment of the invention, through the configured extension plug-in WasmPlugin, reading the corresponding configuration file and working with the memory server (Cache-Svr) makes it possible to generate the CacheKey from the HTTP URL parameters and key fields in the HTTP headers and HTTP body, and to decide, based on the status code and error code returned by the external service, whether to cache, how long to cache, and on which storage instance to cache. The configuration information of the extension plug-in can be flexibly adjusted and iteratively updated according to the requirements of different users, effectively improving the maintenance efficiency of the service APP and improving the reliability and extensibility of the data cache.
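For orientation, an Istio WasmPlugin resource of the kind referred to here is typically declared as follows. The `pluginConfig` keys and the plug-in image URL below are hypothetical examples, not the patent's actual configuration; only the outer `WasmPlugin` spec fields (`selector`, `url`, `phase`, `pluginConfig`) follow the Istio API.

```yaml
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: http-cache-plugin        # hypothetical name
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  url: oci://registry.example.com/http-cache-plugin:v1   # hypothetical image
  phase: UNSPECIFIED_PHASE
  pluginConfig:                  # free-form; the keys below are invented
    request_path: /api/user/info
    key_source: url_param
    key_fields: [uid, version]
    expire_seconds: 300
    cacheable_status_codes: [200]
```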
Drawings
FIG. 1 is a schematic diagram of a service environment of a data caching method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the composition of a data caching apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a caching mechanism in the related art;
FIG. 4 is a schematic flow chart of an alternative data caching method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an application scenario of a data caching method in an embodiment of the present invention;
FIG. 6 is a schematic diagram of configuration information according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a process for receiving HTTP request information according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a front-end display of a data caching method according to the present application;
FIG. 9 is a schematic flow chart of an alternative data caching method according to an embodiment of the present application;
FIG. 10 is a data flow diagram of a data caching method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a database system used in the data caching of the present application;
FIG. 12 is a schematic process diagram of a data caching method according to the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, "some embodiments" describes a subset of all possible embodiments; "some embodiments" may be the same subset or different subsets of all possible embodiments, and these subsets may be combined with one another where no conflict arises.
Before describing the embodiments of the present application in further detail, the terms involved in the embodiments of the present application are explained as follows.
1) In response to: indicates the condition or state on which a performed operation depends. When the condition or state on which it depends is satisfied, the one or more performed operations may occur in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which multiple such operations are performed.
2) Terminal: includes, but is not limited to, a common terminal and a dedicated terminal, where the common terminal maintains a long connection and/or a short connection with the sending channel, and the dedicated terminal maintains a long connection with the sending channel.
3) Client: the carrier in a terminal that implements a specific function; for example, a mobile client (APP) is the carrier in a mobile terminal for a specific function, such as making or displaying reports.
4) Applet (Mini Program): a program developed in a front-end-oriented language (e.g., JavaScript) that implements services in Hypertext Markup Language (HTML) pages; it is software downloaded by a client (e.g., a browser, or any client with an embedded browser core) via a network (e.g., the Internet) and interpreted and executed in the client's browser environment, saving an installation step on the client. For example, applets implementing services such as ticket purchase, report making, and data presentation may be downloaded and run in a social networking client.
5) Runtime environment: the engine that interprets and executes code; for an applet, for example, it may be the JavaScriptCore of the iOS platform or the X5 JS core of the Android platform.
6) Third-party application process: components implementing various functions, developed by the instant messaging client based on a third-party development framework (e.g., the Vue development framework), which can be used by the instant messaging client.
7) Container cluster management system Kubernetes (K8s): an open-source container orchestration platform that can combine multiple containers into one service, dynamically allocate the hosts on which containers run, and so on, providing great convenience for users of containers. With Kubernetes, applications can be deployed and scaled rapidly, new application functions can be integrated seamlessly, and the use of hardware resources can be optimized.
A node is a basic element of a container cluster. Depending on the workload, a node may be a virtual machine or a physical machine. Each node contains the basic components required to run a container group (Pod), including kubelet, kube-proxy, and the like.
The Master node is the cluster control node, which manages and controls the entire cluster; all K8s control commands are issued to it, and it is responsible for the actual execution process. The kube-apiserver (resource access component), kube-controller-manager (operation management controller component), and kube-scheduler (scheduling component) running on the Master node maintain the healthy operating state of the whole cluster by continuously communicating with the kubelet and kube-proxy on each worker node (Node). If the Master's services cannot reach a certain Node, that Node is marked unavailable and newly created Pods (container groups) are no longer scheduled to it. The Master itself, however, needs additional monitoring so that it does not become a single point of failure of the cluster, so Master services also require a high-availability deployment.
Nodes other than the Master are called Nodes or Worker nodes; the Nodes in a cluster can be viewed on the Master using the node view command (kubectl get nodes). Each Node is assigned some workload (Docker containers) by the Master node, and when a Node goes down, its workload is automatically transferred to other nodes by the Master node.
Pod (container group): the smallest/simplest basic unit that Kubernetes creates or deploys. A Pod represents a microservice process running on the cluster, and encapsulates one container (there may also be several) that provides the microservice application, storage resources, an independent network IP, and policy options governing how the containers run.
8) Istio: a service mesh that provides a transparent, language-independent way to flexibly and easily automate application network functions.
9) WasmPlugin: an extension plug-in for the data caching process, and the general term for the Istio-based Envoy plug-in extension mechanism. Wasm is an efficient, low-level binary instruction format: code can be written in languages other than JavaScript, such as C, C++, or Rust, and then compiled to WebAssembly, producing web applications that load and execute faster. Wasm is characterized by high execution efficiency, memory safety, absence of undefined behavior, and platform independence; after years of work by compiler and standardization teams, it now has a mature community. Wasm is also a key technology for implementing H.265, enabling video playback without a plug-in, although the decoding can only be processed by the central processing unit (CPU) and therefore occupies CPU hardware resources.
On the development side, a developer can develop plug-in code based on a plug-in development kit, i.e., the SDK for the language used to develop a WASM plug-in (WASM being the abbreviation of WebAssembly), so that developers can write code in a programming language they are familiar with and then run it in a virtual machine. A plug-in development kit can support developing gateway extension plug-ins in multiple languages, so that gateway extension plug-in development is no longer limited to a particular programming language.
10) Envoy: a Service Mesh-oriented, high-performance network proxy service process that runs alongside the application and abstracts the network by providing generic functionality in a platform-independent manner.
11) Redis: a cross-platform, non-relational, key-value remote dictionary service system and database that is fast and supports highly concurrent reads and writes.
12) Ingress/Egress: the ingress/egress gateway of an Istio cluster, which is itself also a Pod.
13) ServiceEntry: allows external services to be registered into the Istio service mesh.
14) VirtualService: specifies traffic behavior for one or more hostnames within the Istio service mesh, routing traffic to the appropriate destination.
Fig. 1 is a schematic diagram of a usage scenario of the data caching method provided in an embodiment of the present invention. Referring to Fig. 1, terminals (including a terminal 10-1 and a terminal 10-2) are provided with corresponding clients capable of executing different functions; these clients browse by obtaining different corresponding information from the corresponding server 200 through a network 300. The terminals are connected to the server 200 through the network 300, which may be a wide area network, a local area network, or a combination of the two, with data transmission implemented over wireless links. In the course of information interaction between the terminals and the network, the server 200 may configure corresponding service processes according to the different service requirements of different terminals, process service requests in the cloud network, expose the TCP or HTTP protocol to provide certain capabilities/functions externally, and, for different product lines, generate corresponding output parameters by accepting specified input parameters. For example, when a user opens a client installed on a terminal device and triggers a page link on the client's user interface (UI), the access server stores the data corresponding to the page link in the cache, regardless of whether the data is hot data. Hot data is data for which the number of user requests exceeds a preset threshold within a certain period; conversely, data for which the number of user requests does not exceed the preset threshold within that period is called cold data. When the number of pages selected by users is large enough, the access server's cache storage space will be exhausted, and data that has not yet reached its expiration time will be purged.
If the cold data stored in the cache occupies more storage space, it crowds out the storage space for hot data, so hot data that has not yet reached its expiration time is cleared. However, hot data that is cleared before its expiration time is likely to be requested again by other users. When a user requests that hot data, the access server must re-obtain it from the back-end server and store it in the local cache, which lowers the cache hit rate for hot data. In particular, pages pushed by the server (PUSH) usually have more visiting users; if cold data occupies more storage space, the cache hit rate drops further, and the access server must frequently fetch the data of PUSH pages from the back-end server. This increases the load on the back-end server, slows the access server's handling of user access requests, lowers the throughput of the access server, and prolongs user waiting time. The server therefore caches the data requested by terminals in the recent period, and when a terminal requests the same data again, the server obtains the data directly from the cache and returns it to the terminal.
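The eviction effect described above can be reproduced with a small capacity-bounded LRU cache: a burst of cold, one-off requests pushes a hot entry out before its expiration time, so the next request for it must go back to the back-end server. The sketch below is illustrative only, not part of the patent.

```python
from collections import OrderedDict

class LRUCache:
    """Toy least-recently-used cache with a fixed capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None               # miss: caller must hit the back end
        self._data.move_to_end(key)   # mark as recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        elif len(self._data) >= self.capacity:
            self._data.popitem(last=False)   # evict the least recently used
        self._data[key] = value
```

With capacity 2, inserting a hot entry followed by two cold entries evicts the hot entry even though it never expired, which is exactly the hit-rate degradation the text describes.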
In some embodiments of the present invention, the service processes generated in the server 200 may be written in software code environments of different programming languages, and the code objects may be different types of code entities. For example, in C-language software code, a code object may be a function; in JAVA-language software code, a code object may be a class; in the Objective-C language on the iOS side, it may be a piece of object code; and in C++-language software code, a code object may be a class or a function, so as to execute the request instructions from different terminals.
The structure of the data caching apparatus according to the embodiments of the present invention is described in detail below. The data caching apparatus may be implemented in various forms, such as a dedicated terminal with the processing functions of the data caching apparatus, or a server provided with those processing functions, for example the server 200 in Fig. 1. Fig. 2 is a schematic diagram of the composition of a data caching apparatus according to an embodiment of the present invention; it can be understood that Fig. 2 shows only an exemplary structure, not the entire structure, of the data caching apparatus, and part or all of the structure shown in Fig. 2 may be implemented as required.
The data caching apparatus provided by the embodiment of the present invention includes: at least one processor 201, a memory 202, a user interface 203, and at least one network interface 204. The various components in the data caching apparatus are coupled together via a bus system 205, which is used to implement connection and communication between these components. In addition to a data bus, the bus system 205 includes a power bus, a control bus, and a status signal bus; for clarity of illustration, however, the various buses are all labeled as the bus system 205 in Fig. 2.
The user interface 203 may include, among other things, a display, keyboard, mouse, trackball, click wheel, keys, buttons, touch pad, or touch screen, etc.
It will be appreciated that the memory 202 may be volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The memory 202 in embodiments of the present invention can store data to support the operation of a terminal (e.g., 10-1). Examples of such data include any computer program used to operate on the terminal (e.g., 10-1), such as an operating system and application programs. The operating system includes various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The applications may include various application programs.
In some embodiments, the data caching apparatus provided in the embodiments of the present invention may be implemented by combining software and hardware. As an example, the data caching apparatus may be a processor in the form of a hardware decoding processor programmed to execute the data caching method provided in the embodiments of the present invention. For example, a processor in the form of a hardware decoding processor may employ one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components.
As an example of implementation of the data caching apparatus provided by the embodiment of the present invention by combining software and hardware, the data caching apparatus provided by the embodiment of the present invention may be directly embodied as a combination of software modules executed by the processor 201, the software modules may be located in a storage medium, the storage medium is located in the memory 202, and the processor 201 reads executable instructions included in the software modules in the memory 202, and performs the data caching method provided by the embodiment of the present invention in combination with necessary hardware (including, for example, the processor 201 and other components connected to the bus 205).
By way of example, the processor 201 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor (for example, a microprocessor or any conventional processor), a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
As an example of a hardware implementation of the data caching apparatus provided by the embodiment of the present invention, the apparatus provided by the embodiment of the present invention may be implemented directly by the processor 201 in the form of a hardware decoding processor, for example, by one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components, to implement the data caching method provided by the embodiment of the present invention.
The memory 202 in embodiments of the present invention is used to store various types of data to support the operation of the data caching apparatus. Examples of such data include: any executable instructions for operation on the data caching apparatus; a program implementing the data caching method of an embodiment of the present invention may be included in the executable instructions.
In other embodiments, the data caching apparatus provided in the embodiments of the present invention may be implemented in software. Fig. 2 shows the data caching apparatus stored in the memory 202, which may be software in the form of a program, a plug-in, etc., and includes a series of modules. As an example of the program stored in the memory 202, the data caching apparatus may include the following software modules: an information transmission module 2081 and an information processing module 2082. When the software modules in the data caching apparatus are read by the processor 201 into the RAM and executed, the data caching method provided by the embodiment of the present invention is implemented, where the functions of each software module in the data caching apparatus include:
the information transmission module 2081 is configured to obtain, by using the network proxy server process, configuration information of an extension plug-in of the data caching process.
The information processing module 2082 is configured to determine a caching policy for the data caching process in response to the received hypertext transfer protocol request information.
The information processing module 2082 is configured to extract a cache key in the caching policy.
The information processing module 2082 is configured to determine a hit result of the hypertext transfer protocol request information based on the cache key.
The information processing module 2082 is configured to obtain the response status code and the response data of the back-end server when it is determined, according to the hit result, that the hypertext transfer protocol request information is missed.
The information processing module 2082 is configured to cache the data of the hypertext transfer protocol request information according to the caching policy, the response status code, and the response data.
In connection with the electronic device shown in fig. 2, in one aspect the application also provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the methods provided in the various alternative implementations of the data caching method described above.
The data caching method provided by the embodiment of the present application is described with reference to the data caching apparatus shown in fig. 2. Referring to fig. 3, fig. 3 is a schematic diagram of caching mechanisms in the related art of the present application, where 1) nginx (a high-performance HTTP and reverse proxy web server) front-end local caching mechanism: a general-purpose distributed cache scheme realized based on the nginx cache module. The method mainly adds a reverse proxy server nginx on the upper layer of the service workload, uses the basic cache module of nginx to configure language-independent caching policies on attributes such as domain names, URIs, request parameters, and HTTP status codes, and caches the back-end data in the local file system, so that whether a request hits the cache can be quickly determined. The disadvantages of this approach are: a. nginx is multi-node, and there are problems of inconsistent cache results and cache expiration times among the nodes, so that consistency is relatively difficult to maintain; b. when nodes are added or deleted, a transient cache penetration phenomenon occurs, so that the traffic to the back-end server increases excessively; c. reading cached data is a file I/O operation, which has a certain performance bottleneck. 2) General caching mechanism of a service public library: the cache-related module is extracted from the service code and split into a service logic layer and a basic framework layer, thereby realizing a general caching mechanism for external data. The disadvantages of this approach are: the implementation is relatively complex, there are language-compatibility problems, the cache component is difficult to upgrade independently of the whole framework, and the later maintenance cost is increased.
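As a minimal illustration of mechanism 1) above — a sketch, not nginx's actual implementation — a reverse proxy node can hash the language-independent request attributes (domain name, URI, request parameters) into a file path in its local file system and serve the cached body on a fresh hit. Because each node hashes into its own local file system, two nodes can hold inconsistent copies with different expiry times, which is drawback a; the lookup itself is a file I/O operation, which is drawback c. All names and the directory layout here are hypothetical.

```python
import hashlib
import os
import tempfile
import time

# Hypothetical per-node cache directory; each proxy node has its own.
CACHE_DIR = os.path.join(tempfile.gettempdir(), "proxy_cache_demo")

def cache_path(domain: str, uri: str, params: str) -> str:
    """Hash the language-independent request attributes into a local file path."""
    digest = hashlib.md5(f"{domain}{uri}{params}".encode()).hexdigest()
    return os.path.join(CACHE_DIR, digest)

def lookup(domain: str, uri: str, params: str, max_age: int = 60):
    """Return cached bytes on a fresh hit, else None (a miss or an expired entry)."""
    path = cache_path(domain, uri, params)
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < max_age:
        with open(path, "rb") as f:          # reading the cache is file I/O
            return f.read()
    return None

def store(domain: str, uri: str, params: str, body: bytes) -> None:
    """Write the back-end response body into this node's local file cache."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(cache_path(domain, uri, params), "wb") as f:
        f.write(body)
```

A second node running the same code but with its own `CACHE_DIR` would cache and expire independently, which is why keeping the nodes consistent is difficult.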
In order to overcome the above-mentioned drawbacks, referring to fig. 4, an embodiment of the present invention provides a data caching method that can strip the cache module out of the service code based on the configuration information of the service process and the target cloud server resources, and dynamically extend the functions of the network proxy service process to which the APP belongs in a plug-in manner, so that the data cache module is pluggable and can be upgraded independently of the service APP, which can effectively improve the maintenance efficiency of the service APP and improve the reliability and extensibility of the data cache. The embodiment of the invention can be implemented in combination with cloud technology, where cloud technology refers to a hosting technology that integrates hardware, software, network, and other resources in a wide area network or a local area network to realize the calculation, storage, processing, and sharing of data; it can also be understood as the general term for network technology, information technology, integration technology, management platform technology, application technology, and the like applied on the basis of the cloud computing business model. Background services of technical network systems, such as video websites, picture websites, and other portal websites, require a large amount of computing and storage resources, so cloud technology needs to be supported by cloud computing.
It should be noted that cloud computing is a computing mode that distributes computing tasks over a resource pool formed by a large number of computers, so that various application systems can acquire computing power, storage space, and information services as required. The network that provides the resources is referred to as the "cloud". From the user's perspective, the resources in the cloud can be expanded infinitely, acquired at any time, used as needed, expanded at any time, and paid for according to use. As a basic capability provider of cloud computing, a cloud computing resource pool platform, referred to as a cloud platform for short and generally called infrastructure as a service (IaaS), deploys multiple types of virtual resources in the resource pool for external clients to select and use. The cloud computing resource pool mainly comprises: computing devices (which may be virtualized machines, including operating systems), storage devices, and network devices. When a user uses the cloud server to store data or deploys different application processes, the running parameters of the server cluster hard disks are monitored, so that possible hard disk faults can be discovered in time and the loss of user data caused by a hard disk with a failure warning can be avoided.
Cloud storage is a concept that extends and develops from the concept of cloud computing. A distributed cloud remote dictionary service system (hereinafter referred to as the remote dictionary service system) refers to a system that aggregates a large number of different types of storage devices (also referred to as storage nodes) in a network through application software or application interfaces, by means of functions such as cluster applications, grid technology, and distributed storage file systems, to jointly provide data storage and service access functions to the outside. At present, the storage method of the remote dictionary service system is as follows: when logical volumes are created, each logical volume is allocated physical storage space, which may be composed of the disks of one or several storage devices. The client stores data on a certain logical volume, that is, the data is stored on a file system; the file system divides the data into a plurality of parts, each part is an object, and an object contains not only the data but also additional information such as a data identification (ID). The file system writes each object into the physical storage space of the logical volume and records the storage position information of each object, so that when the client requests access to the data, the file system can allow the client to access the data according to the storage position information of each object.
The process of allocating physical storage space for the logical volumes by the remote dictionary service system specifically comprises: dividing the physical storage space into stripes in advance according to the set of capacity measures for the objects stored on the logical volume (these measures tend to have a large margin relative to the capacity of the objects actually to be stored) and the redundant array of independent disks (RAID) grouping; a logical volume can be understood as a stripe, whereby physical storage space is allocated to the logical volume.
When the method is applied to cloud products, the front end of the cloud product may be a Web UI component, which is used to receive Spark-related parameters filled in by users and to generate job data according to these parameters. The Cluster Manager may be an open-source cluster resource scheduling platform such as YARN, Mesos, or Kubernetes; Spark itself already supports these open-source platforms, that is, the protocols between Spark and the Cluster Manager components are compatible. The Driver is the job driver, the Work Node is a work node, the Executor is the task execution component, and the task is the smallest execution unit. Further, the structured data package (Spark SQL) is a package used by Spark to manipulate structured data; through it, the data can be queried using the SQL language, and it supports a variety of data sources such as data warehouse tool (Hive) tables. The streaming computing component is a component provided by Spark for streaming computation over real-time data and provides an application programming interface (API) for operating on data streams. When a software program product of the data caching method provided by the cloud system is deployed, the extension plug-in of the data caching process can be used in combination with the service app or in the egress gateway of the istio cluster, and when the configuration information of the extension plug-in is changed, it can be flexibly adjusted and iteratively upgraded according to the requirements of different users.
Fig. 4 is an optional flowchart of a data caching method according to an embodiment of the present invention, and it can be appreciated that the steps shown in fig. 4 may be performed by various electronic devices running the data caching apparatus, for example, a server with a data caching function or a cloud server group. The server with the data caching function is the server 200 shown in fig. 1, so as to execute corresponding software modules in the data caching device shown in fig. 2. The following is a description of the steps shown in fig. 4.
Step 401: the data caching device responds to the received data caching request and triggers a data caching process.
In some embodiments of the present invention, the data cache request may originate from different APPs. For example, in a target-object attribute-data-intensive service system such as a user credit system, when the target-object attribute data tag model is loaded and calculated, the external data sources required by the tag model are queried concurrently, and the query results of the external data sources provide data support for tag calculation. The external data sources involved in this link are billed by request volume, and their data is updated on a daily basis; therefore frequent data queries not only increase the pressure on the back-end server, but also reduce the speed at which the access server processes user access requests, thereby reducing the throughput of the access server and prolonging the user's waiting time.
The server system corresponding to the plurality of APPs issuing data cache requests may be any type of cache device, for example, an Alternative PHP Cache (APC) accelerator, a Redis (REmote DIctionary Server) storage system, a Memcache system, or the like. APC is an open-source caching tool effective for the hypertext preprocessor (PHP) and is used to cache PHP code and user data. PHP is a general-purpose open-source scripting language mainly applicable to the field of Web development. PHP can execute dynamic Web pages quickly and is therefore widely used for the development of client programs, such as Web browsers and various types of APPs. Accordingly, APC is also widely used as a cache device by access servers providing client services.
When the plurality of APPs corresponding to data cache requests in the present application access a server cluster, taking K8S as an example, a Kubernetes cluster generally includes a master node (Master) and a plurality of computing nodes (Nodes) communicatively connected with the master node, where the master node is used to manage and control the computing nodes. The computing nodes serve as workload nodes and include applications deployed directly on the nodes as well as a plurality of container groups (Pods); each container group packages one or more containers (Containers) for carrying applications. The Pod is the basic operating unit of Kubernetes and the smallest unit that can be created, scheduled, and managed. The type of the working copies is the resource type (Deployment type), and tasks of this type can be deployed. Deployment integrates functions such as online deployment, rolling upgrade, creating replicas, suspending an online task, resuming an online task, and rolling back to a previous (successful/stable) version of a Deployment; Deployment can realize unattended rollout to a certain extent, greatly reducing the communication complexity and operational risks in the rollout process. For working copies of the Deployment type, the Deployment object list associated with the Deployment type can first be determined, and then the associated Pod list can be found from the cache through the replica controller, where Deployment is one type of replica controller in Kubernetes whose main function is to control the Pods it manages so that the number of Pod replicas is always maintained at a preset number.
Referring to fig. 5, fig. 5 is a schematic application scenario diagram of the data caching method in an embodiment of the present application, in which the request results for a single App, such as App1 and App2 shown in fig. 5, are cached directly. In this scenario, the extension plug-in can be dynamically attached to the network proxy service process of the corresponding app to carry out the data caching method provided by the application, with the caching operation completed by Cache-Wasm.
Meanwhile, some external request results of apps that depend more heavily on external data are cached, such as App3 shown in fig. 5. In this scenario, the app relies on multiple external data sources but only wants to cache the data of a certain external data source. At this time, the external service needs to be registered into the istio cluster through the istio component ServiceEntry, a cache egress gateway is registered, Cache-Wasm is injected into the egress gateway, and the istio component VirtualService is used to bind to the gateway; the outbound traffic sent by the App that matches the designated interface and port is directed to the cache egress gateway, the traffic is forwarded out of the istio cluster, the external data is requested, and the data caching method provided by the application is executed, so that the queried data is stored in the container group (Pod).
Step 402: the network proxy server process of the data caching device acquires configuration information of an expansion plug-in of the data caching process.
In some embodiments of the present invention, referring to fig. 6, fig. 6 is a schematic diagram of configuration information in an embodiment of the present invention, where the network proxy service process obtains the configuration information of the extension plug-in of the data caching process, which may be implemented as follows: the network proxy server process acquires the extension plug-in of the data caching process; the configuration file of the extension plug-in is parsed to obtain the request path of the hypertext transfer protocol, the generation mode of the cache key, the cache expiration time, the cache mode, and the encryption mode. As shown in fig. 6, when the Pod is started, the network proxy service process is pulled up, and the extension plug-in can be managed in a corresponding repository; at this time the plug-in is pulled and loaded into the network proxy service process, the configuration file is read, and the caching policy configured by the user is loaded. The caching policy supported by the configuration information shown in fig. 6 includes:
1. cache_url_path: sets the URL to be cached; if the plug-in is used on an unsupported app, the caching policy does not take effect, and the request is forwarded directly to the back-end server without caching.
2. cache_key_prefix: the prefix required for generating the cache key, which avoids key collisions.
3. cache_mode: 3 modes are supported, namely generating the cache key based on URL parameters, HttpHeaders, or HttpBody.
4. cache_param_fields: in URL-parameter mode, the corresponding fields are extracted from the Http URL parameters to generate the cache key.
5. cache_body_fields: in HttpBody mode, the corresponding fields are extracted from the HttpBody parameters to generate the cache key.
6. cache_headers_fields: in HttpHeaders mode, the corresponding fields are extracted from the HttpHeaders parameters to generate the cache key.
7. cache_route: the redis instance to which the data needs to be saved.
8. cache_expire: the cache expiration time of the data, in seconds.
9. cache_ret_code_field, cache_ret_code, cache_http_status: used to configure the conditions under which caching is performed, in terms of the response error codes and http status codes of the back-end server External-Svr.
10. cache_key_encode: the key encoding mode; both SHA256 and MD5 are supported.
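The configuration fields listed above can be sketched as a small policy loader. This is an illustrative reconstruction under stated assumptions, not the patent's actual implementation: the field names mirror the configuration keys above, while the types, defaults, and the `load_policy` helper are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class CachePolicy:
    # Field names mirror the configuration keys listed above;
    # types and defaults are assumptions for illustration only.
    cache_url_path: str = ""
    cache_key_prefix: str = ""
    cache_mode: str = "url_params"      # "url_params" | "http_headers" | "http_body"
    cache_param_fields: list = field(default_factory=list)
    cache_body_fields: list = field(default_factory=list)
    cache_headers_fields: list = field(default_factory=list)
    cache_route: str = ""               # target redis instance
    cache_expire: int = 0               # cache expiration time, in seconds
    cache_http_status: list = field(default_factory=lambda: [200])
    cache_key_encode: str = "SHA256"    # "SHA256" | "MD5"

def load_policy(raw: dict) -> CachePolicy:
    """Build a CachePolicy from a parsed plug-in configuration file,
    ignoring unknown keys so older plug-in versions stay compatible."""
    known = CachePolicy.__dataclass_fields__
    return CachePolicy(**{k: v for k, v in raw.items() if k in known})

policy = load_policy({"cache_url_path": "/api/v1/query",
                      "cache_mode": "url_params",
                      "cache_param_fields": ["uid", "date"],
                      "cache_expire": 86400,
                      "cache_key_encode": "MD5"})
```

Keeping the policy in a standalone structure like this is what allows the plug-in configuration to be changed without touching the service APP's own code.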
In some embodiments of the present application, because different data caching scenarios or different users have different requirements for data caching, the configuration information of the extension plug-in in the present application can be flexibly adjusted and iteratively upgraded according to the requirements of different users. Because the extension plug-in is used in combination with the service APP or in the egress gateway of the istio cluster, such adjustments and iterative upgrades do not require synchronously upgrading the service APP, thereby improving the maintenance efficiency of the service APP.
In some embodiments of the present application, when the extension plug-in is flexibly adjusted and iteratively upgraded according to the needs of different users, a preset rule template may be used to upgrade the extension plug-in, which further reduces the difficulty of upgrading and iterating the extension plug-in. The template fields in the rule template include a name field, a preheating time, a data query statement, a cache key prefix, a data expiration time, a cache server address, a cache server user name, and a cache server password. The field data corresponding to the name field indicates the template name of the rule template; the field data corresponding to the preheating time indicates the time at which the data is sent to the cache server for caching; the field data corresponding to the data query statement is used to query the service data associated with the rule template in the database; the field data corresponding to the cache key is a key value used to indicate the storage position of the service data in the database; the field data corresponding to the cache key prefix is the name of the cache key; the field data corresponding to the data expiration time indicates how long the service data may be stored in the cache server, where an empty data expiration time means that the service data never expires in the cache server, that is, the service data can be stored in the cache server permanently; the field data corresponding to the cache server address indicates the address of the cache server storing the service data; and the field data corresponding to the cache server user name and the cache server password are used to connect to the cache server at that address.
The above template fields are the general template fields of a rule template; the name field is required to be unique, and the template fields can be custom-extended according to the needs of development users, which is not limited by the embodiments of the present application.
Step 403: the data caching device responds to the received hypertext transfer protocol request information and determines a caching strategy of the data caching process according to the configuration information.
Referring to fig. 7, fig. 7 is a schematic diagram illustrating the processing of received hypertext transfer protocol request information according to an embodiment of the present application. In some embodiments of the present application, determining the caching policy of the data caching process may be implemented as follows: the received hypertext transfer protocol request information is parsed to obtain the message header information, message body information, and request parameters of the hypertext transfer protocol request information; and the caching policy of the data caching process is generated according to the message header information, the message body information, and the request parameters. The user initiates hypertext transfer protocol request information to the plurality of APP service processes deployed in the istio cluster, expecting to obtain a service response result. The incoming traffic of the app is hijacked by the network proxy server process and enters the cache processing flow of the extension plug-in deployed at the gateway of the istio cluster. In fig. 7, the network proxy server process intercepts the incoming traffic of the app and forwards it to the app after it passes through a filter chain, where multiple filters can be developed and written in one Web application; in combination, these filters are called a filter chain. The Web server decides which filter to call first according to the registration order of the filters in the web.xml file (the mapping configuration order), calls the following filters in turn, and calls the target resource if there is no next filter. The network proxy service process filter and the memory server in fig. 7 form a general cache component based on the istio cloud-native architecture, which can execute the data caching method provided by the application.
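The parsing step above — splitting hypertext transfer protocol request information into message header information, message body information, and request parameters — can be sketched as follows. The raw-request wire format and the helper name are assumptions for illustration; a real plug-in would receive these pieces already separated by the proxy.

```python
from urllib.parse import urlparse, parse_qs

def parse_http_request(raw: str):
    """Split a raw HTTP/1.1 request into (header info, body info, request parameters)."""
    head, _, body = raw.partition("\r\n\r\n")          # headers end at the blank line
    lines = head.split("\r\n")
    method, target, _version = lines[0].split(" ", 2)  # the start line
    headers = dict(line.split(": ", 1) for line in lines[1:] if ": " in line)
    # Request parameters come from the query string of the request target.
    params = {k: v[0] for k, v in parse_qs(urlparse(target).query).items()}
    return headers, body, params

raw = ("GET /api/v1/query?uid=42&date=20230101 HTTP/1.1\r\n"
       "Host: svc.example\r\n"
       "X-Trace-Id: abc\r\n"
       "\r\n")
headers, body, params = parse_http_request(raw)
# params -> {"uid": "42", "date": "20230101"}
```

Each of the three pieces returned here feeds one of the three cache_mode options described earlier (URL parameters, HttpHeaders, HttpBody).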
Step 404: the data caching device extracts a cache key in a cache policy.
The process of extracting the cache key in step 404 may be referred to as the onHttpRequestHeaders (hypertext transfer protocol request header information processing) / onHttpRequestBody (hypertext transfer protocol request body information processing) stage. In this stage, there are three cases for cache keys: 1) the corresponding fields are extracted from the uniform resource locator parameters (URL parameters) to generate the cache key; 2) the corresponding fields are extracted from the parameters of the message body information (HttpBody) to generate the cache key; 3) the corresponding fields are extracted from the parameters of the message header information (HttpHeaders) to generate the cache key. Specifically, the extension plug-in first parses the user's cache configuration in the plug-in and extracts the corresponding fields from the hypertext transfer protocol request information to form the cache key expected by the user; the cache server then obtains the data from the local Alternative PHP Cache (APC) through the apc_fetch(key_data) function according to the cache key of the data. If the apc_fetch(key_data) function returns a value of 1, the data acquisition is successful; if it returns a value of 0, the data acquisition has failed.
In step 404, the header information is a core part of sending network requests and receiving responses over the hypertext transfer protocol, and both the hypertext transfer protocol request information and the response contain header information; a hypertext transfer protocol message consists of 3 parts: the start line, the header lines, and the entity body.
In addition, the hypertext transfer protocol request body information processing stage can extract the message body of the hypertext transfer protocol request information, corresponding respectively to the three modes of URL-parameter caching, HttpHeaders-parameter caching, and HttpBody-parameter caching. After the cache key is generated, the cache lookup (CacheLookUp) stage is entered, and the interface provided by the memory server is called to confirm whether the data requested this time hits the cache.
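The three key-generation modes and the two key encoding modes (SHA256 and MD5) described above can be sketched like this. The function shape, the `prefix` separator, and the field-joining format are assumptions for illustration, not the patent's actual implementation.

```python
import hashlib

def make_cache_key(prefix, mode, fields, url_params=None, headers=None,
                   body=None, encode="SHA256"):
    """Extract the configured fields from one of the three sources and hash them.

    mode selects the source — "url_params", "http_headers", or "http_body" —
    matching the three cache_mode options in the plug-in configuration, and
    encode selects between the two supported encodings, SHA256 and MD5.
    """
    source = {"url_params": url_params,
              "http_headers": headers,
              "http_body": body}[mode]
    # Join the configured fields deterministically so the same request
    # always produces the same key (the '&' separator is an assumption).
    material = "&".join(f"{f}={source.get(f, '')}" for f in fields)
    h = hashlib.sha256 if encode == "SHA256" else hashlib.md5
    # The configured prefix avoids key collisions between different services.
    return f"{prefix}:{h(material.encode()).hexdigest()}"

key = make_cache_key("usercredit", "url_params", ["uid", "date"],
                     url_params={"uid": "42", "date": "20230101"}, encode="MD5")
```

Because the key is a deterministic hash of the configured fields, the subsequent cache lookup stage can decide hit or miss with a single key comparison.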
Step 405: the data caching device determines the hit result of the hypertext transfer protocol request information according to the cache key.
In some embodiments of the present invention, determining the hit of the hypertext transfer protocol request message may be accomplished by:
receiving, through the memory server (Cache-Svr), the data interaction result sent by the remote dictionary service system (redis); the memory server sends the data interaction result to the extension plug-in; the extension plug-in parses the data interaction result and determines the hit result of the hypertext transfer protocol request information; when it is determined according to the hit result that the hypertext transfer protocol request information has missed, the hypertext transfer protocol request information is sent to the interface of the back-end server; and when the hypertext transfer protocol request information hits, the hit result of the hypertext transfer protocol request information is sent to the initiator of the hypertext transfer protocol request information. In step 405, the process of determining the hit of the hypertext transfer protocol request information may be referred to as the hypertext transfer protocol message reply (HttpCall response callback) stage. At this stage, the memory server completes one round of data interaction with the remote dictionary service system to confirm whether the hypertext transfer protocol request hits the existing cache, and then notifies the extension plug-in of the result. The extension plug-in triggers the callback, receives the response result of the memory server, and parses whether the cache is hit; if hit, step 406 is executed, and if missed, step 407 is executed.
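A minimal sketch of the lookup flow spanning steps 405–407: a stand-in class replaces the Cache-Svr memory server (the real one talks to the remote dictionary service system), and all names here are hypothetical.

```python
class MemoryServer:
    """Stand-in for the Cache-Svr memory server fronting the remote dictionary service."""
    def __init__(self):
        self._store = {}                     # cache key -> cached response data

    def lookup(self, cache_key):
        return self._store.get(cache_key)    # None signals a miss

    def save(self, cache_key, data):
        self._store[cache_key] = data

def handle_request(cache_key, mem, backend):
    """Return ("hit", cached data) or ("miss", (status code, back-end data))."""
    cached = mem.lookup(cache_key)
    if cached is not None:
        return "hit", cached                 # step 406: answer directly from the cache
    return "miss", backend(cache_key)        # step 407: fall through to the back-end server

mem = MemoryServer()
mem.save("k1", {"ret": 0, "data": "cached-result"})
outcome, payload = handle_request("k1", mem, lambda k: (200, {"ret": 0}))
# outcome is "hit"; an unknown key would instead yield ("miss", (200, {"ret": 0}))
```

The same shape holds in the real flow: only a miss generates traffic to the back-end server, which is what reduces the pressure on billed external data sources.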
Step 406: and the data caching device sends cache information corresponding to the hypertext transfer protocol request information when determining that the hypertext transfer protocol request information hits according to the hit result.
Step 407: and the data caching device acquires the response state code and the response data of the back-end server when determining that the hypertext transfer protocol request information is missed according to the hit result.
When the hypertext transfer protocol request information is missed, it is indicated that the data to be queried by the user is not stored in the cache server, and at this time, the data to be queried by the user needs to be cached.
Step 408: and the data caching device caches the data of the hypertext transfer protocol request information according to the caching strategy, the response state code and the response data.
When the response status code and the response data meet the caching condition, the memory server and the corresponding caching process are determined through the caching policy; the data of the hypertext transfer protocol request information is transmitted to the cache saving process through the memory server; a cache expiration time is configured for the cache saving process; and the data of the hypertext transfer protocol request information is cached through the cache saving process. Specifically, the extension plug-in can obtain the response status code and response data of the back-end server and, in combination with the caching policy configured by the user, determine whether to cache the request result. For example, the user may specify that the data of the hypertext transfer protocol request information is cached only when the back-end response status code is 200 and the error code field ret in the response data is 0. When the set caching condition is met, the data of the current hypertext transfer protocol request information is cached through the processing stage (SaveCache) of the cache saving process: the extension plug-in completes the data interaction with the remote dictionary service system by calling the interface provided by the memory server, the memory server stores the request data to the corresponding remote dictionary service system instance, and the cache expiration time expected by the user is set.
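The caching condition in the example above — cache only when the back-end status code is 200 and the error code field ret in the response data is 0 — together with the configured expiration time can be sketched as follows; the class and function names are hypothetical.

```python
import time

def should_cache(status_code, response_data, allowed_status=(200,), ok_ret=0):
    """Apply the user-configured caching condition from the caching policy:
    the status code must be allowed AND the ret error code field must be ok."""
    return status_code in allowed_status and response_data.get("ret") == ok_ret

class ExpiringCache:
    """Stand-in for a remote dictionary service instance with a per-key TTL."""
    def __init__(self):
        self._store = {}  # key -> (expires_at, data)

    def save(self, key, data, expire_seconds):
        # cache_expire from the policy becomes an absolute expiry timestamp.
        self._store[key] = (time.time() + expire_seconds, data)

    def lookup(self, key):
        entry = self._store.get(key)
        if entry is None or time.time() >= entry[0]:
            return None   # missing, or present but expired
        return entry[1]

cache = ExpiringCache()
if should_cache(200, {"ret": 0, "data": "payload"}):
    cache.save("k1", "payload", expire_seconds=86400)
```

Checking both the HTTP status code and the application-level ret field prevents transient back-end errors (which may still return HTTP 200 with a non-zero ret) from being cached and then served to later requests until expiry.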
After the execution of step 408 is completed, the back-end server returns the cached data to the user, and if the user performs the data query again, the user can directly hit the cached data, thereby saving the time consumption of the data query.
In some embodiments of the present invention, Istio provides an identity for each service process. The Istio agent running alongside each network proxy service process performs automatic key and certificate rotation together with Istio, so that the permissions of a service object (an APP user) can be adjusted: the permission information of the service object is verified based on the service object identifier of the network proxy service process, and when the identifier of the service object is consistent with the corresponding permission information, adjustment of the data caching process is triggered, ensuring the security of data caching. Meanwhile, in response to receiving hypertext transfer protocol request information that includes at least one URL, each of the URLs is looked up in a preset blacklist; if it is present, access to that URL is forbidden. Some resources may be harmful, and a blacklist mechanism can forbid access at the resource or domain-name level, thereby protecting the security of the terminal.
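A minimal sketch of the blacklist check described above follows. The blacklist entries and the host-level matching granularity are assumptions for illustration — the patent leaves the blacklist contents and the matching dimension (resource or domain name) to configuration:

```python
from urllib.parse import urlsplit

# Hypothetical blacklist entries, keyed by host name.
BLACKLIST = {"bad.example.com", "malware.example.org"}

def is_blocked(url, blacklist):
    """Return True when the URL's host appears in the blacklist,
    in which case access to the resource is forbidden."""
    host = urlsplit(url).hostname or ""
    return host in blacklist

print(is_blocked("http://bad.example.com/a.js", BLACKLIST))   # True
print(is_blocked("https://good.example.net/page", BLACKLIST)) # False
```

A real deployment could also match on full resource paths rather than hosts, depending on the configured blacklist dimension.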
The data caching method provided by the application is further described below in connection with different implementation scenarios. Referring to fig. 8, fig. 8 is a schematic diagram of a front-end display of a data caching method according to the present application. The terminals (e.g., terminal 10-1 and terminal 10-2 in fig. 1) are provided with clients capable of running financial payment software, such as clients or plug-ins for conducting financial activities with virtual or physical resources. Through the corresponding client, a user can apply for loan services from a financial institution. The method can be applied to cross-industry cooperation scenarios such as financial risk control; for example, the APPs of the business terminals may be the management APP of bank A and the management APP of bank B, or the applets of bank A and bank B.
The bank A server and the bank B server may both be servers under a micro-service architecture (TAF, Total Application Framework) that supports the Istio service. The TARS framework is a high-performance RPC development framework based on name service and using the TARS protocol; together with its integrated service management platform, it helps individuals or enterprises quickly build stable and reliable distributed applications in a micro-service manner, and supports the container cluster management system used by the data caching method. TARS is an open-source project that summarizes the practical achievements of TAF, and offers ease of use, high performance, and service management capabilities. The design idea of the protocol layer at the bottom of the TARS framework is to unify the protocol for service network communication and to develop, in IDL form, a unified protocol that supports multiple platforms, is extensible, and can generate protocol code automatically. During development, developers only need to pay attention to the content of the communication protocol fields, not to their implementation details, which largely avoids problems such as whether the protocol is usable cross-platform and whether compatibility and extension will be required while developing the service. After acquiring the data using TAF, the response structure is disassembled into a common js object. Through the extension plug-in provided by the application, the service layer can cache the user's information in the cache server for timely query, and cache the data of missed hypertext transfer protocol request information in time, ensuring the fastest data query speed.
As shown in fig. 1, the data caching method provided by the embodiment of the present application may be implemented by a corresponding cloud device, for example: servers of different business parties (e.g., bank A server 10-1 and bank B server 10-2) are connected directly to the service architecture operator server 200 in the cloud. It should be noted that the service architecture operator server 200 may be a physical device or a virtualized device in a cloud network. The data cache requests in table 1 may originate from different bank management APPs when a bank queries a user's deposit. For example, in a target-object-attribute data-intensive service system such as a user credit system, when a link involves querying target object attribute data, the deposit information and credit scores of multiple target objects are queried concurrently, and the query results from an external data source provide data support for label calculation. The external data source involved in this link is billed by request volume, and its data is refreshed on a daily basis, so data caching is needed to save the time spent on frequent data queries.
TABLE 1
Fig. 9 is a schematic diagram of processing steps of a data caching method according to an embodiment of the present application, which specifically includes the following steps:
Step 901: the network proxy server process acquires configuration information of an expansion plug-in of a data cache process of the bank management APP.
Fig. 10 is a data flow diagram of a data caching method according to an embodiment of the present application. The filter and the memory server form an Istio-based cloud-native universal cache component. The data caching method provided by the application can be executed and deployed in a bank server: a user of the bank APP initiates hypertext transfer protocol request information to a bank APP service process deployed in an Istio cluster and expects to obtain a service response result. The incoming traffic of the APP is hijacked by the network proxy service process and enters the cache processing flow of the extension plug-in deployed at the gateway of the Istio cluster. In the data flow shown in fig. 10, the Istio-based cloud-native universal caching mechanism is composed of the extension plug-in injected into the network proxy service process and the memory server, and the underlying cache component uses the remote dictionary service system, a non-relational key-value storage system. The extension plug-in manages the whole general flow of the external dependency cache, and the memory server is responsible for data interaction with the remote dictionary service system, including acquiring and setting key-value pairs, the expiration time, and the interaction of storage instance routing data.
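The key-value interaction the memory server performs against the remote dictionary service system (get, set, expiration time) can be sketched as below. To stay self-contained, the sketch keeps data in a local dictionary instead of calling a real Redis client, so the class and method names are illustrative only:

```python
import time

class MemoryServer:
    """Illustrative stand-in for the memory server that fronts the
    remote dictionary service system; a real deployment would route
    each key to a storage instance and call a Redis-style client."""

    def __init__(self):
        self._store = {}  # key -> (value, absolute expiry time)

    def set(self, key, value, ttl_seconds):
        """Store a value with the user-configured cache expiration time."""
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        """Return the cached value, or None on a miss or expiry."""
        item = self._store.get(key)
        if item is None:
            return None
        value, expire_at = item
        if time.monotonic() >= expire_at:
            del self._store[key]  # expired entry behaves like a miss
            return None
        return value

srv = MemoryServer()
srv.set("user:42", b'{"deposit": 100}', ttl_seconds=60)
print(srv.get("user:42"))  # b'{"deposit": 100}'
```

In the patented flow, the extension plug-in would call an interface of this component rather than touching the dictionary service directly.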
Step 902: and receiving the hypertext transfer protocol request information sent by the bank management APP, and determining a caching strategy of the data caching process according to the configuration information.
Step 903: and extracting the cache keywords in the cache policy, and determining the hit result of the hypertext transfer protocol request information according to the cache keywords.
Step 904: when the hypertext transfer protocol request information is determined to be missed according to the hit result, a response state code and response data of the back-end server are acquired and stored in table 1.
When the hypertext transfer protocol request information hits the cached data in table 1, the user ID, deposit amount data and credit score in table 1 can be sent directly to the user of the bank management APP.
Step 905: and caching the data of the hypertext transfer protocol request information according to the caching strategy, the response state code and the response data.
When data caching is performed, the caching position of the data can be determined according to the caching position identification. The corresponding target caching position is obtained from among a plurality of caching positions (HTTP cache 1, HTTP cache 2, … HTTP cache n), and it is judged whether the cache space of the target caching position is full. If it is full, the local HTTP cache is screened according to a preset caching policy and the screened data is then cached; if not, the data of the hypertext transfer protocol request information is cached directly, forming a new table 2.
TABLE 2
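The full-cache screening step above can be illustrated with a least-recently-used policy. LRU is only one possible screening policy — the patent leaves the preset caching policy configurable — and the class below is a hypothetical sketch:

```python
from collections import OrderedDict

class BoundedHttpCache:
    """One cache position with a capacity limit: when the space is
    full, the least recently used entry is screened out before the
    new response data is stored (LRU chosen here for illustration)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()  # insertion order tracks recency

    def put(self, key, value):
        if key in self._entries:
            self._entries.move_to_end(key)
        elif len(self._entries) >= self.capacity:
            self._entries.popitem(last=False)  # evict least recent entry
        self._entries[key] = value

    def get(self, key):
        if key not in self._entries:
            return None
        self._entries.move_to_end(key)  # mark as recently used
        return self._entries[key]

cache = BoundedHttpCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)        # cache full: "a" is screened out
print(cache.get("a"))    # None
print(cache.get("c"))    # 3
```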
In this way, through steps 901 to 905, the data caching process is triggered to cache the data, so that the links that load and calculate the user's bank account information can perform calculation directly on the cached data, saving the time consumed by frequent data queries.
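The overall miss-then-fill flow of steps 901 to 905 can be sketched as follows; the function names and the simplified caching condition (status 200 only) are illustrative assumptions:

```python
class SimpleCache:
    """Dict-backed cache stand-in (TTL ignored in this sketch)."""
    def __init__(self):
        self._d = {}
    def get(self, key):
        return self._d.get(key)
    def set(self, key, value, ttl):
        self._d[key] = value

def handle_request(cache, backend, key, ttl=60):
    """Hit -> return cached data; miss -> query the back-end server
    and cache the response when the caching condition is met."""
    cached = cache.get(key)
    if cached is not None:
        return cached, "hit"
    status, body = backend(key)      # forward to the back-end server
    if status == 200:                # simplified caching condition
        cache.set(key, body, ttl)
    return body, "miss"

cache = SimpleCache()
backend = lambda k: (200, "deposit-data-for-" + k)
_, outcome1 = handle_request(cache, backend, "user:1")
_, outcome2 = handle_request(cache, backend, "user:1")
print(outcome1, outcome2)  # miss hit
```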
The data caching method provided by the application is further described below in connection with different implementation scenarios. With the explosive growth of game data, it is increasingly difficult for traditional relational databases to meet requirements such as highly concurrent reads and writes, efficient storage and access of massive data, high scalability, and high availability. NoSQL databases have developed very rapidly thanks to advantages such as simple expansion and fast reads and writes. Referring to fig. 11, fig. 11 is a schematic diagram of a database system used in the data caching of the present application. A database table is stored in a game database; optionally, the game database is TcaplusDB, and the data storage protocols supported by database tables in TcaplusDB include the tdr protocol and the Protocol Buffers (pb) protocol. Because game activity time is fragmented, players interact frequently, data volumes are large, whole-region servers and partitioned servers are both common, game development changes quickly, operational activities are numerous, and the low-latency requirement on the data storage layer is high; meanwhile, the cache data generated by users' different game operations is plentiful. Therefore, the existing data caching method needs to be improved to raise the cache hit rate of game data, reduce the pressure on the back-end game server, improve the speed at which the access server processes game users' access requests, and increase the throughput of the access server.
In the structure shown in fig. 11, a TcaplusDB record is composed of rows and fields; fields support nested types, with at most 32 levels of nesting. A single record can be up to 10 MB, and commonly used object files can be serialized into binary files for storage.
Referring to fig. 12, fig. 12 is a schematic process diagram of a data caching method according to the present application. The game data to be cached in fig. 12 refers to table 3, in which the game user ID, the purchase of game equipment A, participation in game activity B, and the number of props C obtained as game rewards need to be cached simultaneously as concurrent data.
TABLE 3
Fig. 12 specifically includes the following steps:
step 1201: the network proxy server process obtains configuration information of an expansion plug-in of a data cache process of the target game APP.
This is combined with the data flow diagram shown in fig. 10 in the previous embodiment. The filter and the memory server form an Istio-based cloud-native universal cache component that can execute the data caching method; it can be deployed in the server of a game operator or, according to the game operator's requirements, in a cloud server of a cloud server cluster operator. A game user initiates hypertext transfer protocol (HTTP) request information to a game application (APP) service process deployed in the Istio cluster and expects to obtain a service response result. The access traffic of the game APP is hijacked by the network proxy service process and enters the cache processing flow of the extension plug-in deployed at the gateway of the Istio cluster. In the data caching method provided by the application, the Istio-based cloud-native universal caching mechanism is composed of the extension plug-in injected into the network proxy service process and the memory server; the underlying cache component uses the remote dictionary service system, a non-relational key-value storage system, so that relying on the cloud server cluster can effectively improve the maintenance efficiency of the game APP.
Step 1202: and receiving the hypertext transfer protocol request information sent by the target game APP, and determining a caching strategy of the data caching process according to the configuration information.
Step 1203: and extracting the cache keywords in the cache policy, and determining the hit result of the hypertext transfer protocol request information according to the cache keywords.
Step 1204: when the hypertext transfer protocol request information is determined to be missed according to the hit result, a response state code and response data of the back-end server are acquired and stored in table 3.
When the hypertext transfer protocol request information hits the data cached in table 3, the game user ID, the purchase of game equipment A, participation in game activity B, and the number of props C obtained as game rewards in table 3 can be sent directly to the user of the target game APP. The game user can thus learn his or her own ID, the purchase status of game equipment A, the participation status of game activity B, and the number of props C obtained as game rewards.
Step 1205: and judging the caching position of the data according to the caching strategy, the response state code and the response data, and caching the data of the hypertext transfer protocol request information according to the caching position of the data.
In step 1205, it is first judged whether the cache space of the target caching position is full. If it is full, the local HTTP cache is screened according to a preset caching policy and the screened data is then cached; if not, the data of the hypertext transfer protocol request information is cached directly. In this way, the game data caching process is triggered through steps 1201 to 1205, and game data caching is performed in the database structure shown in fig. 10, so that the game APP can perform calculation directly on the cached data when calling game data, saving the time of frequent data queries; a user querying game data can directly obtain the query result from the cached data, saving the time of querying the concurrent data shown in table 3.
The beneficial technical effects are as follows:
1) The embodiment of the application triggers a data caching process in response to a received data caching request; the network proxy service process obtains configuration information of the extension plug-in of the data caching process; in response to received hypertext transfer protocol request information, the caching policy of the data caching process is determined; the cache keyword in the caching policy is extracted; the hit result of the hypertext transfer protocol request information is determined according to the cache keyword; when it is determined from the hit result that the hypertext transfer protocol request information misses, the response status code and response data of the back-end server are acquired; and the data of the hypertext transfer protocol request information is cached according to the caching policy, the response status code, and the response data. Therefore, the data caching method provided by the application is independent of programming language, and the caching policy formed from the extension plug-in's configuration information can improve the cache hit rate, reduce the pressure on the back-end server, and improve the speed at which the access server processes user access requests, thereby increasing the throughput of the access server.
2) According to the embodiment of the application, the configured extension plug-in can, by reading the corresponding configuration file and working with the memory server, generate the cache key based on the HttpURL parameters and key fields in HttpHeaders and HttpBody, and judge, based on the status code and error code of the external service, whether to cache, how long to cache, and which storage instance configuration to use. The configuration information of the extension plug-in can be flexibly adjusted and iteratively updated according to the requirements of different users, which effectively improves the maintenance efficiency of the service APP and improves the reliability and extensibility of the data cache.
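The cache-key generation described here — selecting fields from the HttpURL parameters, HttpHeaders and HttpBody — can be sketched as follows. The `key_fields` configuration shape and the SHA-256 digest are assumptions for illustration, not the plug-in's actual key format:

```python
import hashlib

def build_cache_key(url_params, headers, body, key_fields):
    """Build a cache key from selected fields of the request.

    `key_fields` is a hypothetical plug-in configuration naming which
    fields participate, e.g.
    {"url": ["uid"], "headers": ["X-App-Id"], "body": ["action"]}.
    """
    parts = []
    for name in key_fields.get("url", []):
        parts.append("u:%s=%s" % (name, url_params.get(name)))
    for name in key_fields.get("headers", []):
        parts.append("h:%s=%s" % (name, headers.get(name)))
    for name in key_fields.get("body", []):
        parts.append("b:%s=%s" % (name, body.get(name)))
    raw = "|".join(parts)
    return hashlib.sha256(raw.encode()).hexdigest()  # fixed-length key

fields = {"url": ["uid"], "headers": ["X-App-Id"], "body": ["action"]}
k = build_cache_key({"uid": "1"}, {"X-App-Id": "bank-a"},
                    {"action": "query"}, fields)
print(len(k))  # 64
```

Identical requests produce identical keys, so repeated queries hit the same cache entry regardless of which instance serves them.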
The above embodiments are merely examples of the present invention, and are not intended to limit the scope of the present invention, so any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (11)

1. A data caching method, the method comprising:
the network proxy server process obtains configuration information of an expansion plug-in of the data caching process;
responding to the received hypertext transfer protocol request information, and determining a caching strategy of the data caching process according to the configuration information;
extracting a cache keyword in the cache policy;
determining hit results of the hypertext transfer protocol request information according to the cache keywords;
when the hit result determines that the hypertext transfer protocol request information hits, sending cache information corresponding to the hypertext transfer protocol request information;
when the hypertext transfer protocol request information is determined to be missed according to the hit result, a response status code and response data of the back-end server are obtained;
and caching the data of the hypertext transfer protocol request information according to the caching strategy, the response status code and the response data.
2. The method of claim 1, wherein the network proxy service process obtains configuration information of an extension plug-in of a data caching process, comprising:
the network proxy server process acquires an expansion plug-in of a data caching process;
and analyzing the configuration file of the expansion plug-in to obtain a request path of the hypertext transfer protocol, a generation mode of a cache keyword, a cache expiration time, a cache mode and an encryption mode.
3. The method of claim 1, wherein responsive to the received hypertext transfer protocol request message, determining a caching policy for the data caching process based on the configuration information comprises:
analyzing the received hypertext transfer protocol request information to obtain message header information, message body information and request parameters of the hypertext transfer protocol request information;
and generating a caching strategy of the data caching process according to the message header information, the message body information and the request parameter.
4. The method of claim 1, wherein extracting the cache key in the cache policy comprises:
extracting corresponding fields from the uniform resource locator parameters to generate cache keywords; or
extracting corresponding fields from parameters of the message body information to generate cache keywords; or
extracting corresponding fields from parameters of the message header information to generate a cache keyword.
5. The method of claim 1, wherein determining the hit of the hypertext transfer protocol request message based on the cache key comprises:
receiving a data interaction result sent by a remote dictionary service system through a memory server;
transmitting the data interaction result to an expansion plug-in through the memory server;
analyzing the data interaction result through the expansion plug-in, and determining a hit result of the hypertext transfer protocol request information;
when the hit result determines that the hypertext transfer protocol request information is missed, sending the hypertext transfer protocol request information to an interface of a back-end server;
and when the hypertext transfer protocol request information hits, sending the hit result of the hypertext transfer protocol request information to an initiating terminal of the hypertext transfer protocol request information.
6. The method of claim 1, wherein buffering the data of the hypertext transfer protocol request message according to the buffering policy, the response status code, and the response data, comprises:
When the response status code and the response data meet the caching condition, determining a memory server and a corresponding cache saving process through the caching policy;
transmitting data of the hypertext transfer protocol request information to a cache storage process through a memory server;
configuring a cache expiration time for the cache saving process;
and caching the data of the hypertext transfer protocol request information through the caching and storing process.
7. The method according to claim 1, wherein the method further comprises:
verifying authority information of the service object based on the service object identification of the network proxy service process;
and triggering the data caching process to be adjusted when the identification of the service object is consistent with the corresponding authority information.
8. A data caching apparatus, the apparatus comprising:
the information transmission device is used for the network proxy server process to acquire the configuration information of the expansion plug-in of the data cache process;
the information processing device is used for responding to the received hypertext transfer protocol request information and determining a caching strategy of the data caching process according to the configuration information;
the information processing device is used for extracting the cache key words in the cache strategy;
The information processing device is used for determining a hit result of the hypertext transfer protocol request information according to the cache key;
information processing means for transmitting cache information corresponding to the hypertext transfer protocol request information when it is determined that the hypertext transfer protocol request information hits according to a hit result;
the information processing device is used for acquiring a response state code and response data of the back-end server side when the hypertext transfer protocol request information is determined to be missed according to the hit result;
and the information processing device is used for caching the data of the hypertext transfer protocol request information according to the caching strategy, the response state code and the response data.
9. An electronic device, the electronic device comprising:
a memory for storing executable instructions;
a processor for implementing the data caching method of any one of claims 1 to 7 when executing executable instructions stored in said memory.
10. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the data caching method of any one of claims 1 to 7.
11. A computer readable storage medium storing executable instructions which when executed by a processor implement the data caching method of any one of claims 1 to 7.
CN202310138233.7A 2023-02-13 2023-02-13 Data caching method, device, software program, equipment and storage medium Pending CN116974780A (en)


Publication Number: CN116974780A; Publication Date: 2023-10-31


