CN113590665B - Cache monitoring management method, device, system, equipment and storage medium - Google Patents


Info

Publication number
CN113590665B
CN113590665B (application CN202110924879.9A)
Authority
CN
China
Prior art keywords
cache
data
target
monitoring
buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110924879.9A
Other languages
Chinese (zh)
Other versions
CN113590665A (en)
Inventor
Wang Tao (王涛)
Current Assignee
Shenzhen Lian Intellectual Property Service Center
Xinjiang Beidou Tongchuang Information Technology Co ltd
Original Assignee
Xinjiang Beidou Tongchuang Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xinjiang Beidou Tongchuang Information Technology Co ltd filed Critical Xinjiang Beidou Tongchuang Information Technology Co ltd
Priority to CN202110924879.9A priority Critical patent/CN113590665B/en
Publication of CN113590665A publication Critical patent/CN113590665A/en
Application granted granted Critical
Publication of CN113590665B publication Critical patent/CN113590665B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3034Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a storage system, e.g. DASD based or network based
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25Integrating or interfacing systems involving database management systems
    • G06F16/252Integrating or interfacing systems involving database management systems between a Database Management System and a front-end application

Abstract

The application provides a cache monitoring and management method, apparatus, system, device, and storage medium, wherein the method comprises the following steps: intercepting a data request sent by a first client, wherein the data request carries a first access parameter and attribute information of target data to be requested; determining a first cache mode according to the first access parameter; determining, according to the attribute information, a target cache region corresponding to the target data under the first cache mode; requesting the target data from the target cache region according to the attribute information; intercepting the target data obtained from the target cache region; and performing data analysis on the data request and the target data to obtain cache monitoring data of the target cache region. Through the embedded intermediate tool, the application can manage and monitor the caches of various servers with high flexibility, and can specifically monitor the cache hit rate of each cache and the exact memory occupancy of each cache.

Description

Cache monitoring management method, device, system, equipment and storage medium
Technical Field
The present application relates to the field of data caching technologies, and in particular, to a method, an apparatus, a system, a device, and a storage medium for cache monitoring and management.
Background
With the rapid development of internet technology, ever faster service response is required, and the primary means of improving service response is caching. In the prior art, caches are mainly divided into local caches and distributed caches.
For a local cache, the prior art can only obtain the overall memory occupancy through the JVM (Java virtual machine); it cannot distinguish, for a specific cache, the business keys and their corresponding values.
For a distributed cache such as Redis or Memcache, only part of the data can be queried by command, and no summary of the data is available.
The prior art also provides a method of manually adding logs to the business code and then judging from log analysis. This approach is highly invasive to the business code; in a system with complex services, manually adding cache monitoring to every service line clutters the system code, and analyzing each service line separately is costly.
Disclosure of Invention
The application aims to solve the technical problems in the prior art that monitoring of the server cache is too coarse and unspecific, and that such monitoring is cumbersome. The application provides a cache monitoring and management method, apparatus, system, device, and storage medium, whose main purpose is to monitor the cache data of a server simply, conveniently, and specifically.
In order to achieve the above object, the present application provides a method for monitoring and managing a cache, which includes:
intercepting a data request sent by a first client, wherein the data request carries first access parameters and attribute information of target data to be requested;
determining a first cache according to an interface parameter or an annotation parameter in the first access parameter, wherein the first cache is a local cache or a distributed cache;
determining a target sub-cache region corresponding to target data in the first cache according to the attribute information;
requesting target data from the target sub-cache area according to the attribute information;
intercepting target data acquired from a target sub-cache area;
and carrying out data analysis on the data request and the target data to obtain cache monitoring data of the target sub-cache region, wherein the cache monitoring data comprises at least one of the number of keys in the sub-cache region, the memory size occupied by the keys and related data of each key.
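The six steps above can be condensed into a minimal runnable sketch. All class, method, and parameter names below are illustrative assumptions for exposition; they are not the application's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the claimed flow: intercept a request, pick a first
// cache from the access parameter, read the target sub-cache region by key,
// and record per-region monitoring counters for later analysis.
public class CacheMonitorSketch {
    // first cache name -> sub-cache region name -> key/value pairs
    static Map<String, Map<String, Map<String, String>>> caches = new HashMap<>();
    static Map<String, Integer> hits = new HashMap<>();   // per-region hit counts
    static Map<String, Integer> calls = new HashMap<>();  // per-region total calls

    // S100-S600 condensed: route by access parameter, fetch by key, record stats.
    static String handleRequest(String accessParam, String subRegion, String key) {
        String cacheName = accessParam.startsWith("local") ? "local" : "distributed"; // S200
        Map<String, String> region = caches
                .computeIfAbsent(cacheName, c -> new HashMap<>())
                .computeIfAbsent(subRegion, r -> new HashMap<>());                    // S300
        String value = region.get(key);                                              // S400/S500
        calls.merge(subRegion, 1, Integer::sum);                                     // S600
        if (value != null) hits.merge(subRegion, 1, Integer::sum);
        return value;
    }

    public static void main(String[] args) {
        caches.computeIfAbsent("local", c -> new HashMap<>())
              .computeIfAbsent("userCache", r -> new HashMap<>())
              .put("user:1", "Alice");
        System.out.println(handleRequest("local-api", "userCache", "user:1")); // hit
        System.out.println(handleRequest("local-api", "userCache", "user:2")); // miss
    }
}
```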
In addition, in order to achieve the above object, the present application further provides a cache monitoring and managing device, which includes:
the first interception module is used for intercepting a data request sent by the first client, wherein the data request carries first access parameters and attribute information of target data to be requested;
the first determining module is used for determining a first cache according to an interface parameter or an annotation parameter in the first access parameter;
the second determining module is used for determining a target sub-cache area corresponding to the target data in the first cache according to the attribute information;
the first request module is used for requesting target data from the target sub-cache area according to the attribute information;
the second interception module is used for intercepting the target data acquired from the target sub-cache area;
the data analysis module is used for carrying out data analysis on the data request and the target data to obtain cache monitoring data of the target sub-cache region, wherein the cache monitoring data comprises at least one of the number of keys in the sub-cache region, the memory size occupied by the keys and related data of each key.
In addition, in order to achieve the above object, the present application further provides a cache monitoring management system, which includes: the system comprises an application server and a monitoring server, wherein a cache monitoring management tool is installed in the application server;
the cache monitoring management tool is used for intercepting a data request sent by a first client, wherein the data request carries first access parameters and attribute information of target data to be requested;
the cache monitoring management tool is also used for determining a first cache according to the interface parameter or the annotation parameter in the first access parameter;
the cache monitoring management tool is also used for determining a target sub-cache region corresponding to the target data in the first cache according to the attribute information;
the cache monitoring management tool is also used for requesting target data from the target sub-cache area according to the attribute information;
the cache monitoring management tool is also used for intercepting target data acquired from the target sub-cache area;
the monitoring server is used for carrying out data analysis on the received data request and target data uploaded by the cache monitoring management tool to obtain cache monitoring data of the target sub-cache area, wherein the cache monitoring data comprises at least one of the number of keys in the sub-cache area, the memory size occupied by the keys and related data of each key.
To achieve the above object, the present application also provides a computer device comprising a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, performs the steps of the cache monitoring management method as in any one of the preceding claims.
To achieve the above object, the present application further provides a computer-readable storage medium having computer-readable instructions stored thereon, which when executed by a processor, cause the processor to perform the steps of the cache monitor management method as in any one of the preceding claims.
With the cache monitoring and management method, apparatus, system, device, and storage medium, the cache monitoring management tool serves as an intermediate tool that can be installed in various servers under any supported configuration, so it is widely applicable. Unlike the prior art, there is no need to manually write logs to track cache data, which reduces coding work. For the business system where the application server resides, the original code of the business system is not changed, so the business system code is not cluttered; code invasiveness is low, which ensures the stability and security of the business system to a certain extent. In addition, the cache monitoring management tool monitors all cache-related data in real time, so every service line in the business system can be monitored, at low cost and with wide monitoring coverage.
In addition, the embedded cache monitoring management tool can manage and monitor the caches of various servers with high flexibility, and can specifically monitor the cache hit rate and the exact memory occupancy of each cache (for example, a local cache and a distributed cache), whereas the prior art can only obtain a rough picture of the cache.
Drawings
FIG. 1 is a diagram illustrating an application scenario of a cache monitoring management method according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for cache monitoring and management according to an embodiment of the present application;
FIG. 3 is a block diagram illustrating a cache monitoring management device according to an embodiment of the present application;
FIG. 4 is a block diagram showing an internal structure of a computer device according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The cache monitoring management method provided by the application can be applied to a cache monitoring management system as shown in FIG. 1. The cache monitoring management system comprises an application server, a monitoring server, and a distributed cache; a cache monitoring management tool is installed in the application server, and a local cache is provided in the application server. The cache monitoring management tool communicates with the monitoring server, the first client, and the distributed cache through a network, and the first client communicates with the application server through the network. The cache monitoring management tool exchanges data with the local cache through software and hardware in the application server. The cache monitoring management tool is specifically an SDK or program package. The cache monitoring management tool is used for intercepting a data request sent by a first client, wherein the data request carries a first access parameter and attribute information of target data to be requested; the cache monitoring management tool is also used for determining a first cache according to the interface parameter or the annotation parameter in the first access parameter; the cache monitoring management tool is also used for determining a target sub-cache region corresponding to the target data in the first cache according to the attribute information; the cache monitoring management tool is also used for requesting the target data from the target sub-cache region according to the attribute information; the cache monitoring management tool is also used for intercepting the target data acquired from the target sub-cache region; and the monitoring server is used for performing data analysis on the data request and the target data uploaded by the cache monitoring management tool to obtain the cache monitoring data of the target sub-cache region.
The first client is a client corresponding to the application server, and the application server is the origin server of the first client. The user of the first client is typically a user of an application program, web page, or address corresponding to the application server. The application server further comprises a corresponding database. When the first client sends a data request to the application server, the corresponding target data is searched in the local cache or the distributed cache according to the cache mode in the data request; if the target data cannot be found in the cache, the application server searches the corresponding database for it and returns it to the first client. The application server also writes the target data into the local cache or the distributed cache. Whether the target data is written into the local cache or the distributed cache may be specified by the first client, set by system default, or determined according to the actual data type of the target data and the data types that the local cache and the distributed cache support storing.
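The read path described above is essentially a cache-aside lookup: try the cache first, fall back to the database on a miss, and write the result back for later requests. A minimal sketch follows; the class and key names are hypothetical and not taken from the application.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative cache-aside lookup: cache first, database on miss, write-back.
public class CacheAside {
    static Map<String, String> cache = new HashMap<>();

    static String get(String key, Function<String, String> dbLookup) {
        String value = cache.get(key);
        if (value == null) {                          // cache miss: fall back to the database
            value = dbLookup.apply(key);
            if (value != null) cache.put(key, value); // write back for subsequent requests
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, String> db = Map.of("order:7", "pending");
        System.out.println(get("order:7", db::get)); // miss -> loaded from the database
        System.out.println(get("order:7", db::get)); // hit -> served from the cache
    }
}
```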
The application server may be implemented as a stand-alone server or as a cluster of servers.
In another embodiment, the cache monitoring management system may further include a second client corresponding to the monitoring server, where the monitoring server communicates with the second client through a network. The user of the second client is typically an engineer or other professional monitoring the cached data. And the engineering personnel can manage and configure the cache monitoring management tool in the application server through the second client and the monitoring server in sequence. The second client can also display the cache monitoring data acquired by the monitoring server on an interface in a visual mode for engineering personnel to view in real time.
The first client and the second client may be, but are not limited to, various personal computers, notebook computers, smartphones, tablet computers, and portable wearable devices.
Of course, the cache monitoring management method provided by the application can also be independently realized by a cache monitoring management tool in the application server.
FIG. 2 is a flowchart illustrating a method for cache monitoring management according to an embodiment of the present application. Referring to fig. 2, an example of the method applied to the cache monitoring management system in fig. 1 will be described. The cache monitoring management method comprises the following steps S100-S600.
S100: intercepting a data request sent by a first client, wherein the data request carries first access parameters and attribute information of target data to be requested.
Specifically, the data request is the access request sent by the first client to the application server. The first access parameter includes an interface parameter or an annotation parameter, which indicates from which cache-provided interface the target data needs to be read. Both the local cache and the distributed cache provide an API interface for reading or writing data. The interface parameter includes the interface information provided when data is requested from a cache in the form of an interface call, indicating from which cache-provided interface the target data is to be read. The annotation parameter includes the interface information provided when data is requested from a cache in an annotated manner, likewise indicating from which cache-provided interface the target data is to be read. The first access parameter is set in advance, before the data request is sent, and may be modified according to the actual situation. From the first access parameter, it can be determined both from which cache the target data is read and in which way it is read.
In the prior art, the application server is not provided with a cache monitoring management tool, so that the data request of the first client is directly processed by the application server. The buffer memory monitoring management tool of the embodiment of the application can be used as a bridge between an application server and a buffer memory to manage the buffer memory and monitor the buffer memory. Managing the cache specifically includes writing data to be cached to the cache, reading data from the cache according to a data request, and the like, without limitation.
The business system code corresponding to the application server only needs to import the SDK jar of the cache monitoring management tool and use a single Java annotation to complete cache access and monitoring, so code invasiveness to the business system is extremely low.
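The text does not define the annotation itself, so the following is a hypothetical sketch of what such a Java annotation could look like: one runtime annotation on a business method declaring which first cache and which sub-cache region to route through. The annotation name and its attributes are invented for illustration.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical monitoring annotation: the SDK would intercept calls to
// annotated methods and route their reads through the monitored cache.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface MonitoredCache {
    String cache() default "local"; // which first cache: "local" or "distributed"
    String region();                // target sub-cache region, e.g. "userCache"
}

public class AnnotationExample {
    @MonitoredCache(cache = "distributed", region = "userCache")
    public String loadUser(String id) {
        return "user-" + id; // placeholder business logic
    }

    // Read the annotation back via reflection, as an interceptor would.
    static String describe() {
        try {
            MonitoredCache meta = AnnotationExample.class
                    .getMethod("loadUser", String.class)
                    .getAnnotation(MonitoredCache.class);
            return meta.cache() + "/" + meta.region();
        } catch (ReflectiveOperationException e) {
            return "error";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe()); // distributed/userCache
    }
}
```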
The target data is data requested by the first client from a local cache or a distributed cache or a database of the application server. The attribute information may include, but is not limited to, a unique identification of the target data, a data type of the target data, and the like. The unique identifier is identification information of the target data, and is used for uniquely determining the target data, for example, may be a key of the target data.
S200: the first cache is determined according to the interface parameters or the annotation parameters in the first access parameters.
Specifically, the interface parameter or the annotation parameter in the first access parameter carries a unique identifier for locating the first cache. Thus, the first cache, i.e. from which cache area the target data is queried, may be determined based on the interface parameters or the annotation parameters. The first cache is a local cache or a distributed cache. The local cache is a local cache of the application server, and the local cache comprises a local memory and a local hard disk. The distributed cache is a cache introduced from outside by the application server, and the distributed cache is a cluster formed by one or more cache servers.
The cache monitoring management tool can provide an API interface for accessing the cache, and can also provide an annotation-based access mode.
According to the interface parameters, the local cache or the distributed cache can be accessed by calling the API interface, and the data is read from the cache. The API interfaces include a local cache API interface and a distributed cache API interface.
And accessing the local cache or the distributed cache in a comment access mode according to the comment parameters, and reading data from the cache.
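Step S200 — locating the first cache from the identifier carried in the interface or annotation parameter — can be sketched as a simple routing table. The parameter format and mode names here are assumptions for illustration only.

```java
// Hypothetical S200 routing: the access parameter carries an identifier
// (e.g. "api:redis" for an interface call, "annotation:guava" for annotated
// access) that locates the first cache as local or distributed.
public class CacheResolver {
    static String resolve(String accessParam) {
        String target = accessParam.substring(accessParam.indexOf(':') + 1);
        switch (target) {
            case "redis":
            case "memcache":
                return "distributed";
            case "caffeine":
            case "guava":
            case "springcache":
                return "local";
            default:
                throw new IllegalArgumentException("unknown cache id: " + target);
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve("api:redis"));        // distributed
        System.out.println(resolve("annotation:guava")); // local
    }
}
```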
S300: and determining a target sub-cache region corresponding to the target data in the first cache according to the attribute information.
Specifically, the first cache includes at least one sub-cache region, and each sub-cache region corresponds to a different caching mode; that is, sub-cache regions correspond one-to-one with caching modes. Each caching mode may support different data types to be stored, and its data-processing performance is not exactly the same. Therefore, different data is written into a sub-cache region of the cache according to a certain caching rule, and the embodiment of the application determines, according to the attribute information of the target data, in which target sub-cache region of the first cache the data is stored.
The caching modes of the local cache may include: Spring Cache, Guava Cache, Caffeine, HashMap, ConcurrentHashMap, List, Vector, H5 LocalStorage, and the like. The caching modes of the distributed cache include Redis, Memcache, and the like.
S400: and requesting target data from the target sub-cache area according to the attribute information.
Specifically, after the cache monitoring management tool determines the target sub-cache area, a query instruction for requesting target data from the target sub-cache area is generated according to the attribute information and the identification or address of the target sub-cache area, so that the first cache queries the target data in the target sub-cache area according to the query instruction.
S500: and intercepting the target data acquired from the target sub-cache area.
Specifically, the first cache searches its memory for the corresponding target data according to the attribute information in the data request; if the target data is stored in the target sub-cache region, it can be found. However, the target sub-cache region does not return the target data to the client directly; the data must be forwarded through the cache monitoring management tool, so the tool can intercept it.
After the first cache queries the target data, the target data is not directly returned to the first client by the application server, but is intercepted by the cache monitoring management tool, and then the cache monitoring management tool returns the target data to the first client through the application server. And the buffer monitoring management tool is equivalent to a buffer monitoring management tool serving as a bridge for data transmission. Thus, the cache monitor management tool may obtain all data to and from or related to the cache.
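The "bridge" role described above can be sketched as a wrapper around every cache read: because both the request and the returned value pass through the tool, both can be recorded for later analysis. The names below are illustrative, not the application's implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Sketch of the interception bridge: wrap each cache read so the tool sees
// the data request on the way in and the target data on the way out.
public class MonitoringBridge {
    static List<String> log = new ArrayList<>(); // captured request/response records

    static String interceptedGet(String key, Function<String, String> cacheRead) {
        log.add("request:" + key);                // intercept the data request (S100)
        String value = cacheRead.apply(key);      // forward to the target sub-cache (S400)
        log.add("response:" + key + "=" + value); // intercept the returned data (S500)
        return value;                             // then hand it back toward the client
    }

    public static void main(String[] args) {
        Map<String, String> subCache = Map.of("cfg:a", "1");
        System.out.println(interceptedGet("cfg:a", subCache::get));
        System.out.println(log);
    }
}
```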
S600: and carrying out data analysis on the data request and the target data to obtain the cache monitoring data of the target sub-cache area.
Specifically, the cache monitoring data includes at least one of the number of keys in the sub-cache region, the memory occupied by the keys, and the related data of each key. The related data of a key may include its call data and its cached data. Data analysis specifically comprises operations such as cleaning, extracting, and counting the data. First cache monitoring data, such as the business system identification, the application server identification, and the IP address of the application server, can be parsed from the data request.
The business system is supported by at least one application server. Each application server has a unique identification and a unique IP address, and each is provided with a corresponding local cache. A client may request data from any one of the application servers in the business system, and the data request to any application server can be intercepted by the cache monitoring management tool.
The data analysis may be performed by the cache monitoring management tool, or the tool may send the data request and the target data to the monitoring server, which performs the data analysis to obtain the cache monitoring data. That is, steps S100 to S500 are performed by the cache monitoring management tool, and step S600 may be performed by either the cache monitoring management tool or the monitoring server.
The target data comprises the keys to be called and the values corresponding to the keys. Through data analysis, the number of times each key is called can be counted; from the value corresponding to each key, second cache monitoring data can be derived, such as the value size, call time, cache time, expiration, the number of keys in the cache, the memory occupied by all keys, the number and memory occupancy of keys of the same type, and various specific cache occupancy conditions. The cache hit rate can also be computed: it is the ratio of the number of hits to the total number of calls, where the number of hits is the number of times the target data could be obtained from the cache and the total number is the total number of times the target data was requested.
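The hit-rate and per-key statistics just described reduce to simple counting over the intercepted records. A minimal sketch, with an invented record format:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative S600 analysis: hit rate = hits / total calls, plus per-key
// call counts, computed from intercepted call records.
public class CacheStats {
    // Each entry is one intercepted call: true if served from the cache.
    static double hitRate(List<Boolean> outcomes) {
        long hits = outcomes.stream().filter(h -> h).count();
        return outcomes.isEmpty() ? 0.0 : (double) hits / outcomes.size();
    }

    // How many times each key was called, from the intercepted key sequence.
    static Map<String, Long> callsPerKey(List<String> calledKeys) {
        return calledKeys.stream()
                .collect(Collectors.groupingBy(k -> k, Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(hitRate(List.of(true, true, false, true))); // 0.75
        System.out.println(callsPerKey(List.of("a", "b", "a")));       // a called twice, b once
    }
}
```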
The cache monitoring management tool, as an intermediate tool, can be installed in various servers under any supported configuration, so it is widely applicable. Unlike the prior art, there is no need to manually write logs to track cache data, which reduces coding work. For the business system where the application server resides, the original code of the business system is not changed, so the business system code is not cluttered; code invasiveness is low, which ensures the stability and security of the business system to a certain extent. In addition, the cache monitoring management tool monitors all cache-related data in real time, so every service line in the business system can be monitored, at low cost and with wide monitoring coverage.
In addition, the embedded cache monitoring management tool can manage and monitor the caches of various servers with high flexibility, and can specifically monitor the cache hit rate and the exact memory occupancy of each cache (for example, a local cache and a distributed cache), whereas the prior art can only obtain a rough picture of the cache.
In one embodiment, prior to step S200, the method further comprises:
receiving first cache configuration data issued by a user, and generating or updating a corresponding first cache rule according to the first cache configuration data;
intercepting a data writing request, wherein the data writing request carries data to be cached and a second access parameter;
determining a second cache according to the second access parameter;
and writing the data to be cached into the sub-cache area corresponding to the second cache according to the data type of the data to be cached and the first cache rule.
Specifically, the first caching rule is used for defining a location of the data cache, namely defining a writing rule of the data. I.e. in which cache the data to be written is written.
The first cache configuration data are sequentially sent to a cache monitoring management tool by engineering personnel through a second client corresponding to the monitoring server and the monitoring server. The first cache configuration data is used for configuring a data writing rule and/or a data reading rule of the local cache and/or the distributed cache. The method comprises the steps that a buffer monitoring management tool receives first buffer configuration data issued by a monitoring server; and correspondingly configuring the first cache rule according to the first cache configuration data to generate the first cache rule or update the existing first cache rule.
When the target data cannot be found from the cache, the application server queries the corresponding database for the target data and returns the target data to the first client. Meanwhile, the application server can also store the queried target data into a local cache or a distributed cache as data to be cached. Therefore, the data writing request is initiated by the application server, and the cache monitoring management tool intercepts the data writing request to serve as an intermediate bridge to manage the cached data writing.
Of course, the cache monitoring management tool may also be used as a bridge between the application server and the database of the application server, instead of the application server directly requesting the target data from the corresponding database, and then returning the target data to the first client through the application server. And meanwhile, the buffer monitoring management tool stores the acquired target data into the corresponding sub-buffer area as the data to be buffered according to the first buffer rule.
The second access parameter comprises an interface parameter or an annotation parameter. It is possible to determine to which buffer the data to be buffered is written, and also in which way the data to be buffered is written, based on the second access parameters. The second access parameter may be modified according to the actual situation.
The interface parameter or the annotation parameter in the second access parameter carries a unique identification for locating the second cache. Thus, the second buffer, i.e. the buffer into which the data to be buffered is written, can be determined from the interface parameters or the annotation parameters. The second cache is a local cache or a distributed cache.
The cache monitoring management tool can provide an API interface for accessing the cache, and can also provide an annotation-based access mode.
According to the interface parameters, the local cache or the distributed cache can be accessed by calling the API interface, and the data to be cached is written into the corresponding cache. The API interfaces include a local cache API interface and a distributed cache API interface.
And accessing the local cache or the distributed cache in an annotation access mode according to the annotation parameters, and writing the data to be cached into the cache.
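A minimal sketch of this routing step (the parameter layout and the `cache_id` field name are assumptions made for illustration):

```python
LOCAL, DISTRIBUTED = "local", "distributed"

# Stand-ins for the local cache and the distributed cache.
caches = {LOCAL: {}, DISTRIBUTED: {}}

def write(second_access_param, key, value):
    # The interface or annotation parameter carries a unique identification
    # (here a hypothetical "cache_id" field) locating the second cache.
    target = second_access_param["cache_id"]
    if target not in caches:
        raise ValueError(f"unknown cache: {target!r}")
    caches[target][key] = value
    return target

write({"cache_id": LOCAL}, "session:42", {"uid": 42})
write({"cache_id": DISTRIBUTED}, "dict:country", ["CN", "US"])
```

Whether the identifier arrives through an API call or an annotation, the same lookup decides which cache receives the write.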
The first cache configuration data may be modified according to the specific upgrade needs of the service system, so as to update the version of the cache monitoring management tool in step with the service system. The cache monitoring management tool can then cache more types of data, i.e. support the writing of more data types, without requiring major changes to the service system, which saves development cost.
In one embodiment, writing the data to be cached into the sub-cache area corresponding to the second cache according to the data type of the data to be cached and the first cache rule includes:
determining an available cache mode supporting the data to be cached from all first candidate cache modes corresponding to the second cache according to the data type of the data to be cached, wherein the first candidate cache mode has a mapping relation with a sub-cache area in the second cache;
determining an optimal first target cache mode in available cache modes according to a first cache rule;
and writing the data to be cached into a sub-cache area corresponding to the first target cache mode in the second cache.
Specifically, if the second cache is a local cache, the first candidate cache mode is a plurality of cache modes corresponding to the local cache. If the second cache is a distributed cache, the first candidate cache mode is a plurality of cache modes corresponding to the distributed cache. The available caching modes are one or more caching modes which support the data types of the data to be cached in a plurality of caching modes corresponding to the local caching or the distributed caching.
The data types supported by different caching modes may overlap; that is, one type of data may be storable by multiple caching modes. The highest-performance mode therefore needs to be screened out, and the data to be cached is stored in the sub-cache area corresponding to this optimal first target cache mode, where it can be processed fastest and most efficiently. This in turn speeds up acquiring the target resource. Each caching mode corresponds to a different sub-cache area.
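The screening described above amounts to a filter-then-rank step. In the sketch below, the mode names come from the document's own examples, but the supported types and performance ranks are invented purely for illustration:

```python
# (mode, data types it supports, performance rank under the first cache rule;
#  lower rank = better). The ranks here are hypothetical.
first_candidate_modes = [
    ("ConcurrentHashMap", (dict, str), 1),
    ("List",              (list,),     2),
    ("HashMap",           (dict, str), 3),
]

def first_target_mode(data_to_cache):
    # Available modes: candidates that support this data type.
    available = [m for m in first_candidate_modes
                 if isinstance(data_to_cache, m[1])]
    if not available:
        raise TypeError(f"no cache mode supports {type(data_to_cache).__name__}")
    # The first cache rule picks the optimal (best-ranked) available mode;
    # the sub-cache area mapped to that mode then receives the data.
    return min(available, key=lambda m: m[2])[0]

first_target_mode({"a": 1})   # two modes support dict; the best rank wins
```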
In one embodiment, step S300 specifically includes the steps of:
acquiring the data type of the target data according to the attribute information;
determining a second target cache mode supporting target data from all second candidate cache modes corresponding to the first cache according to the data type and the first cache rule;
and determining the sub-cache area corresponding to the second target cache mode in the first cache as the target cache area corresponding to the target data.
Specifically, the first cache rule is also used to define how data is correctly read from the cache, i.e. it defines the data reading rule: how to look up data in the correct cache according to its data type, avoiding invalid queries.
Each sub-cache area in the first cache corresponds to one caching mode. The second candidate cache modes are all caching modes corresponding to all sub-cache areas of the first cache. The second target cache mode is a caching mode that supports the data type of the target data, i.e. one in which the target data can be stored. If the first cache is a local cache, the second target cache mode may be one of: Spring Cache, Guava Cache, Caffeine, HashMap, ConcurrentHashMap, List, Vector, H5-LocalStorage, etc. If the first cache is a distributed cache, the second target cache mode may be one of Redis or Memcache. Since data is written according to the writing rule in the first cache rule, it is read according to the corresponding reading rule. The second target cache mode can therefore be determined rapidly from the data type of the target data, and with it the corresponding sub-cache area.
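Because writes follow the rule's type-to-mode mapping, reads can reuse the same mapping to land in the right sub-cache area. A compact sketch (the mapping contents are hypothetical):

```python
# Hypothetical first cache rule: data type -> caching mode, shared by the
# write path and the read path.
rule = {"dict": "ConcurrentHashMap", "list": "List"}

# One sub-cache area per caching mode.
sub_caches = {"ConcurrentHashMap": {"cfg": {"a": 1}}, "List": {}}

def read(key, data_type):
    mode = rule[data_type]    # second target cache mode
    area = sub_caches[mode]   # its sub-cache area is the target cache area
    return area.get(key)

read("cfg", "dict")   # found exactly where the write rule placed it
```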
In one embodiment, the method further comprises:
and receiving second cache configuration data issued by the user, and generating or updating a corresponding second cache rule according to the second cache configuration data, wherein the second cache rule is used for defining the cache size and the expiration policy.
Specifically, the second cache configuration data is used to configure the cache size and expiration policy of the local cache and/or the distributed cache. The second cache rule is used to define or modify the number of cache entries, the cache expiration time, and so on. Engineering personnel can send the second cache configuration data to the cache monitoring management tool through the second client corresponding to the monitoring server. For example, suppose the cache expiration time or capacity needs to be adjusted. Without the cache monitoring management tool, each service system has to modify its code and restart the service: if the number of cache entries is capped at 100 and, after observing the production cache service, needs to be raised to 500, in the prior art the change only takes effect after a code change and a restart. With the cache monitoring management tool, the change takes effect merely by configuring it on the cache monitoring service interface and issuing it to the tool. This enables efficient management of service system performance, reduces code changes in the original service system, and thereby preserves its stability.
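The runtime reconfiguration described here can be sketched as a cache object that accepts new configuration data while running. All names and the naive eviction policy below are illustrative assumptions, not the patented implementation:

```python
import time

class ConfigurableCache:
    def __init__(self, max_entries=100, ttl_seconds=60.0):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._data = {}                      # key -> (value, write_time)

    def configure(self, second_cache_config):
        # Takes effect immediately: no code change, no service restart.
        self.max_entries = second_cache_config.get("max_entries", self.max_entries)
        self.ttl = second_cache_config.get("ttl_seconds", self.ttl)

    def put(self, key, value):
        if key not in self._data and len(self._data) >= self.max_entries:
            self._data.pop(next(iter(self._data)))   # naive eviction
        self._data[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, written = entry
        if time.monotonic() - written > self.ttl:    # expired per policy
            del self._data[key]
            return None
        return value

cache = ConfigurableCache(max_entries=100)
cache.configure({"max_entries": 500})   # e.g. raised after observing production
```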
In addition, different service system frameworks support different caching modes. For example, if the system runs on JDK 1.6, the Caffeine framework cannot be used, whereas JDK 1.8 can use it. In the prior art, a cache framework matching the service system's framework must be selected. In the embodiment of the present application, barrier-free data interaction between the business system and the cache only requires updating or adding, in the cache monitoring management tool, the cache frameworks supported by the business system. The tool then holds a cache framework supported by the service system and can interact with the cache without obstacles. Because the service system does not interact with the cache directly, even a cache framework that the cache itself does not support poses no problem: the cache monitoring management tool, acting as an intermediate bridge, forwards the data between the service system and the cache, achieving barrier-free docking and dynamic selection of the caching mode.
In one embodiment, the method further comprises:
Uploading the cache monitoring data to a monitoring server;
and generating and displaying a corresponding cache monitoring chart in real time at a monitoring service client corresponding to the monitoring server according to the cache monitoring data.
Specifically, the second client corresponding to the monitoring server is a monitoring management platform; by processing the cache monitoring data generated at the bottom layer, it can display cache usage through its interface, so that engineering personnel can observe the usage of the local cache or distributed cache in real time.
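The cache monitoring data listed earlier (key count, memory occupied by keys, per-key details) could be aggregated along these lines; `sys.getsizeof` is used here only as a stand-in for real memory accounting:

```python
import sys

def collect_monitoring_data(sub_cache):
    """Illustrative aggregation over one sub-cache area (a plain dict here)."""
    return {
        "key_count": len(sub_cache),
        "keys_memory_bytes": sum(sys.getsizeof(k) for k in sub_cache),
        "per_key": {k: {"value_bytes": sys.getsizeof(v)}
                    for k, v in sub_cache.items()},
    }

stats = collect_monitoring_data({"user:1": "Alice", "dict:country": ["CN", "US"]})
# stats can then be uploaded to the monitoring server and charted.
```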
In one embodiment, the method further comprises: detecting whether abnormal data exists in the cache monitoring data based on a preset early warning rule and, if so, sending out a real-time early warning.
Specifically, the early warning rules can be adjusted to the actual situation. Early warnings are managed through single-table operations on the interface and can be added or deleted; a default configuration is provided. A specific service key can be associated with a corresponding early warning configuration, and when an abnormality is detected for that business key, the corresponding early warning is triggered.
The corresponding early warning mechanism may be triggered, for example, when: the size of a single key exceeds a first threshold, the amount of cached data exceeds a second threshold, or the total memory occupied by the value data of keys of the same type exceeds a third threshold.

More specifically, for example, an alarm is raised when the size of a single key exceeds 500 KB, i.e. when the value corresponding to one key occupies more than 500 KB of memory.

Another example: an alarm is raised when the amount of cached data exceeds 10000, i.e. when the total number of cached keys is greater than 10000.

An alarm is also raised when the value data corresponding to cached keys of the same type totals more than 100 MB.
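The three example thresholds above can be checked mechanically. A hedged sketch, where the key "type" is assumed (for illustration only) to be the prefix before `:`:

```python
KB, MB = 1024, 1024 * 1024

def check_alarms(value_sizes, key_limit=10000):
    """value_sizes: {key: memory occupied by its value, in bytes}."""
    alarms = []
    # 1. A single key whose value exceeds 500 KB.
    for key, size in value_sizes.items():
        if size > 500 * KB:
            alarms.append(("big_key", key))
    # 2. More cached keys than the configured limit.
    if len(value_sizes) > key_limit:
        alarms.append(("too_many_keys", len(value_sizes)))
    # 3. Keys of the same type whose values together exceed 100 MB.
    totals = {}
    for key, size in value_sizes.items():
        key_type = key.split(":", 1)[0]
        totals[key_type] = totals.get(key_type, 0) + size
    for key_type, total in totals.items():
        if total > 100 * MB:
            alarms.append(("type_total", key_type))
    return alarms

check_alarms({"dict:countries": 600 * KB})   # triggers the big-key warning
```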
In addition, server caching in the prior art suffers from many problems. For example, the cache eviction policy problem: with no expiration set, the amount of loaded data grows as version data increases, the JVM suffers full GCs, and the service becomes unavailable. Another example is the large-key problem: a cached value stores a full dictionary table; when a version requirement adds 6000 entries to the table, the value becomes excessively large and the performance of that single interface makes the whole system unavailable. Another example is cache pollution: a service thread takes data out of the cache and performs a delete operation on a specific branch, affecting the correctness of the interface data (the cache does not make a deep copy, so the deletion mutates the source data behind the cache). Another example is cache data inconsistency: while the cache is being updated, a transient empty read can occur, leaving data inconsistent. Yet another example: a landing page reads an env field from the cache, resulting in a blank page (the production environment has no env field).
These problems keep arising because engineering personnel understand cache usage to differing degrees. The cache monitoring management tool helps engineering personnel manage the cache, reduces the hidden dangers that improper cache use poses to the system, and at the same time provides data support and verification support for developers' cache optimization, reducing the occurrence of the above problems.
The service system can introduce the cache monitoring tool through the annotation mode or the interface-calling mode to complete cache access and monitoring, with extremely low code invasiveness, effectively ensuring the stability and security of the service system. The distributed cache and the local cache can be monitored automatically through the cache monitoring tool, yielding specific cache occupancy and cache hit rates without hand-written code, which saves development cost. This fills the technical gap that no complete prior-art technique can automatically monitor the hit rate of a specific cache and the occupancy of cache memory, and perform statistical analysis and data summarization.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
FIG. 3 is a block diagram illustrating a cache monitoring management device according to an embodiment of the present application. Referring to fig. 3, the cache monitoring management device includes:
the first interception module 100 is configured to intercept a data request sent by a first client, where the data request carries a first access parameter and attribute information of target data to be requested;
a first determining module 200, configured to determine a first cache according to the interface parameter or the annotation parameter in the first access parameter;
the second determining module 300 is configured to determine a target sub-buffer corresponding to the target data in the first buffer according to the attribute information;
a first request module 400, configured to request target data from a target sub-buffer according to attribute information;
the second intercepting module 500 is configured to intercept the target data acquired from the target sub-cache;
the data analysis module 600 is configured to perform data analysis on the data request and the target data to obtain cache monitoring data of the target sub-cache area, where the cache monitoring data includes at least one of a number of keys in the sub-cache area, a memory size occupied by the keys, and related data of each key.
In one embodiment, the apparatus further comprises:
the first rule configuration module is used for receiving first cache configuration data issued by a user, generating or updating a corresponding first cache rule according to the first cache configuration data, wherein the first cache rule is used for defining a writing rule of the data;
The third interception module is used for intercepting a data writing request, wherein the data writing request carries data to be cached and second access parameters;
the third determining module is used for determining a second cache according to the second access parameter;
and the writing module is used for writing the data to be cached into the sub-cache area corresponding to the second cache according to the data type of the data to be cached and the first cache rule.
In one embodiment, the writing module specifically includes:
the matching unit is used for determining an available cache mode supporting the data to be cached from all first candidate cache modes corresponding to the second cache according to the data type of the data to be cached, wherein the first candidate cache mode has a mapping relation with a sub-cache area in the second cache;
the screening unit is used for determining an optimal first target cache mode in the available cache modes according to the first cache rule;
the writing unit is used for writing the data to be cached into the sub-cache area corresponding to the first target cache mode in the second cache.
In one embodiment, the second determining module 300 specifically includes:
a data type acquisition unit for acquiring the data type of the target data according to the attribute information;
the first determining unit is used for determining a second target cache mode supporting target data from all second candidate cache modes corresponding to the first cache according to the data type and a first cache rule, and the first cache rule is also used for defining a data reading rule;
and the second determining unit is used for determining the sub-cache area corresponding to the second target cache mode in the first cache as the target cache area corresponding to the target data.
In one embodiment, the apparatus further comprises:
the second rule configuration module is used for receiving second cache configuration data issued by a user, generating or updating a corresponding second cache rule according to the second cache configuration data, and the second cache rule is used for defining the cache size and the expiration policy.
In one embodiment, the apparatus further comprises:
the sending module is used for uploading the cache monitoring data to the monitoring server;
and the display module is used for generating and displaying a corresponding cache monitoring chart in real time at a second client corresponding to the monitoring server according to the cache monitoring data.
The terms "first" and "second" in the above modules/units merely distinguish different modules/units and do not imply priority or any other limitation. Furthermore, the terms "comprises," "comprising," and any variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules expressly listed, but may include other steps or modules not expressly listed or inherent to it; the division into modules described herein is only one possible division, and other divisions may be adopted in implementation.
For the specific limitations of the cache monitoring management device, reference may be made to the limitations of the cache monitoring management method above, which are not repeated here. The modules in the above cache monitoring management device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
Fig. 4 is a block diagram showing the internal structure of a computer device according to an embodiment of the present application. The computer device may in particular be the application server in fig. 1. As shown in fig. 4, the computer device includes a processor, a memory, a network interface, an input device, a database, and a display screen connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory includes a storage medium and an internal memory. The storage medium may be non-volatile or volatile; it stores an operating system and may further store computer readable instructions that, when executed by the processor, cause the processor to implement the cache monitoring management method. The internal memory provides an environment for running the operating system and the computer readable instructions in the storage medium, and may itself store computer readable instructions that, when executed by the processor, cause the processor to perform the cache monitoring management method. The network interface of the computer device is used to communicate with an external server via a network connection. The display screen may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
In one embodiment, a computer device is provided that includes a memory, a processor, and computer readable instructions (e.g., a computer program) stored on the memory and executable on the processor, the processor implementing the steps of the cache monitor management method in the above embodiments when executing the computer readable instructions, such as steps S100 to S600 shown in fig. 2 and other extensions of the method and extensions of related steps. Alternatively, the processor executes computer readable instructions to implement the functions of the modules/units of the cache monitor management apparatus in the above embodiments, such as the functions of the modules 100 to 600 shown in fig. 3. In order to avoid repetition, a description thereof is omitted.
The processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, the processor being a control center of the computer device, and the various interfaces and lines connecting the various parts of the overall computer device.
The memory may be used to store computer-readable instructions and/or modules that, by being executed or executed by the processor, implement various functions of the computer device by invoking data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data (such as audio data, video data, etc.) created according to the use of the cellular phone, etc.
The memory may be integrated with the processor or may be separate from the processor.
It will be appreciated by persons skilled in the art that the architecture shown in fig. 4 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting as to the computer device to which the present inventive arrangements are applicable, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer readable storage medium is provided, on which computer readable instructions are stored, which when executed by a processor implement the steps of the cache monitoring management method of the above embodiment, such as steps S100 to S600 shown in fig. 2 and other extensions of the method and related steps. Alternatively, the computer readable instructions, when executed by a processor, implement the functions of the modules/units of the cache monitoring management apparatus in the above embodiments, such as the functions of modules 100 to 600 shown in fig. 3. In order to avoid repetition, a description thereof is omitted.
Those of ordinary skill in the art will appreciate that all or part of the processes of the above embodiments may be accomplished by instructing the associated hardware through computer readable instructions stored in a computer readable storage medium; when executed, these instructions may include the processes of the method embodiments above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, apparatus, article or method that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments. From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit its patent scope. Equivalent structures or equivalent process transformations made using the content of this specification, or direct or indirect applications in other related technical fields, are likewise included within the patent protection scope of the present application.

Claims (10)

1. A cache monitoring management method, characterized by comprising the following steps:
intercepting a data request sent by a first client, wherein the data request carries first access parameters and attribute information of target data to be requested;
determining a first cache according to interface parameters or annotation parameters in the first access parameters, wherein the first cache is a local cache or a distributed cache;
determining a target sub-cache region corresponding to the target data in the first cache according to the attribute information;
requesting target data from the target sub-cache area according to the attribute information;
intercepting target data acquired from the target sub-cache area;
and carrying out data analysis on the data request and the target data to obtain cache monitoring data of the target sub-cache region, wherein the cache monitoring data comprises at least one of the number of keys in the sub-cache region, the memory size occupied by the keys and related data of each key.
2. The method of claim 1, wherein prior to said determining a first cache based on interface parameters or annotation parameters in said first access parameters, said method further comprises:
receiving first cache configuration data issued by a user, and generating or updating a corresponding first cache rule according to the first cache configuration data, wherein the first cache rule is used for defining a writing rule of the data;
intercepting a data writing request, wherein the data writing request carries data to be cached and a second access parameter;
determining a second cache according to the second access parameter;
and writing the data to be cached into the sub-cache area corresponding to the second cache according to the data type of the data to be cached and the first cache rule.
3. The method according to claim 2, wherein writing the data to be cached into the sub-cache region corresponding to the second cache according to the data type of the data to be cached and the first cache rule includes:
determining an available cache mode supporting the data to be cached from all first candidate cache modes corresponding to the second cache according to the data type of the data to be cached, wherein the first candidate cache mode has a mapping relation with a sub-cache area in the second cache;
Determining an optimal first target cache mode in the available cache modes according to the first cache rule;
and writing the data to be cached into a sub-cache area corresponding to the first target cache mode in the second cache.
4. The method of claim 2, wherein the first caching rule is further used to define a read rule for data;
the determining, according to the attribute information, a target cache area corresponding to the target data in the first cache includes:
acquiring the data type of the target data according to the attribute information,
determining a second target cache mode supporting the target data from all second candidate cache modes corresponding to the first cache according to the data type and the first cache rule,
and determining a sub-cache area corresponding to the second target cache mode in the first cache as a target cache area corresponding to the target data.
5. The method according to claim 1, wherein the method further comprises:
and receiving second cache configuration data issued by the user, and generating or updating a corresponding second cache rule according to the second cache configuration data, wherein the second cache rule is used for defining the cache size and the expiration policy.
6. The method according to claim 1, wherein the method further comprises:
uploading the cache monitoring data to a monitoring server;
and generating and displaying a corresponding cache monitoring chart in real time at a second client corresponding to the monitoring server according to the cache monitoring data.
7. A cache monitoring management device, the device comprising:
the first interception module is used for intercepting a data request sent by a first client, wherein the data request carries first access parameters and attribute information of target data to be requested;
the first determining module is used for determining a first cache according to interface parameters or annotation parameters in the first access parameters, wherein the first cache is a local cache or a distributed cache;
the second determining module is used for determining a target sub-cache region corresponding to the target data in the first cache according to the attribute information;
the first request module is used for requesting target data from the target sub-cache area according to the attribute information;
the second interception module is used for intercepting the target data acquired from the target sub-cache area;
the data analysis module is used for carrying out data analysis on the data request and the target data to obtain cache monitoring data of the target sub-cache region, wherein the cache monitoring data comprises at least one of the number of keys in the sub-cache region, the memory size occupied by the keys and related data of each key.
8. A cache monitoring management system, the system comprising: an application server, a monitoring server, and a distributed cache, wherein a cache monitoring management tool is installed in the application server, and a local cache is provided in the application server;
the cache monitoring management tool is used for intercepting a data request sent by a first client, wherein the data request carries first access parameters and attribute information of target data to be requested;
the cache monitoring management tool is further configured to determine a first cache according to an interface parameter or an annotation parameter in the first access parameter, where the first cache is the local cache or the distributed cache;

the cache monitoring management tool is further configured to determine a target sub-cache area corresponding to the target data in the first cache according to the attribute information;
the cache monitoring management tool is further used for requesting target data from the target sub-cache area according to the attribute information;
the cache monitoring management tool is also used for intercepting target data acquired from the target sub-cache area;
the monitoring server is used for carrying out data analysis on the received data request and target data uploaded by the cache monitoring management tool to obtain cache monitoring data of the target sub-cache area, wherein the cache monitoring data comprises at least one of the number of keys in the sub-cache area, the memory size occupied by the keys and related data of each key.
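Claim 8 splits the same pipeline across two machines: the tool on the application server intercepts and uploads, and the monitoring server performs the analysis. A compressed sketch of that division of labour follows; the class and key names are hypothetical, and the network upload is elided to a direct method call:

```python
# Hypothetical monitoring server of claim 8: receives the intercepted
# request/data from the tool and computes per-region monitoring statistics.
class MonitoringServer:
    def __init__(self):
        self.stats = {}

    def ingest(self, region_name, region_snapshot):
        # Data analysis on the uploaded snapshot: key count and per-key data
        # for the target sub-cache region.
        self.stats[region_name] = {
            "key_count": len(region_snapshot),
            "keys": dict(region_snapshot),
        }
        return self.stats[region_name]


# Hypothetical cache monitoring management tool installed in the application
# server, holding both the local cache and a handle to the distributed cache.
class CacheMonitoringTool:
    def __init__(self, local_cache, distributed_cache, monitoring_server):
        self.local_cache = local_cache
        self.distributed_cache = distributed_cache
        self.server = monitoring_server

    def handle(self, request):
        params, attrs = request["access_params"], request["attributes"]
        # Determine the first cache from the interface/annotation parameter.
        if params.get("annotation") == "distributed":
            cache = self.distributed_cache
        else:
            cache = self.local_cache
        # Determine the target sub-cache region and request the target data.
        region = cache.setdefault(attrs["region"], {})
        value = region.get(attrs["key"])  # intercepted target data
        # Upload the intercepted request/data to the monitoring server.
        self.server.ingest(attrs["region"], region)
        return value
```

The design point the claim captures is that analysis cost is moved off the request path's host: the application-side tool only intercepts and forwards, while aggregation happens on the monitoring server.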
9. A computer device comprising a memory, a processor, and computer readable instructions stored on the memory and executable on the processor, wherein the processor, when executing the computer readable instructions, performs the steps of the method of any one of claims 1-6.
10. A computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, cause the processor to perform the steps of the method according to any one of claims 1-6.
CN202110924879.9A 2021-08-12 2021-08-12 Cache monitoring management method, device, system, equipment and storage medium Active CN113590665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110924879.9A CN113590665B (en) 2021-08-12 2021-08-12 Cache monitoring management method, device, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113590665A CN113590665A (en) 2021-11-02
CN113590665B (en) 2023-11-17

Family

ID=78257677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110924879.9A Active CN113590665B (en) 2021-08-12 2021-08-12 Cache monitoring management method, device, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113590665B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115277128B (en) * 2022-07-13 2024-02-23 上海砾阳软件有限公司 Illegal request processing method and device and electronic equipment
CN115469815B (en) * 2022-10-31 2023-04-18 之江实验室 Cache management method, device, equipment and storage medium for improving reliability of flash memory

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102891894A (en) * 2012-10-17 2013-01-23 中国工商银行股份有限公司 Caching method used for server cluster, cache server and cache system
CN110113385A (en) * 2019-04-15 2019-08-09 中国平安人寿保险股份有限公司 Cache Dynamic Configuration, device, computer equipment and storage medium
CN110753099A (en) * 2019-10-12 2020-02-04 平安健康保险股份有限公司 Distributed cache system and cache data updating method
CN111078147A (en) * 2019-12-16 2020-04-28 南京领行科技股份有限公司 Processing method, device and equipment for cache data and storage medium
CN111090675A (en) * 2019-11-22 2020-05-01 福建亿榕信息技术有限公司 Multi-entry data caching method and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20020091712A1 (en) * 2000-10-28 2002-07-11 Martin Andrew Richard Data-base caching system and method of operation



Similar Documents

Publication Publication Date Title
CN113590665B (en) Cache monitoring management method, device, system, equipment and storage medium
CN106980625B (en) Data synchronization method, device and system
US6701464B2 (en) Method and system for reporting error logs within a logical partition environment
US9146956B2 (en) Statistical applications in OLTP environment
WO2020181810A1 (en) Data processing method and apparatus applied to multi-level caching in cluster
US20060271510A1 (en) Database Caching and Invalidation using Database Provided Facilities for Query Dependency Analysis
US20190121541A1 (en) Method and apparatus for improving storage performance of container
US10157130B1 (en) Differential storage and eviction for information resources from a browser cache
US11231973B2 (en) Intelligent business logging for cloud applications
CN110990439A (en) Cache-based quick query method and device, computer equipment and storage medium
US10999399B2 (en) Offline use of network application
CN110555184A (en) resource caching method and device, computer equipment and storage medium
US20070192324A1 (en) Method and device for advanced cache management in a user agent
KR20170090874A (en) Self defense security apparatus with behavior and environment analysis and operating method thereof
US11269784B1 (en) System and methods for efficient caching in a distributed environment
CN112199391A (en) Data locking detection method and device and computer readable storage medium
CN111767053A (en) Front-end page data acquisition method and device
CN114297284A (en) Interface quick response method and device, electronic equipment and storage medium
CN113157738B (en) In-heap data cache synchronization method and device, computer equipment and storage medium
CN114547108A (en) Data processing method, device, equipment and medium
US20120096048A1 (en) Personalized Object Dimension
US20140245386A1 (en) System and method for access control management
CN115840770B (en) Local cache data processing method and related equipment based on distributed environment
CN111488230A (en) Method and device for modifying log output level, electronic equipment and storage medium
US20240089339A1 (en) Caching across multiple cloud environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231019

Address after: 830000 room 217-3, information technology innovation park, Xinjiang University, No. 499, Northwest Road, shayibak District, Urumqi, Xinjiang Uygur Autonomous Region

Applicant after: Xinjiang Beidou Tongchuang Information Technology Co.,Ltd.

Address before: 518000 Room 202, block B, aerospace micromotor building, No.7, Langshan No.2 Road, Xili street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen LIAN intellectual property service center

Effective date of registration: 20231019

Address after: 518000 Room 202, block B, aerospace micromotor building, No.7, Langshan No.2 Road, Xili street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen LIAN intellectual property service center

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: PING AN PUHUI ENTERPRISE MANAGEMENT Co.,Ltd.

GR01 Patent grant