CN114490744B - Data caching method, storage medium and electronic device - Google Patents

Data caching method, storage medium and electronic device

Info

Publication number
CN114490744B
CN114490744B (application CN202111514229.3A)
Authority
CN
China
Prior art keywords
service
data
cache
result
main
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111514229.3A
Other languages
Chinese (zh)
Other versions
CN114490744A (en)
Inventor
姜勇
杨雷
石京豪
王明志
吴豹只
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Enterprise Cloud Chain Co ltd
Original Assignee
China Enterprise Cloud Chain Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Enterprise Cloud Chain Co ltd filed Critical China Enterprise Cloud Chain Co ltd
Priority to CN202111514229.3A
Publication of CN114490744A
Application granted
Publication of CN114490744B
Legal status: Active (current)
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a data caching method, a storage medium and an electronic device. The method comprises: aggregating and collating the data of each business service node to obtain a statistical result; pushing the statistical result to a cache service message queue, where the cache service consumes the queue, stores the data into the main cache service using multiple threads, and notifies each slave service node through the sentinel service; storing the result in the non-relational database REDIS and replacing the result by its KEY when it changes; sending a query data message to each service component block, querying for a response result set, and returning the cached result to each service line; when no response result set is found, returning an empty set and sending the query data message to the message queue, from which the cache service reads the query message and caches it into the database. The invention can return a result set quickly, reduces the consumption of system hardware resources, divides the cache partitions according to the service number, ensures read-write separation and the high availability and scalability of the service system, and provides stable data query support.

Description

Data caching method, storage medium and electronic device
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a data caching method, a storage medium, and an electronic device.
Background
With the current refinement of service lines into domain-specific services, much of the business data is finely divided among the services of each domain. Some statistical data required by customers is difficult to provide from a single service, the request frequency of some data is very high, and some statistical results are not real-time yet are still expected to reflect recent data within a short time. Frequent statistical work therefore occupies a large amount of database resources, and the resources left for other service lines are correspondingly reduced.
In order to split comprehensive statistical queries out of the service main-line system and respond in time to a large amount of complex statistical query work, the system also has to store and use data that rarely changes, which exhausts most of the system hardware resources and affects the use of the service main-line functions. A data caching method is therefore needed to solve the problems in the prior art.
Disclosure of Invention
In view of the defects described in the background, the invention provides a data caching method that returns results quickly, decouples business statistical work from non-statistical work, reduces the consumption of system hardware resources, and provides flexible business service support.
The object of the invention and the technical problems it solves are achieved by the following technical solution:
A data caching method, comprising:
aggregating and collating the data of each business service node through the different business services to obtain a statistical result;
pushing the statistical result to the cache service message queue; the cache service consumes the queue and stores the data into the main cache service using multiple threads, and notifies each slave service node through the sentinel service;
after the thread pool threads have processed the data, storing the result in the non-relational database REDIS, setting a timeout for the result, and replacing the result by its KEY when the result changes;
a service node sends a query data message to each service component block, queries for a response result set, and returns the cached result in the main cache service to each service line;
when the service system cannot find a response result set, temporarily returning an empty set, sending the query data message to the message queue, and then sending the query data result to each service line;
and the cache service reads the query message, caches it in the cache service and stores it in the database.
Preferably, the service system data is aggregated through a data warehouse.
Preferably, the data service monitors the binlog changes of the service database of each service system and pushes the data to the doris database through kafka; after the doris database has stored the data, each business service obtains the latest statistical result in real time.
Preferably, the cache service provides queries through a cluster and is dynamically expanded according to server performance.
Preferably, the cache service is deployed as a redis cluster; when the main cache service goes down, a new main server is elected and the other slave service nodes continue to provide the main service, ensuring high write availability of the service.
Preferably, the message queue consumes the notification messages in consumption-group mode, service numbers are pre-allocated to the business services, and the result set is cached into different partitions according to the service number.
Preferably, the cache service sets up different partitions to provide a replica function; the main partition accepts write operations and the follower replicas provide the query service.
Preferably, the service number of a business service is a 21-character string, in which characters 1-5 identify the service number and characters 6-11 are the specific function code; during storage the storage servers are distinguished according to characters 1-5 and 6-11.
The invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
The invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
The invention provides the query cache through a cluster, can perform load balancing according to server performance, and provides stable data query support; when the query pressure is high, the service cluster can be dynamically expanded to improve the stability and scalability of the system. The result set can be returned quickly and the consumption of system hardware resources is reduced; because service support is provided externally through the cluster, the other cache services can still serve requests even after one cache service goes down; meanwhile, the cache partitions are divided according to the service number, ensuring read-write separation and the high availability and scalability of the service system.
Drawings
Fig. 1 is a schematic flow chart of an implementation of a data caching method provided by the present invention.
Detailed Description
The technical scheme of the invention is further described in detail below with reference to the accompanying drawings. It is apparent that the described embodiments are only a part of the embodiments of the present disclosure, and not all the embodiments, and all other embodiments obtained by those skilled in the art without inventive effort are within the scope of the present disclosure.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, components and/or sections, these elements, components and/or sections should not be limited by these terms.
As shown in fig. 1, a data caching method includes:
aggregating and collating the data of each business service node through the different business services to obtain a statistical result;
pushing the statistical result to the cache service message queue; the cache service consumes the queue and stores the data into the main cache service using multiple threads, and notifies each slave service node through the sentinel service;
after the thread pool threads have processed the data, storing the result in the non-relational database REDIS, setting a timeout for the result, and replacing the result by its KEY when the result changes (illustrated in the Java sketch after these steps);
a service node sends a query data message to each service component block, queries for a response result set, and returns the cached result in the main cache service to each service line;
when the service system cannot find a response result set, temporarily returning an empty set, sending the query data message to the message queue, and then sending the query data result to each service line;
and the cache service reads the query message, caches it in the cache service and stores it in the database.
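As a concrete illustration of the write and read path in these steps, the following Java sketch uses a thread pool to store a statistical result in REDIS with a timeout and, on a query miss, returns an empty set while handing the request off to the message queue. It is a minimal sketch under assumptions: the Jedis client, the key names, the TTL value and the sendQueryMessageToQueue() helper are placeholders for illustration, not parts defined by the method.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class StatCache {
    private final JedisPool pool = new JedisPool("localhost", 6379);
    private final ExecutorService workers = Executors.newFixedThreadPool(4); // thread pool that writes results

    /** Write path: a worker thread stores the statistical result under its KEY with a timeout. */
    public void storeResult(String key, String resultJson, int ttlSeconds) {
        workers.submit(() -> {
            try (Jedis jedis = pool.getResource()) {
                jedis.setex(key, ttlSeconds, resultJson); // rewriting the same KEY replaces the old result
            }
        });
    }

    /** Read path: return the cached result, or an empty set plus an async cache-fill request on a miss. */
    public List<String> query(String key) {
        try (Jedis jedis = pool.getResource()) {
            String cached = jedis.get(key);
            if (cached != null) {
                return Collections.singletonList(cached); // cache hit: respond immediately
            }
        }
        sendQueryMessageToQueue(key);   // ask the cache service to build the entry asynchronously
        return Collections.emptyList(); // temporary empty set, as in the step above
    }

    private void sendQueryMessageToQueue(String key) {
        // placeholder: in the described system this would publish the query message to the message queue
    }
}
```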
In this embodiment, the business system data is aggregated through a data warehouse.
In this embodiment, the data service monitors the binlog changes of the service database of each service system and pushes the data to the doris database through kafka; after the doris database has stored the data, each business service obtains the latest statistics in real time.
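For illustration only, the following Java sketch shows one possible shape of this binlog path, assuming a CDC component has already serialized the binlog changes of the service databases into a Kafka topic, and assuming doris is written through its MySQL-compatible JDBC interface; the topic name, addresses, credentials and target table are hypothetical placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BinlogToDoris {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "binlog-sync");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // doris speaks the MySQL protocol, so a plain JDBC connection to the FE node is enough here
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Connection doris = DriverManager.getConnection("jdbc:mysql://doris-fe:9030/stat_db", "user", "pass");
             PreparedStatement insert =
                     doris.prepareStatement("INSERT INTO stat_events (biz_key, payload) VALUES (?, ?)")) {
            consumer.subscribe(Collections.singletonList("business-binlog"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    insert.setString(1, record.key());   // e.g. the source table / row identifier
                    insert.setString(2, record.value()); // the serialized change payload
                    insert.executeUpdate();
                }
            }
        }
    }
}
```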
After the statistical result is obtained, it is pushed to the corresponding cache service cluster through RocketMQ according to the service number of the service line; the cache service decouples the computing service from the business service, improving the stability of the system.
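A minimal RocketMQ producer sketch of this push step is given below. The topic, name-server address and the example service number are assumptions, and hashing the first five characters of the service number to choose a queue is just one possible way to keep each service line on its own partition.

```java
import java.nio.charset.StandardCharsets;
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.MessageQueueSelector;
import org.apache.rocketmq.common.message.Message;

public class StatResultProducer {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("stat-result-producers");
        producer.setNamesrvAddr("rocketmq-ns:9876");
        producer.start();

        String serviceNo = "10001" + "000001" + "0000000001"; // hypothetical 21-character service number
        String resultJson = "{\"total\":42}";
        Message msg = new Message("stat-result-topic", serviceNo, resultJson.getBytes(StandardCharsets.UTF_8));

        // Route messages with the same service-number prefix to the same queue,
        // so that each cache partition only receives its own service line.
        producer.send(msg, (MessageQueueSelector) (queues, message, arg) -> {
            int index = Math.abs(arg.hashCode()) % queues.size();
            return queues.get(index);
        }, serviceNo.substring(0, 5));

        producer.shutdown();
    }
}
```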
In this embodiment, the cache service provides queries through a cluster and is dynamically expanded according to server performance.
The service cluster can be dynamically expanded when the query pressure is high, improving the stability and scalability of the system.
In this embodiment, the cache service is deployed as a redis cluster; when the main cache service goes down, a new main server is elected and the other slave service nodes continue to provide the main service, ensuring high write availability, and the service responds faster than a relational database.
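On the client side, this failover behaviour can be used roughly as in the sketch below, which obtains connections through Redis Sentinel so that writes always reach the currently elected main node; the master name, sentinel addresses, key and TTL are placeholders, not values defined by this embodiment.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelCacheClient {
    public static void main(String[] args) {
        Set<String> sentinels = new HashSet<>(Arrays.asList(
                "sentinel-1:26379", "sentinel-2:26379", "sentinel-3:26379"));

        // The pool always hands out connections to whichever node the sentinels
        // currently consider the master, so writes keep working after a failover.
        try (JedisSentinelPool pool = new JedisSentinelPool("stat-master", sentinels);
             Jedis master = pool.getResource()) {
            master.setex("stat:10001:000001", 300, "{\"total\":42}");
        }
    }
}
```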
In this embodiment, the message queue consumes the notification messages in consumption-group mode, service numbers are allocated to the business services in advance, and the result set is cached into different partitions according to the service number.
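One possible shape of such a consumption group is sketched below: a RocketMQ push consumer in a named consumer group reads the notification messages and writes each result set under a key derived from the service number. The group name, topic, use of the message tag as the service number and the key layout are all assumptions for illustration.

```java
import java.nio.charset.StandardCharsets;
import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.common.message.MessageExt;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class CacheNotificationConsumer {
    public static void main(String[] args) throws Exception {
        JedisPool redis = new JedisPool("redis-main", 6379);

        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("cache-service-group"); // consumption group
        consumer.setNamesrvAddr("rocketmq-ns:9876");
        consumer.subscribe("stat-result-topic", "*");
        consumer.registerMessageListener((MessageListenerConcurrently) (messages, context) -> {
            for (MessageExt msg : messages) {
                String serviceNo = msg.getTags();                   // service number carried as the message tag
                String partition = serviceNo.substring(0, 5);       // characters 1-5 select the cache partition
                String key = "stat:" + partition + ":" + serviceNo; // assumed key layout
                try (Jedis jedis = redis.getResource()) {
                    jedis.setex(key, 300, new String(msg.getBody(), StandardCharsets.UTF_8));
                }
            }
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
        });
        consumer.start();
    }
}
```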
In this embodiment, the different partitions of the cache service provide a replica function; the main partition accepts write operations and the follower replicas provide the query service.
In this embodiment, the service number of a business service is a 21-character string, in which characters 1-5 identify the service number and characters 6-11 are the specific function code; during storage the storage servers are distinguished according to characters 1-5 and 6-11.
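The following small sketch shows how characters 1-5 and 6-11 of such a service number could be used to pick a storage server. The server list and the modulo routing rule are assumptions for illustration; the embodiment only states that these character ranges are what distinguishes the storage servers.

```java
import java.util.List;

public class ServiceNumberRouter {
    private static final List<String> SERVERS =
            List.of("cache-a:6379", "cache-b:6379", "cache-c:6379"); // hypothetical storage servers

    /** Characters 1-5 identify the service, characters 6-11 the specific function. */
    public static String pickServer(String serviceNo) {
        if (serviceNo == null || serviceNo.length() != 21) {
            throw new IllegalArgumentException("service number must be a 21-character string");
        }
        String serviceId = serviceNo.substring(0, 5);
        String functionCode = serviceNo.substring(5, 11);
        int bucket = Math.abs((serviceId + functionCode).hashCode()) % SERVERS.size();
        return SERVERS.get(bucket);
    }

    public static void main(String[] args) {
        System.out.println(pickServer("100010000010000000001")); // hypothetical 21-character number
    }
}
```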
The invention provides the query cache through a cluster, can perform load balancing according to server performance, and provides stable data query support; when the query pressure is high, the service cluster can be dynamically expanded to improve the stability and scalability of the system. The result set can be returned quickly and the consumption of system hardware resources is reduced; because service support is provided externally through the cluster, the other cache services can still serve requests even after one cache service goes down; meanwhile, the cache partitions are divided according to the service number, ensuring read-write separation and the high availability and scalability of the service system.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functions described in the present invention may be implemented in hardware, in software, or in a combination of the two. When implemented in software, the corresponding functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
The preferred embodiments of the present specification disclosed above are intended only to help clarify the specification. The embodiments are not exhaustive and do not limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical application, thereby enabling others skilled in the art to understand and use the disclosure. The specification is limited only by the claims and their full scope and equivalents.

Claims (5)

1. A data caching method, comprising:
aggregating and collating the data of each business service node through different business services to obtain a statistical result;
the data service monitors the binlog changes of the service database of each service system and pushes the data to a doris database through kafka; after the doris database has stored the data, each business service obtains the latest statistical result in real time;
pushing the statistical result to a cache service message queue; the cache service consumes the queue and stores the data into the main cache service using multiple threads, and notifies each slave service node through the sentinel service;
the cache service provides queries through a cluster and is dynamically expanded according to server performance; the cache service is deployed as a redis cluster, and when the main cache service goes down a new main server is elected and the other slave service nodes continue to provide the main service, ensuring high write availability of the service;
the cache service sets up different partitions to provide a replica function, the main partition accepting write operations and the follower replicas providing the query service;
after the thread pool threads have processed the data, storing the result in the non-relational database REDIS, setting a timeout for the result, and replacing the result by its KEY when the result changes;
a service node sends a query data message to each service component block, queries for a response result set, and returns the cached result in the main cache service to each service line;
when the service system cannot find a response result set, temporarily returning an empty set, sending the query data message to the message queue, and then sending the query data result to each service line;
consuming the notification messages in consumption-group mode through the message queue, pre-allocating service numbers to the business services, and caching the result set into different partitions according to the service number;
and the cache service reads the query message, caches it in the cache service and stores it in the database.
2. The data caching method of claim 1, wherein the business system data is aggregated through a data warehouse.
3. The data caching method according to claim 1, wherein the service number of a business service is a 21-character string, in which characters 1-5 identify the service number and characters 6-11 are the specific function code, and the storage servers are distinguished according to characters 1-5 and 6-11 during storage.
4. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of the method of claim 1 when run.
5. An electronic device comprising a memory, in which a computer program is stored, and a processor arranged to run the computer program to perform the steps of the method of claim 1.
CN202111514229.3A 2021-12-13 2021-12-13 Data caching method, storage medium and electronic device Active CN114490744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111514229.3A CN114490744B (en) 2021-12-13 2021-12-13 Data caching method, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111514229.3A CN114490744B (en) 2021-12-13 2021-12-13 Data caching method, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN114490744A CN114490744A (en) 2022-05-13
CN114490744B (en) 2024-04-26

Family

ID=81492583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111514229.3A Active CN114490744B (en) 2021-12-13 2021-12-13 Data caching method, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN114490744B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117271597A (en) * 2022-06-14 2023-12-22 顺丰科技有限公司 Redis-based performance adjustment method and device, electronic equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109451072A (en) * 2018-12-29 2019-03-08 广东电网有限责任公司 A kind of message caching system and method based on Kafka
CN111077870A (en) * 2020-01-06 2020-04-28 浙江中烟工业有限责任公司 Intelligent OPC data real-time acquisition and monitoring system and method based on stream calculation
CN111209258A (en) * 2019-12-31 2020-05-29 航天信息股份有限公司 Tax end system log real-time analysis method, equipment, medium and system
CN111913989A (en) * 2020-06-15 2020-11-10 东风日产数据服务有限公司 Distributed application cache refreshing system and method, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10412158B2 (en) * 2016-07-27 2019-09-10 Salesforce.Com, Inc. Dynamic allocation of stateful nodes for healing and load balancing

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109451072A (en) * 2018-12-29 2019-03-08 广东电网有限责任公司 A kind of message caching system and method based on Kafka
CN111209258A (en) * 2019-12-31 2020-05-29 航天信息股份有限公司 Tax end system log real-time analysis method, equipment, medium and system
CN111077870A (en) * 2020-01-06 2020-04-28 浙江中烟工业有限责任公司 Intelligent OPC data real-time acquisition and monitoring system and method based on stream calculation
CN111913989A (en) * 2020-06-15 2020-11-10 东风日产数据服务有限公司 Distributed application cache refreshing system and method, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114490744A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN107528816B (en) Processing method, management system and server of ID in distributed database
US7058783B2 (en) Method and mechanism for on-line data compression and in-place updates
CN101876983B (en) Method for partitioning database and system thereof
US20100306234A1 (en) Cache synchronization
CN111597160A (en) Distributed database system, distributed data processing method and device
US20050004898A1 (en) Distributed search methods, architectures, systems, and software
US8930518B2 (en) Processing of write requests in application server clusters
CN114490744B (en) Data caching method, storage medium and electronic device
CN111127252A (en) Data management method of water resource management decision support system
CN110109931B (en) Method and system for preventing data access conflict between RAC instances
CN114238518A (en) Data processing method, device, equipment and storage medium
CN111913917A (en) File processing method, device, equipment and medium
CN111831691A (en) Data reading and writing method and device, electronic equipment and storage medium
CN107133334B (en) Data synchronization method based on high-bandwidth storage system
CN112685403A (en) High-availability framework system for hidden danger troubleshooting data storage and implementation method thereof
CN113672583B (en) Big data multi-data source analysis method and system based on storage and calculation separation
CN112231129A (en) Data proxy service method, server, storage medium and computing equipment
KR102211403B1 (en) Synchronizing system for public resources in multi-WEB server environment
CN114661690A (en) Multi-version concurrency control and log clearing method, node, equipment and medium
CN102622284A (en) Data asynchronous replication method directing to mass storage system
CN117539915B (en) Data processing method and related device
CN107679093B (en) Data query method and device
CN114205363B (en) Cluster management method of distributed database and distributed management system cluster
CN113282585B (en) Report calculation method, device, equipment and medium
CN113051274B (en) Mass tag storage system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 100078 4th floor, CRRC building, building 15, fangchengyuan 1st District, Fengtai District, Beijing

Applicant after: China Enterprise Cloud Chain Co.,Ltd.

Address before: 100078 4th floor, CRRC building, building 15, fangchengyuan 1st District, Fengtai District, Beijing

Applicant before: ZHONGQI SCC (BEIJING) FINANCE INFORMATION SERVICE Co.,Ltd.

Country or region before: China

CB02 Change of applicant information
GR01 Patent grant