CN113704308B - Data caching method, device, server and recharging system - Google Patents


Info

Publication number: CN113704308B (granted); published earlier as CN113704308A
Application number: CN202111024640.2A
Authority: CN (China)
Language: Chinese (zh)
Inventors: 王立民, 肖震
Assignee (original and current): China United Network Communications Group Co Ltd
Legal status: Active
Prior art keywords: data, cached, server, determining, storage


Classifications

    • G06F16/24552 - Database cache management (G06F16/24 Querying; G06F16/2455 Query execution)
    • G06F16/2228 - Indexing structures (G06F16/22 Indexing; data structures therefor; storage structures)
    • G06F16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor


Abstract

The invention provides a data caching method, device, server, and recharging system. The method comprises: acquiring data to be cached and determining its key data; determining a virtual index value from the key data, and determining the identifier of the server to be cached from the virtual index value; and sending the data to be cached to that server according to the identifier, so that the server stores it. Because the server to be cached is selected through the mapping relation of virtual index values, each piece of data is stored on its corresponding server, realizing distributed storage of the data and improving data storage efficiency.

Description

Data caching method, device, server and recharging system
Technical Field
The embodiment of the invention relates to the field of data processing, in particular to a data caching method, a data caching device, a server and a recharging system.
Background
The one-card remote recharging system supports multiple recharging scenarios, such as local recharging and remote (off-site) recharging, and uses a unified key to encrypt and decrypt data, which improves the security of stored data.
In the actual recharging process, the remote recharging system uniformly calls a resource center for data authentication and notifies the region corresponding to the card number to perform the recharge. After the recharge succeeds, the stock state is updated in real time, and the data is written directly into a database.
However, because data resources in the existing remote recharging system are centralized and the database storage volume is large, data storage becomes inefficient when the recharging system handles highly concurrent recharging business.
Disclosure of Invention
The invention provides a data caching method, device, server, and recharging system. By determining the server to be cached that corresponds to each piece of data to be cached, and having that server store the data, distributed storage of the data to be cached is realized and data storage efficiency is improved.
In a first aspect, the present invention provides a data caching method, including:
acquiring data to be cached, and determining key data of the data to be cached;
determining a virtual index value according to the key data, and determining a server identifier to be cached according to the virtual index value;
and sending the data to be cached to the server to be cached according to the server identifier to be cached, so that the server to be cached stores the data to be cached.
In one possible design, before the determining the server identifier to be cached according to the virtual index value, the method includes:
receiving server information to be cached sent by all servers to be cached, and determining virtual index values corresponding to all servers to be cached according to a hash algorithm;
and generating a mapping list according to a preset number of virtual to-be-cached nodes and virtual index values corresponding to each to-be-cached server, wherein all to-be-cached server identifiers and virtual index value ranges corresponding to the to-be-cached servers are stored in the mapping list.
In one possible design, the determining the server identifier to be cached according to the virtual index value includes:
determining a virtual index value range corresponding to the virtual index value from the mapping list;
and determining the identification of the target cache server according to the virtual index value range, and taking the identification of the target cache server as the identification of the server to be cached.
In one possible design, the key data is a serial number of a rechargeable card, and the determining a virtual index value according to the key data includes:
and determining a virtual index value of the serial number of the rechargeable card according to the hash algorithm.
In one possible design, before the determining the critical data of the data to be cached, the method further includes:
carrying out serialization storage on the data to be cached according to a serialization algorithm to obtain serialized storage data, and taking the serialized storage data as the data to be cached;
and determining key data of the data to be cached according to the serialized storage data.
In a second aspect, the present invention provides a data caching apparatus, comprising:
the acquisition module is used for acquiring data to be cached and determining key data of the data to be cached;
the determining module is used for determining a virtual index value according to the key data and determining a server identifier to be cached according to the virtual index value;
and the sending module is used for sending the data to be cached to the server to be cached according to the server identifier to be cached, so that the server to be cached stores the data to be cached.
In a third aspect, the present invention provides a data caching server, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored by the memory, causing the at least one processor to perform the data caching method as described above in the first aspect and the various possible designs of the first aspect.
In a fourth aspect, the present invention provides a computer storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the data caching method according to the first aspect and the various possible designs of the first aspect.
In a fifth aspect, the present invention provides a recharging system, comprising S terminals, the data caching server described in the third aspect, and M servers to be cached, wherein S and M are positive integers;
the terminal is used for collecting data to be cached and sending the data to be cached to the data caching server;
the data caching server is used for acquiring data to be cached and determining key data of the data to be cached; determining a virtual index value according to the key data, and determining a server identifier to be cached according to the virtual index value; transmitting the data to be cached to the server to be cached according to the server identifier to be cached;
the server to be cached is used for storing the data to be cached.
In one possible design, the server to be cached is a Redis server.
According to the data caching method, device, server, and recharging system above, the key data of the data to be cached determines a virtual index value, and the mapping relation of virtual index values determines the server to be cached, so that the data to be cached is stored on its corresponding server according to the preset mapping relation. This realizes distributed storage of the data and improves data storage efficiency.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in their description are briefly introduced below. The drawings described below are only some embodiments of the present invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a recharging system according to an embodiment of the present invention;
FIG. 2 is a first flowchart of a data caching method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a distributed storage provided in an embodiment of the present invention;
FIG. 4 is a schematic diagram of data storage according to an embodiment of the present invention;
FIG. 5 is a second flowchart of a data caching method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a data caching device according to an embodiment of the present invention;
fig. 7 is a schematic hardware structure of a server according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the one-card remote recharging system, remote recharges uniformly call the resource center for data authentication, the region corresponding to the card number is notified to perform the recharge, and the stock state is updated in real time after a successful recharge, which streamlines the remote recharging process and keeps it timely. The system also centralizes decryption keys through a unified decryption mode and uniformly handles the encryption and decryption of old and new data, which improves data security. However, once physical-card recharging is centralized, the recharge volume is large and the database storage volume of the remote recharging system is large, so data storage becomes inefficient when the system handles highly concurrent recharging business.
To solve the above technical problems, the embodiments of the present invention provide the following technical solution: a virtual index value is determined from the key data of the data to be cached, and the identifier of the server to be cached is determined from the virtual index value, realizing distributed storage of the data and improving data storage efficiency.
Fig. 1 is a schematic structural diagram of a recharging system according to an embodiment of the present invention. As shown in fig. 1, the recharging system includes S terminals, a data caching server, and M servers to be cached, where S and M are both positive integers. Each terminal communicates with the data caching server wirelessly, the data caching server communicates with each server to be cached over a data transmission bus, and the M servers to be cached are linked to one another through a Linked Hash Map structure.
Specifically, in the one-card recharging system, the terminals collect the data to be cached, i.e. the recharging data generated as users recharge. Because the system supports recharging in various scenarios, including remote recharging, many users may recharge at the same time, so the data to be cached may contain the recharge data of multiple users simultaneously. Each terminal provides a recharging client through the one-card recharging service program, and multiple terminals can form a High-speed Service Framework (HSF) distributed application system, so that the clients of the recharging system are deployed in a distributed manner at both the application layer and the unified release-and-call layer. The data caching server realizes distributed data caching across all the servers to be cached. Because a single server to be cached does not itself support distributed storage, the data caching server must route the collected recharge data to the corresponding server to be cached according to a set routing rule.
The data caching server receives the recharging data to be cached from the terminals through the recharging service interface, determines the identifier of the server to be cached on which each piece of data should be stored, and sends the data to that server. Specifically, the M servers to be cached form a server cluster, and the invention realizes distributed storage of data through this cluster.
Fig. 2 is a schematic flow chart of a data caching method according to an embodiment of the present invention, where the execution subject of the embodiment may be the data caching server in the embodiment shown in fig. 1, and the embodiment is not limited herein. As shown in fig. 2, the method includes:
s201: and acquiring data to be cached, and determining key data of the data to be cached.
In the embodiment of the invention, to realize distributed storage of the data to be cached, the storage route must be determined from its key data. Specifically, the data to be cached is obtained from the recharging data sent by the terminal, and the storage path for distributed storage is determined from the key data of that recharging data. The key data of the data to be cached is the serial number of the rechargeable card in the recharging data, and each recharge's card serial number is a unique identifier. Specifically, under the naming rule, a card serial number is composed of a service identifier, an area identifier, and a card number.
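The naming rule above can be sketched as a simple splitter. The field widths used here are illustrative assumptions, since the text does not specify how long each part of the serial number is:

```python
def parse_card_serial(serial: str, service_len: int = 2, area_len: int = 3):
    """Split a rechargeable-card serial number into its naming-rule parts.

    The text states the serial is composed of a service identifier, an
    area identifier, and a card number; the field widths here are
    hypothetical, chosen only for illustration.
    """
    service_id = serial[:service_len]
    area_id = serial[service_len:service_len + area_len]
    card_number = serial[service_len + area_len:]
    return service_id, area_id, card_number
```

Because the serial number is unique per recharge, any of its parts, or the whole string, can serve as the key data fed into the hash step below.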
In the embodiment of the invention, for example, before the server identifier to be cached is determined from the virtual index value, the server information sent by all servers to be cached is received, and the virtual index values corresponding to all servers to be cached are determined by a hash algorithm. A mapping list is then generated from a preset number of virtual nodes to be cached and the virtual index values corresponding to each server to be cached; the mapping list stores every server identifier to be cached together with the virtual index value range corresponding to that server. Specifically, the servers to be cached are Redis servers. Fig. 3 is a schematic diagram of distributed storage according to an embodiment of the present invention. In this embodiment the recharging system contains 3 Redis servers, named M1, M2 and M3 and shown as solid circles in fig. 3; the physical connection among the 3 Redis servers is stored using a Linked Hash Map. The virtual index values of all servers to be cached are determined by a hash algorithm from their node information, the mapping list is generated from the preset number of virtual nodes and the per-server virtual index values, and all virtual nodes to be cached are stored using a TreeMap. In fig. 3 the dashed circles represent virtual nodes to be cached. For example, with a preset number of 150, fig. 3 contains 150 virtual nodes to be cached: the virtual index value range of Redis server M1 is 1-50, that of M2 is 51-100, and that of M3 is 101-150.
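The mapping list of this example can be sketched directly. The inclusive range bounds below are taken from the example in the text (50 virtual nodes per Redis server, 150 in total); the helper name is our own:

```python
def build_mapping_list(servers, nodes_per_server=50):
    """Assign each server a contiguous, inclusive range of virtual index
    values, mirroring the example mapping list in the text."""
    mapping = []  # list of (low, high, server_id) tuples
    low = 1
    for server_id in servers:
        high = low + nodes_per_server - 1
        mapping.append((low, high, server_id))
        low = high + 1
    return mapping

mapping = build_mapping_list(["M1", "M2", "M3"])
# mapping == [(1, 50, "M1"), (51, 100, "M2"), (101, 150, "M3")]
```

In the described system the ranges would come from hashing each server's node information rather than being assigned sequentially; the fixed blocks here simply reproduce the worked example.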
The storage nodes corresponding to all servers to be cached and all virtual nodes to be cached are named by the SHARD-N-NODE-M rule, where N is the identifier of the Redis server and M is the virtual index value of the virtual node. Illustratively, SHARD-1-NODE-32 denotes the 32nd virtual node to be cached on Redis server M1.
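The SHARD-N-NODE-M rule is straightforward to generate; this one-liner reproduces the example name from the text:

```python
def node_name(shard: int, virtual_index: int) -> str:
    # SHARD-N-NODE-M rule: N is the Redis server identifier,
    # M is the virtual node's index value.
    return f"SHARD-{shard}-NODE-{virtual_index}"
```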
S202: and determining a virtual index value according to the key data, and determining a server identification to be cached according to the virtual index value.
In the embodiment of the invention, specifically, the virtual index value range containing the virtual index value is found in the mapping list; the target cache server identifier is determined from that range and taken as the server identifier to be cached. A hash operation on the card serial number yields a unique hash value, which is taken as the virtual index value of the data to be cached, and the server identifier to be cached is then determined from it. For example, if hashing the card serial number yields 51, the virtual index value of the data to be cached is 51; the mapping list shows that the target cache server for the virtual node with index value 51 is Redis server M2, i.e. the server identifier to be cached for that virtual node is M2.
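A minimal sketch of this lookup, under stated assumptions: the patent only says "a hash algorithm", so MD5 reduced modulo the node count is an illustrative stand-in, and the mapping-list layout matches the ranges in the example above:

```python
import hashlib

def virtual_index(card_serial: str, node_count: int = 150) -> int:
    """Map a card serial number to a virtual index value in 1..node_count.

    MD5-mod-N is an assumption for illustration; the text does not
    name the hash function used.
    """
    digest = hashlib.md5(card_serial.encode("utf-8")).hexdigest()
    return int(digest, 16) % node_count + 1

def route(index: int, mapping) -> str:
    """Find the server whose inclusive (low, high) range contains index."""
    for low, high, server_id in mapping:
        if low <= index <= high:
            return server_id
    raise ValueError(f"index {index} outside mapping list")
```

With the example mapping, `route(51, mapping)` returns "M2", matching the worked example in the text.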
S203: and sending the data to be cached to the server to be cached according to the identifier of the server to be cached, so that the server to be cached stores the data according to the data to be cached.
In the embodiment of the invention, after the server identifier to be cached is determined, the data to be cached is sent to that server for storage. In the recharging system many users may recharge at the same time: for each user's data to be cached, the corresponding virtual index value is determined from its key data and the corresponding server to be cached is determined from the virtual index value, so each user's data is stored on its own server, realizing distributed storage. Illustratively, as shown in fig. 4, which is a schematic diagram of data storage according to an embodiment of the present invention, key data 1 of data 1 to be cached and key data 2 of data 2 to be cached map to the virtual index values V2 and V5 of their respective virtual nodes. From the correspondence in the mapping list, the instantiated storage node for virtual index value V2 is n1 and that for V5 is n3, i.e. the server identifier to be cached for data 1 is n1 and for data 2 is n3. Data 1 is therefore sent to server n1 for storage and data 2 to server n3, realizing distributed storage of the data to be cached.
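The dispatch step of fig. 4 can be sketched as follows; plain dicts stand in for the Redis stores, and the key-to-server function is whatever routing rule the caching server uses (the `n1`/`n3` identifiers below come from the figure's example):

```python
def dispatch(records, choose_server):
    """Forward each (key, value) record to the store chosen for its key.

    records: iterable of (key, value) pairs, e.g. per-user recharge data.
    choose_server: callable mapping a key to a server identifier.
    Returns a dict of per-server stores (stand-ins for Redis servers).
    """
    stores = {}
    for key, value in records:
        server_id = choose_server(key)
        stores.setdefault(server_id, {})[key] = value
    return stores

# Two users recharging at once, routed to different servers as in fig. 4.
stores = dispatch(
    [("card-1", b"data1"), ("card-2", b"data2")],
    choose_server=lambda key: "n1" if key == "card-1" else "n3",
)
```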
According to the data caching method, under the condition that the recharging system processes high-concurrency recharging business, the virtual index value is determined by utilizing the key data of the data to be cached, and the server identification to be cached is determined according to the virtual index value, so that the distributed storage of the data is realized, and the storage efficiency of the data is improved.
Fig. 5 is a second flowchart of the data caching method according to an embodiment of the present invention. On the basis of the embodiment of fig. 2, this embodiment details the steps performed before the key data of the data to be cached is determined in S201. As shown in fig. 5, the method includes:
s501: and carrying out serialization storage on the data to be cached according to a serialization algorithm to obtain serialized storage data, and taking the serialized storage data as the data to be cached.
In the embodiment of the invention, illustratively, the Kryo serialization algorithm is selected to serialize the data to be cached: Kryo produces the serialized storage data, which is then treated as the data to be cached. Specifically, variable-length byte storage replaces the fixed 4- or 8-byte representation that Java uses for types such as int and long, and a data caching mechanism ensures that within one recursive serialization pass the same object is serialized only once, with a local int reference standing in for later occurrences, which improves data storage efficiency. The byte-code generation mechanism of the serialization algorithm converts the data types of the data to be cached into long-byte storage, saving data storage space.
S502: and determining key data of the data to be cached according to the serialized storage data.
In the embodiment of the present invention, a hash algorithm is applied to the serialized storage data obtained in S501 to derive the key data corresponding to the data to be cached. Because the serialized storage data is long-byte data, the key value, i.e. the key data of the data to be cached, can be computed from it directly with a hash algorithm.
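A minimal sketch of deriving a key from the serialized bytes. SHA-1 is an illustrative stand-in here, since the text does not name the hash algorithm used:

```python
import hashlib

def key_of(serialized: bytes) -> str:
    """Derive a key string from serialized storage data by hashing it.

    Any stable hash works for this role; SHA-1 is only an assumption
    for illustration, not the algorithm specified by the system.
    """
    return hashlib.sha1(serialized).hexdigest()
```

The same serialized bytes always yield the same key, so the key can be recomputed on lookup without storing it alongside the data.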
According to the data caching method above, serializing the data to be cached through a serialization algorithm saves storage space and improves storage efficiency, so the recharging system provided by the invention achieves both higher data storage efficiency and larger effective storage capacity.
Fig. 6 is a schematic structural diagram of a data caching device according to an embodiment of the present invention. As shown in fig. 6, the data caching apparatus includes: an acquisition module 601, a determination module 602, and a sending module 603.
The acquiring module 601 is configured to acquire data to be cached, and determine key data of the data to be cached;
a determining module 602, configured to determine a virtual index value according to the key data, and determine a server identifier to be cached according to the virtual index value;
and the sending module 603 is configured to send the data to be cached to the server to be cached according to the identifier of the server to be cached, so that the server to be cached stores the data to be cached.
In a possible implementation, the data caching device provided by the embodiment of the invention further comprises a generating module, configured to receive the server information sent by all servers to be cached and determine the virtual index values corresponding to all servers to be cached according to the hash algorithm; and to generate a mapping list from a preset number of virtual nodes to be cached and the virtual index values corresponding to each server, the mapping list storing all server identifiers to be cached and the virtual index value range corresponding to each server.
In a possible implementation manner, the determining module 602 is specifically configured to determine, from the mapping list, a virtual index value range corresponding to the virtual index value; and determining the identification of the target cache server according to the virtual index value range, and taking the identification of the target cache server as the identification of the server to be cached.
In one possible implementation, the key data is a serial number of the rechargeable card, and the determining module 602 is specifically configured to determine a virtual index value of the serial number of the rechargeable card according to a hash algorithm.
In a possible implementation, the data caching device further includes a storage module, specifically configured to serialize the data to be cached according to a serialization algorithm to obtain serialized storage data, take the serialized storage data as the data to be cached, and determine the key data of the data to be cached from the serialized storage data.
The device provided in this embodiment may be used to implement the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein again.
Fig. 7 is a schematic diagram of the hardware structure of a server according to an embodiment of the present invention. As shown in fig. 7, the server 70 of this embodiment includes a processor 701 and a memory 702, where:
A memory 702 for storing computer-executable instructions;
the processor 701 is configured to execute computer-executable instructions stored in the memory to implement the steps executed by the server in the above embodiments. Reference may be made in particular to the relevant description of the embodiments of the method described above.
Alternatively, the memory 702 may be separate or integrated with the processor 701.
When the memory 702 is provided separately, the server further comprises a bus 703 for connecting the memory 702 to the processor 701.
The embodiment of the invention also provides a computer storage medium, wherein computer execution instructions are stored in the computer storage medium, and when a processor executes the computer execution instructions, the data caching method is realized.
The embodiment of the invention also provides a computer program product, which comprises a computer program, wherein the computer program realizes the data caching method when being executed by a processor.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to implement the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each module may exist alone physically, or two or more modules may be integrated in one unit. The units formed by the modules can be realized in a form of hardware or a form of hardware and software functional units.
An integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. Such a software functional module is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform some of the steps of the methods described in the various embodiments of the present application.
It should be understood that the above processor may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), etc. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of a method disclosed in connection with the present invention may be executed directly by a hardware processor, or by a combination of hardware and software modules within a processor.
The memory may comprise high-speed RAM, and may further comprise non-volatile memory (NVM) such as at least one magnetic disk; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, etc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, etc. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, the buses in the drawings of the present application are not limited to a single bus or a single type of bus.
The storage medium may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may reside as discrete components in an electronic device or a master control device.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by hardware under the control of program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described therein may still be modified, or some or all of their technical features replaced by equivalents, without such modifications and substitutions departing from the spirit of the invention.

Claims (8)

1. A data caching method, comprising:
receiving recharging data of a plurality of terminals through a recharging service interface;
acquiring data to be cached according to the recharging data, and determining key data of the data to be cached, wherein the key data is the serial number of a recharge card;
determining a virtual index value of the serial number of the recharge card according to a hash algorithm, and determining an identifier of the server to be cached according to the virtual index value;
sending the data to be cached to the server to be cached according to the identifier of the server to be cached, so that the server to be cached stores the data to be cached;
before determining the key data of the data to be cached, the method further comprises:
performing serialized storage of the data to be cached according to a bytecode-generation mechanism in a serialization algorithm to obtain serialized storage data, and taking the serialized storage data as the data to be cached, wherein the serialized storage data is long-byte storage data, and, within a single serialization pass, each identical object is serialized only once, with subsequent occurrences replaced by an integer (int) reference to the object;
and determining key data of the data to be cached according to the serialized storage data.
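The routing steps of claim 1 can be sketched as follows. This is a minimal illustration only: the claim does not name a concrete hash algorithm, so MD5 over a 2**32 ring is assumed, and the names (`SERVERS`, `virtual_index`, `route`) are hypothetical.

```python
import hashlib

# Hypothetical pool of servers to be cached; the claim leaves the actual
# hash algorithm unspecified, so MD5 modulo a 2**32 ring is assumed here.
SERVERS = ["cache-0", "cache-1", "cache-2"]
RING_SIZE = 2**32

def virtual_index(serial_number: str) -> int:
    """Map the recharge-card serial number (the key data) to a virtual index value."""
    digest = hashlib.md5(serial_number.encode("utf-8")).hexdigest()
    return int(digest, 16) % RING_SIZE

def route(serial_number: str) -> str:
    """Determine the identifier of the server to be cached from the virtual index."""
    return SERVERS[virtual_index(serial_number) % len(SERVERS)]
```

Because the index depends only on the card's serial number, repeated recharge records for the same card deterministically land on the same cache server.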
2. The method according to claim 1, further comprising, before the determining of the server identifier to be cached according to the virtual index value:
receiving to-be-cached server information sent by all servers to be cached, and determining a virtual index value corresponding to each server to be cached according to the hash algorithm;
and generating a mapping list according to a preset number of virtual nodes and the virtual index values corresponding to each server to be cached, wherein the mapping list stores the identifiers of all servers to be cached and the virtual index range corresponding to each server to be cached.
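One way the mapping list of claim 2 might be built, under the assumption (common in consistent hashing) that each physical server contributes a preset number of virtual nodes, each hashed to its own virtual index; the function names and node-label format are illustrative, not from the patent:

```python
import hashlib

RING_SIZE = 2**32

def _index(key: str) -> int:
    # Hash a virtual-node label to a virtual index value (MD5 assumed).
    return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16) % RING_SIZE

def build_mapping_list(server_ids, vnodes_per_server=3):
    """Return a sorted list of (virtual index, server identifier) pairs.
    Each server contributes `vnodes_per_server` virtual nodes; the index
    range between consecutive entries belongs to the entry that ends it."""
    ring = [(_index(f"{sid}#vn{n}"), sid)
            for sid in server_ids
            for n in range(vnodes_per_server)]
    ring.sort()
    return ring
```

Adding or removing one server then shifts only the index ranges owned by that server's virtual nodes, which is the usual motivation for virtual nodes in cache-server hashing.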
3. The method according to claim 2, wherein the determining of the server identifier to be cached according to the virtual index value comprises:
determining, from the mapping list, the virtual index range containing the virtual index value;
and determining the identifier of the target cache server according to the virtual index range, and taking the identifier of the target cache server as the identifier of the server to be cached.
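The range lookup of claim 3 then reduces to a binary search over the sorted mapping list. A sketch, assuming (as an illustration, not the patent's format) that the mapping list is a sorted list of (virtual index, server identifier) pairs:

```python
import bisect

def lookup(mapping_list, virtual_index):
    """Given a sorted list of (virtual index, server id) pairs, find the
    range containing `virtual_index` and return the owning server id:
    the first node at or after the index, wrapping past the last node."""
    points = [p for p, _ in mapping_list]
    i = bisect.bisect_left(points, virtual_index)
    if i == len(points):   # beyond the last node: wrap to the start of the ring
        i = 0
    return mapping_list[i][1]
```

For example, with `ring = [(100, "s1"), (500, "s2"), (900, "s1")]`, an index of 400 falls in the range ending at 500 and resolves to `"s2"`, while 950 wraps around to the first node.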
4. A data caching apparatus, comprising:
an acquisition module, configured to receive recharging data of a plurality of terminals through a recharging service interface, acquire data to be cached according to the recharging data, and determine key data of the data to be cached, wherein the key data is the serial number of a recharge card;
a determining module, configured to determine a virtual index value of the serial number of the recharge card according to a hash algorithm, and determine an identifier of the server to be cached according to the virtual index value;
a sending module, configured to send the data to be cached to the server to be cached according to the identifier of the server to be cached, so that the server to be cached stores the data to be cached;
the apparatus further comprises a storage module, wherein the storage module is specifically configured to:
perform serialized storage of the data to be cached according to a bytecode-generation mechanism in a serialization algorithm to obtain serialized storage data, and take the serialized storage data as the data to be cached, wherein the serialized storage data is long-byte storage data, and, within a single serialization pass, each identical object is serialized only once, with subsequent occurrences replaced by an integer (int) reference to the object;
and determine key data of the data to be cached according to the serialized storage data.
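The "serialize each object once, replace repeats with an int" behaviour described for the storage module resembles the memo/handle table used by common serializers (e.g. Java object serialization or Python's pickle). A toy sketch, with an entirely hypothetical record format:

```python
def serialize(objects):
    """Serialize a sequence of objects; an object already seen in this
    pass is emitted as an int handle instead of being serialized again."""
    memo = {}    # id(obj) -> int handle assigned on first appearance
    records = []
    for obj in objects:
        handle = memo.get(id(obj))
        if handle is None:
            memo[id(obj)] = len(memo)
            records.append(("OBJ", repr(obj)))   # full payload, first time seen
        else:
            records.append(("REF", handle))      # int back-reference thereafter
    return records
```

Replacing repeated objects with small int handles is what keeps the serialized storage data compact when the same card record appears several times in one pass.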
5. A data caching server, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the data caching method of any one of claims 1 to 3.
6. A computer storage medium having computer-executable instructions stored therein which, when executed by a processor, implement the data caching method of any one of claims 1 to 3.
7. A recharging system, characterized by comprising the data caching server according to claim 5, S terminals, and M servers to be cached, wherein S and M are positive integers;
the terminal is used for collecting data to be cached and sending the data to be cached to the data caching server;
the data cache server is used for receiving recharging data of a plurality of terminals through the recharging service interface;
acquiring data to be cached according to the recharging data, and determining key data of the data to be cached, wherein the key data is the serial number of a recharge card; determining a virtual index value of the serial number of the recharge card according to a hash algorithm, and determining an identifier of the server to be cached according to the virtual index value; and sending the data to be cached to the server to be cached according to the identifier of the server to be cached;
the server to be cached is used for storing the data to be cached;
the data caching server is further configured to: perform serialized storage of the data to be cached according to a bytecode-generation mechanism in a serialization algorithm to obtain serialized storage data, and take the serialized storage data as the data to be cached, wherein the serialized storage data is long-byte storage data, and, within a single serialization pass, each identical object is serialized only once, with subsequent occurrences replaced by an integer (int) reference to the object; and determine key data of the data to be cached according to the serialized storage data.
8. The system of claim 7, wherein the server to be cached is a Redis server.

Publications (2)

Publication Number Publication Date
CN113704308A CN113704308A (en) 2021-11-26
CN113704308B true CN113704308B (en) 2024-03-12

Family

ID=78657284


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106775638A * 2016-11-22 2017-05-31 北京皮尔布莱尼软件有限公司 Object serialization method, apparatus and computing device
CN109446225A (en) * 2018-09-26 2019-03-08 平安科技(深圳)有限公司 Data cache method, device, computer equipment and storage medium
CN110336891A (en) * 2019-07-24 2019-10-15 中南民族大学 Data cached location mode, equipment, storage medium and device
CN110442848A * 2019-07-30 2019-11-12 中国工商银行股份有限公司 Data serialization and deserialization method and device, electronic device and medium
CN112333186A (en) * 2020-11-03 2021-02-05 平安普惠企业管理有限公司 Data communication method, device, equipment and storage medium
CN112463379A (en) * 2020-11-27 2021-03-09 北京浪潮数据技术有限公司 Management method and system of virtual cache server, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10089377B2 (en) * 2014-09-26 2018-10-02 Oracle International Corporation System and method for data transfer from JDBC to a data warehouse layer in a massively parallel or distributed database environment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Parallelization of tree-to-TLV serialization; Makoto Nakayama et al.; 2014 IEEE 33rd International Performance Computing and Communications Conference; 2015-01-22; pp. 1-2 *
Spark storage performance optimization based on RDD non-serialized local storage; Zhao Junxian et al.; Computer Science; 2019-05-15; Vol. 46, No. 3; pp. 143-149 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant