CN116483746A - Data caching method and unified caching device - Google Patents


Info

Publication number
CN116483746A
CN116483746A (application CN202310376856.8A)
Authority
CN
China
Prior art keywords
service
data
micro
unified
service data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310376856.8A
Other languages
Chinese (zh)
Inventor
石少锋
杨亦胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huawei Cloud Computing Technology Co ltd
Original Assignee
Shenzhen Huawei Cloud Computing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huawei Cloud Computing Technology Co ltd filed Critical Shenzhen Huawei Cloud Computing Technology Co ltd
Priority to CN202310376856.8A priority Critical patent/CN116483746A/en
Publication of CN116483746A publication Critical patent/CN116483746A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F 12/0873 Mapping of cache memory to specific storage devices or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G06F 16/2365 Ensuring data consistency and integrity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0631 Item recommendations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Mining & Analysis (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Computing Systems (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • Health & Medical Sciences (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a data caching method and a unified caching device. The method is applied to a micro-service architecture that includes at least one first micro-service, at least one second micro-service and a unified caching system, and includes the following steps: the unified caching system acquires a first mapping relation, where the first mapping relation is a correspondence between a plurality of data objects and a plurality of pieces of service data, and each piece of service data is determined according to its corresponding data object; the unified caching system receives service request information of one or more second micro-services, where the service request information includes a first data object; and the unified caching system determines, according to the first mapping relation, first service data corresponding to the first data object and sends the first service data to the one or more second micro-services. According to this technical scheme, the unified caching system stores and transmits the data, which improves both the efficiency of acquiring service data and the efficiency of communication and coordination between micro-services.

Description

Data caching method and unified caching device
Technical Field
The embodiment of the application relates to the field of cloud computing, and more particularly relates to a data caching method and a unified caching device.
Background
In a micro-service architecture scenario, different micro-services are run by systems of different service domains, and the different domain systems have data dependency relations with one another. In high-concurrency, low-latency scenarios, cross-system data acquisition occupies a large share of the limited thread and network resources, so that concurrency cannot be increased and latency cannot be reduced.
One micro-service may rely on the data of multiple business domains, and the data of one business domain may also be relied on by multiple micro-services. With the traditional data caching scheme, a separate caching system has to be built for each micro-service, and this repeated construction wastes a large amount of resources.
Disclosure of Invention
The embodiment of the application provides a data caching method and a unified caching device, which can improve the efficiency of acquiring business data between micro services and the efficiency of communication coordination between the micro services by establishing a unified caching system of a micro service architecture.
In a first aspect, a method of data caching is provided, the method being applied to a micro-service architecture, the micro-service architecture including at least one first micro-service, at least one second micro-service, and a unified caching system, business data for business processing of the at least one second micro-service being provided by the at least one first micro-service, the method comprising: the unified cache system acquires a first mapping relation, wherein the first mapping relation is a corresponding relation between a plurality of data objects and a plurality of service data, each service data is determined according to the corresponding data object, the plurality of service data is provided by one or a plurality of first micro services, and the one or the plurality of first micro services are part or all of at least one first micro service; the unified cache system receives service request information of one or more second micro services, wherein the service request information comprises a first data object, the first data object is part or all of a plurality of data objects, and the one or more second micro services are part or all of at least one second micro service; the unified cache system determines first business data corresponding to the first data object according to the first mapping relation; the unified cache system sends the first service data to one or more second micro services, and the one or more second micro services are used for performing service processing according to the first service data.
According to the technical scheme provided by the application, the business data is sent to the dependent service through the unified cache system, interface interaction between the dependent service and the depended service is eliminated, the efficiency of data acquisition between micro-services is improved, and the problem of uneven cache utilization among different micro-services is avoided. Meanwhile, the unified cache system unifies the data objects of the micro-services, avoiding the problem of inconsistent data formats among micro-services.
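The flow of the first aspect can be sketched as follows. This is a minimal illustrative sketch: the class and method names (`UnifiedCache`, `register`, `handle_request`) and the example data are assumptions, not part of the claimed scheme.

```python
# Hypothetical sketch of the unified cache flow: a depended (first)
# micro-service publishes service data under a shared data object, and
# dependent (second) micro-services read it back without calling the provider.

class UnifiedCache:
    def __init__(self):
        # First mapping relation: data object -> corresponding service data.
        self.mapping = {}

    def register(self, data_object, service_data):
        # A first micro-service provides service data for a data object.
        self.mapping[data_object] = service_data

    def handle_request(self, data_object):
        # Service request information from a second micro-service contains
        # a data object; the cache determines and returns the service data.
        return self.mapping.get(data_object)

cache = UnifiedCache()
cache.register("user_profile", {"user_name": "alice", "phone": "13800000000"})
order_view = cache.handle_request("user_profile")  # no call to the user service
```

Any number of second micro-services can issue the same request against the cache, so the depended service is written to once and read many times.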
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the unified cache system acquires a security level value corresponding to the first data object; and the unified cache system takes security measures for the first service data according to the security level value corresponding to the first data object.
With reference to the first aspect, in certain implementations of the first aspect, the security level value of the first data object is greater than or equal to a security level threshold, and taking security measures on the first service data includes at least one of the following: the unified cache system encrypts and stores the first service data; in the case where the unified cache system transmits the first service data to one or more second micro-services, the unified cache system sets an authentication measure for the first service data.
According to the technical scheme provided by the application, the data protection capability is provided through trusted data access credential management and service data encryption storage, and the security and the reliability of data query are ensured.
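As a minimal sketch of these security measures, the fragment below gates both storage and access on a security level value. The HMAC credential and the byte-reversal "cipher" are illustrative stand-ins only (a real system would use an authenticated cipher such as AES-GCM and proper key management); all names and the threshold value are assumptions.

```python
import hmac
import hashlib

SECURITY_LEVEL_THRESHOLD = 3   # assumed security level threshold
SECRET_KEY = b"demo-key"       # illustrative only; never hard-code keys

def encrypt_stub(data: str) -> bytes:
    # NOT real encryption: a reversible stand-in so the control flow is visible.
    return data.encode()[::-1]

def decrypt_stub(payload: bytes) -> str:
    return payload[::-1].decode()

def access_token(data_object: str) -> str:
    # Trusted data access credential: a consumer must present this HMAC
    # before the cache releases high-security service data.
    return hmac.new(SECRET_KEY, data_object.encode(), hashlib.sha256).hexdigest()

def store(cache: dict, data_object: str, service_data: str, security_level: int):
    if security_level >= SECURITY_LEVEL_THRESHOLD:
        cache[data_object] = ("encrypted", encrypt_stub(service_data))
    else:
        cache[data_object] = ("plain", service_data)

def fetch(cache: dict, data_object: str, token: str = None):
    kind, payload = cache[data_object]
    if kind == "encrypted":
        if token != access_token(data_object):
            raise PermissionError("invalid access credential")
        return decrypt_stub(payload)
    return payload

cache = {}
store(cache, "salary_data", "confidential", security_level=5)   # above threshold
store(cache, "product_list", "apple,banana", security_level=1)  # below threshold
```

Low-security data is served directly, while high-security data is stored encrypted and released only against a valid credential.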
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: when the first service data is changed, the unified cache system generates change notification information for prompting the first service data to be changed into the second service data.
According to the technical scheme provided by the application, the event change of the data object is managed, so that the data consistency among the micro services is enhanced.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the unified cache system takes data synchronization measures for modifying the first service data stored in the unified cache system to the second service data.
According to the technical scheme provided by the application, the data consistency among the micro services is enhanced by automatically carrying out cache updating processing.
With reference to the first aspect, in certain implementations of the first aspect, an execution period of the data synchronization measure is determined according to a target time threshold.
According to the technical scheme provided by the application, the data synchronization measure is carried out periodically, which avoids an excessive synchronization frequency and the waste of synchronization resources that small, frequent changes of service data would otherwise cause.
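A periodic synchronization driven by a target time threshold can be sketched as below; the class name, the period value, and the flush mechanics are assumptions for illustration, not the application's implementation.

```python
import time

class SyncBuffer:
    """Batches service-data changes and flushes them to the unified cache
    once per execution period (the target time threshold), instead of on
    every individual change."""

    def __init__(self, cache: dict, period_seconds: float):
        self.cache = cache
        self.period = period_seconds
        self.pending = {}                     # data object -> latest change
        self.last_flush = time.monotonic()

    def record_change(self, data_object, new_data):
        # Later changes to the same object overwrite earlier ones, so a
        # burst of small changes costs only one synchronization.
        self.pending[data_object] = new_data
        self.maybe_flush()

    def maybe_flush(self, force: bool = False):
        if force or time.monotonic() - self.last_flush >= self.period:
            self.cache.update(self.pending)
            self.pending.clear()
            self.last_flush = time.monotonic()

unified_cache = {"price": 10}
buf = SyncBuffer(unified_cache, period_seconds=3600)  # assumed 1-hour period
buf.record_change("price", 11)
buf.record_change("price", 12)
stale_value = unified_cache["price"]  # period not yet elapsed, cache unchanged
buf.maybe_flush(force=True)           # e.g. a periodic timer firing
```

Between flushes, consumers may read slightly stale data, which is the trade-off the periodic scheme accepts in exchange for fewer synchronization operations.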
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the unified caching system determines that the frequency of using the first service data is higher than a target frequency threshold when one or more second micro services perform service processing; the unified caching system sends the first business data to the local caches of the one or more second micro services.
According to the technical scheme provided by the application, a local caching capability is provided in which hot data objects are instantiated at the data-service micro-service, which improves system processing efficiency and further improves hot-cache query performance.
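The hot-data push can be sketched as a simple frequency counter over requests; the class name, threshold value, and data are illustrative assumptions.

```python
from collections import Counter

class HotDataPusher:
    """When a data object's request frequency exceeds the target frequency
    threshold, its service data is pushed into the local caches of the
    requesting (second) micro-services."""

    def __init__(self, cache: dict, threshold: int):
        self.cache = cache
        self.threshold = threshold
        self.hits = Counter()   # per-data-object request frequency

    def serve(self, data_object, local_caches):
        self.hits[data_object] += 1
        data = self.cache[data_object]
        if self.hits[data_object] > self.threshold:
            for local in local_caches:     # consumers' local caches
                local[data_object] = data  # subsequent reads stay local
        return data

unified_cache = {"sku_1001": {"name": "apple", "price": 3}}
pusher = HotDataPusher(unified_cache, threshold=2)
order_local, recommend_local = {}, {}
for _ in range(3):  # the third request crosses the threshold and triggers the push
    pusher.serve("sku_1001", [order_local, recommend_local])
```

After the push, the second micro-services answer hot queries from their local caches without touching the unified cache at all.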
In a second aspect, there is provided a unified caching apparatus applied to a micro-service architecture, the micro-service architecture further including at least one first micro-service and at least one second micro-service, service data for service processing of the at least one second micro-service being provided by the at least one first micro-service, the unified caching apparatus comprising: an acquisition module, configured to acquire a first mapping relation, where the first mapping relation is a correspondence between a plurality of data objects and a plurality of pieces of service data, each piece of service data is determined according to its corresponding data object, the plurality of pieces of service data are provided by one or more first micro-services, and the one or more first micro-services are part or all of the at least one first micro-service; a receiving module, configured to receive service request information of one or more second micro-services, where the service request information includes a first data object, the first data object is part or all of the plurality of data objects, and the one or more second micro-services are part or all of the at least one second micro-service; and a sending module, configured to determine first service data corresponding to the first data object according to the first mapping relation and send the first service data to the one or more second micro-services, where the one or more second micro-services are used for performing service processing according to the first service data.
With reference to the second aspect, in some implementations of the second aspect, the unified caching device further includes a processing module, where the obtaining module is specifically configured to obtain a security level value corresponding to the first data object; and the processing module is used for taking security measures for the first service data according to the security level value corresponding to the first data object.
With reference to the second aspect, in some implementations of the second aspect, the security level value of the first data object is greater than or equal to a security level threshold, and the processing module is specifically configured to at least one of: encrypting and storing the first service data; in the case that the transmission module transmits the first service data to one or more second micro services, an authentication measure is set for the first service data.
With reference to the second aspect, in some implementations of the second aspect, the unified caching apparatus further includes a generating module, where the generating module is configured to, when the first service data changes, generate change notification information for indicating that the first service data has changed into second service data.
With reference to the second aspect, in certain implementations of the second aspect, the processing module is specifically configured to take a data synchronization measure, where the data synchronization measure is used to modify the stored first service data into the second service data.
With reference to the second aspect, in certain implementations of the second aspect, the execution period of the data synchronization measure is determined according to a target time threshold.
With reference to the second aspect, in some implementations of the second aspect, the processing module is specifically configured to determine that a frequency of using the first service data when the one or more second micro services perform service processing is higher than a target frequency threshold; the sending module is specifically configured to send the first service data to a local cache of one or more second micro services.
In a third aspect, there is provided an apparatus comprising a processor and a memory, where the memory is configured to store instructions and the processor is configured to execute the instructions stored in the memory, to cause the apparatus to perform the method of the first aspect or any one of the possible implementations of the first aspect.
In the alternative, the processor may be a general purpose processor, and may be implemented in hardware or in software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented in software, the processor may be a general-purpose processor, implemented by reading software code stored in a memory, which may be integrated in the processor, or may exist separately from the processor.
In a fourth aspect, a chip is provided, which obtains instructions and executes the instructions to implement the method of the first aspect or any one of the possible implementation manners of the first aspect.
Optionally, as an implementation manner, the chip includes a processor and a data interface, where the processor reads instructions stored on a memory through the data interface, and performs the method in the first aspect or any one of the possible implementation manners of the first aspect.
Optionally, as an implementation manner, the chip may further include a memory, where the memory stores instructions, and the processor is configured to execute the instructions stored on the memory, where the instructions, when executed, are configured to perform the method in the first aspect or any one of the possible implementation manners of the first aspect.
In a fifth aspect, there is provided a computer program product comprising instructions which, when executed by a computing device or cluster of computing devices, cause the computing device or cluster of computing devices to perform the method of any one of the possible implementations of the first aspect or the first aspect.
In a sixth aspect, a computer readable storage medium is provided, comprising computer program instructions which, when executed by a computing device or a cluster of computing devices, cause the computing device or the cluster of computing devices to perform the method of the first aspect or any one of the possible implementations of the first aspect.
By way of example, such computer-readable storage media include, but are not limited to, one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), flash memory, electrically erasable PROM (EEPROM), and hard disk drive (hard drive).
Alternatively, as an implementation manner, the storage medium may be a nonvolatile storage medium.
Drawings
Fig. 1 is a schematic system diagram of a cloud server system according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a conventional architecture and a micro-service architecture according to an embodiment of the present application.
Fig. 3 is a schematic block diagram of a method for data caching according to an embodiment of the present application.
Fig. 4 is a schematic diagram of unified cache management according to an embodiment of the present application.
Fig. 5 is a schematic block diagram of a unified caching device according to an embodiment of the present application.
Fig. 6 is a schematic block diagram of a computing device provided in an embodiment of the present application.
Fig. 7 is a schematic block diagram of a computing device cluster provided in an embodiment of the present application.
FIG. 8 is a schematic block diagram of another cluster of computing devices provided by an embodiment of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be understood that, in the various embodiments of the present application, the magnitude of the sequence number of each process does not imply an order of execution; the execution sequence of each process should be determined by its functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation process of the embodiments of the present application.
In addition, in the embodiments of the present application, words such as "exemplary" and "for example" are used to indicate an example, instance, or illustration. Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of these words is intended to present concepts in a concrete fashion.
In the embodiments of the present application, the two terms both rendered as "corresponding" may sometimes be used interchangeably; it should be noted that the meanings to be expressed are consistent when the distinction is not emphasized.
The network architecture and the service scenario described in the embodiments of the present application are for more clearly describing the technical solution of the embodiments of the present application, and do not constitute a limitation on the technical solution provided in the embodiments of the present application, and those skilled in the art can know that, with the evolution of the network architecture and the appearance of the new service scenario, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: the case where A alone exists, the case where both A and B exist, and the case where B alone exists, where A and B may each be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" the following items or the like means any combination of these items, including any combination of a single item or plural items. For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c may each be singular or plural.
A micro-service architecture is a technology for deploying applications and services in a cloud environment, and includes a plurality of micro-services. The micro-service approach advocates dividing a single application into a set of small services that coordinate and interwork with each other to provide the user with the final value. Each service runs in an independent process, and the services communicate with each other using a lightweight communication mechanism. A micro-service is an independent entity: it may be deployed independently on a platform, i.e., a platform as a service (PaaS), or may exist as an operating system process.
To facilitate understanding of the cloud environment, fig. 1 shows a system schematic diagram based on a cloud server system. As shown in fig. 1, the cloud management platform 110 is configured to manage an infrastructure that provides a plurality of cloud services, where the infrastructure includes a plurality of cloud data centers, each cloud data center includes a plurality of servers, each server includes a cloud service resource, and provides a corresponding cloud service for a tenant.
The cloud management platform 110 provides an access interface (such as a user interface or an application program interface (API)). A tenant may operate a client to remotely access this interface, register a cloud account number and password in the cloud management platform, and log in to the cloud management platform. After the cloud management platform successfully authenticates the cloud account number and password, the tenant may further pay to select and purchase a virtual machine with a specific specification (processor, memory and disk). After the purchase succeeds, the cloud management platform provides the remote login account number and password of the purchased virtual machine, and the client may remotely log in to the virtual machine and install and run the tenant's applications in it. The tenant of the cloud service may be a person, business, school, hospital, administrative authority, or the like.
The functions of the cloud management platform 110 include, but are not limited to: a user console, a computing management service, a network management service, a storage management service, an authentication service, and an image management service. The user console provides interfaces or APIs to interact with tenants; the computing management service is used for managing servers running virtual machines and containers as well as bare metal servers; the network management service is used for managing network services (such as gateways and firewalls); the storage management service is used for managing storage services (such as data bucket services); the authentication service is used for managing tenants' account passwords; and the image management service is used for managing virtual machine images. A tenant uses the client 130 to log in to the cloud management platform 110 through the internet 120 and manage the rented cloud services.
For example, in an embodiment of the present application, a cloud server for unified caching in a cloud data center may receive and store service data sent by one or more micro services in a micro service architecture, and send the service data stored in the cloud server according to service request information of other micro services.
To facilitate understanding of the micro-service architecture, fig. 2 shows a schematic diagram of a conventional architecture and a micro-service architecture. As shown in fig. 2, an online supermarket may have functional services such as a user service, a commodity service and an order service. As shown in (a) of fig. 2, in the conventional service architecture, a developer may place all the above-mentioned functional services in a unified system, such as an application program of a mobile terminal or a browser website. In this traditional service architecture, the website and the mobile application contain much repeated code implementing the same business logic, and the interface relationships used to invoke data are cluttered. In addition, when maintaining the traditional architecture, if a small function within a certain functional service needs to be changed, that function has to be modified in all systems, which is inefficient.
However, in the micro service architecture, as shown in fig. 2 (b), a developer may create applications corresponding to the respective functional services according to different service areas of the functional services. These applications can be developed, managed, and iterated independently. The application background only needs to acquire the required data from the services, so that a large amount of redundant codes are deleted, and only a control layer and a front end are left.
In a micro-service architecture, different domain systems of services run different micro-services, and the different domain systems have data dependencies on each other. For example, the user service, commodity service and order service in fig. 2 correspond to the user business, commodity business and order business, respectively; the establishment of the order business relies on the user business and the commodity business, and thus the order service depends on the business data of the user service and the commodity service. For convenience of description, a micro-service that depends on the business data of other micro-services will hereinafter be referred to as a "dependent service", and a micro-service whose business data is depended on by other micro-services will hereinafter be referred to as a "depended service". In the embodiments of the present application, one micro-service may be both a dependent service and a depended service; that is, it may depend on the business data of other micro-services while its own business data is also depended on by other micro-services.
In the micro-service architecture at the present stage, each micro-service builds its own cache system. Caching refers to temporarily holding some data in a high-speed memory for reading and re-reading, so as to reduce the latency of acquiring the data. For example, the depended service establishes a local cache and stores the service data generated by its service processing there, and when the dependent service calls for the service data, it is obtained from the local cache of the depended service. In addition, the dependent service may also establish a local cache: after acquiring service data from the depended service, the dependent service stores it in its local cache; subsequently, data is preferentially obtained from the local cache, and is fetched from the depended service only when it is not in the cache.
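The per-service pattern described above can be sketched as a read-through local cache; the function names and the stand-in for the depended service are assumptions used only to make the flow concrete.

```python
def get_business_data(local_cache: dict, data_object, fetch_from_depended_service):
    # Read-through pattern of the current stage: each dependent service
    # keeps its own local cache and falls back to a cross-system call on a miss.
    if data_object in local_cache:
        return local_cache[data_object]              # cache hit, no network
    data = fetch_from_depended_service(data_object)  # costly cross-system call
    local_cache[data_object] = data                  # populate for next time
    return data

remote_calls = []
def fetch_from_user_service(obj):   # stands in for the depended service's interface
    remote_calls.append(obj)
    return f"data-for-{obj}"

order_local_cache = {}
first = get_business_data(order_local_cache, "user:1", fetch_from_user_service)
second = get_business_data(order_local_cache, "user:1", fetch_from_user_service)
```

Because every dependent service repeats this pattern with its own cache, the same data ends up duplicated in many caches, which is exactly the waste the next paragraph describes.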
However, when there are many micro-services, correspondingly many caches need to be created, and some data is stored in several different caches, resulting in repeated caching of data and waste of cache space. In addition, the cache utilization of different micro-services is uneven, resources applied for according to peak traffic cannot be shared, and the waste of cache resources is serious.
In addition, because of the strong coupling between the dependent service and the depended service, that is, data exchange is realized through specific data objects and interfaces, there may be format inconsistency between the data cached by the depended service and the data required by the dependent service, as well as complexity in interface changes. For example, the user service may provide service data whose data object is "user name and mobile phone number", while the data object of the service data needed for the service processing of the order service is "mobile phone number and user name", so the order service needs to process the data before it can use the service data provided by the user service. For another example, if the interface between the user service and the order service needs to be updated and upgraded, the two micro-services must coordinate to achieve a synchronized update; correspondingly, the commodity service, which has a dependency relationship with the order service, may also have to update its interface along with the update of the order service's interface. When more interfaces in the system need to be updated and upgraded, the complexity and workload of communication and coordination are high.
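The format mismatch in the "user name and mobile phone number" example amounts to an adapter that every dependent service must maintain today; the function below is a hypothetical sketch of that adapter, with assumed field names.

```python
def normalize(record: dict, target_fields: list) -> dict:
    # Hypothetical adapter a dependent service needs today: reorder the
    # provider's fields into the format the consumer expects.
    return {field: record[field] for field in target_fields}

# The user service publishes "user name and mobile phone number" ...
provided = {"user_name": "alice", "phone": "13700000000"}
# ... while the order service expects "mobile phone number and user name".
adapted = normalize(provided, ["phone", "user_name"])
```

A unified cache that standardizes data objects removes the need for each consumer to carry such per-provider adapters.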
To solve the above problems, the present application proposes a method 300 for data caching. The method applies to a micro-service architecture comprising at least one dependent service, at least one depended service, and a unified cache system. Fig. 3 shows a schematic block diagram of method 300, which comprises steps 310, 320 and 330. Unlike the self-built cache of each micro-service at the current stage, the technical scheme of the present application builds one unified cache system for the whole micro-service architecture. When the unified cache system executes method 300, it improves not only the efficiency of acquiring business data between micro-services but also the efficiency of communication and coordination among them.
Step 310: the unified cache system obtains a first mapping relation, which is the correspondence between a plurality of data objects and a plurality of pieces of business data, each piece of business data being determined according to its corresponding data object.
It should be appreciated that business data is received and stored by the unified cache system through a specific data object, which solves the problem of inconsistent data formats between the dependent service and the depended service. A data object is a set of business data of the same nature; that is, a mapping relation exists between a data object and the business data of that nature. The description of a data object may include the definition, model and usage specification of the data, for example its home domain, data name, data parameters, data type, management policy, data change event, synchronization interface, field information, status, and the policy for cached-data management; the business data matching that description is the business data having a mapping relation with the data object. When a dependent service needs business data for its processing, the unified cache system can send it the business data corresponding to the data object according to the mapping relation. The mapping relation is illustrated by example in the embodiments below and is not detailed here.
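As a minimal sketch of the first mapping relation described above, the snippet below pairs a data object description (home domain, name, data type, value range, status) with the business data mapped to it. All class, method and attribute names are illustrative assumptions, not taken from an actual implementation.

```python
class DataObject:
    """Description of a data object: definition, model and usage specification."""
    def __init__(self, home_domain, name, data_type, value_range, status):
        self.home_domain = home_domain
        self.name = name
        self.data_type = data_type
        self.value_range = value_range
        self.status = status

class UnifiedCache:
    def __init__(self):
        self.objects = {}   # data object name -> DataObject description
        self.mapping = {}   # first mapping relation: object name -> business data

    def configure(self, obj):
        self.objects[obj.name] = obj
        self.mapping.setdefault(obj.name, {})

    def store(self, obj_name, key, value):
        # Business data is accepted only through a configured data object,
        # which removes format mismatches between provider and consumer.
        obj = self.objects[obj_name]
        if obj.value_range and value not in obj.value_range:
            raise ValueError(f"{value!r} is outside the range of {obj_name}")
        self.mapping[obj_name][key] = value

cache = UnifiedCache()
cache.configure(DataObject("customer domain", "customer_level", "enum",
                           ["V%d" % i for i in range(11)], "published"))
cache.store("customer_level", "Company A", "V5")
```

Because every piece of business data must satisfy its data object's description before it is stored, a consumer never needs to reconcile formats with each individual provider.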
Alternatively, a data object may be determined according to the specific content of the business data required by the dependent service, according to data object information entered by a data manager through the management platform, or according to the specific content of the business data generated by the depended service.
Step 320: the unified cache system receives service request information from one or more dependent services, the service request information including a first data object.
It should be understood that when a dependent service needs business data for its processing, it obtains the data from the unified cache system: it sends service request information to the unified cache system, and the unified cache system queries the business data the dependent service needs according to that information. The first data object is the data object corresponding to the first business data required for the dependent service's processing, and the first business data is part or all of the plurality of business data.
Step 330: the unified cache system determines the first business data corresponding to the first data object according to the first mapping relation, and sends the first business data to the one or more dependent services.
It should be understood that, according to the first mapping relation, the unified cache system finds the first business data corresponding to the first data object among all the business data it stores, and sends it to the dependent service for the dependent service's processing. The one or more dependent services use the first business data to perform their business processing.
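Steps 320 and 330 can be sketched as follows: a dependent service sends service request information carrying the first data object, and the unified cache system resolves it to the first business data through the first mapping relation. The dictionary-based store, the example values and the function name are assumptions made for illustration.

```python
# First mapping relation: data object -> business data (illustrative values).
first_mapping = {
    "customer_level": {"Company A": "V5", "Client B": "V10"},
    "payment_method": {"Company A": "prepaid"},
}

def handle_service_request(request):
    """Steps 320/330: look up the business data for the requested data object."""
    data_object = request["data_object"]        # the first data object
    business_data = first_mapping.get(data_object)
    if business_data is None:
        raise KeyError(f"no mapping for data object {data_object!r}")
    return business_data                        # sent back to the dependent service

reply = handle_service_request({"data_object": "customer_level"})
```

The dependent service never contacts the depended service directly; the unified cache system answers from its own store.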
In method 300, the unified cache system determines the business data of the dependent service according to the first mapping relation and sends it to the dependent service. This removes the interface interaction between dependent and depended services, improves the efficiency of data acquisition between micro-services, and avoids the uneven cache utilization of separate per-service caches. At the same time, the unified cache system unifies the data objects of the micro-services, avoiding inconsistent data formats between them.
A specific embodiment is provided below to introduce the detailed process of method 300. Fig. 4 shows a schematic diagram of the data cache provided in the present application. The micro-service architecture shown in Fig. 4 includes a plurality of dependent services, a plurality of depended services, and a unified cache system. Fig. 4 also shows two example data objects: customer level and payment method.
Illustratively, the descriptions of the data objects for customer level and payment method are shown in Fig. 4 and include the home domain, name, parameter name, data type, value range, status, and so on. Optionally, a data object may also carry a caching policy such as full caching. The unified cache system can receive and store business data mapped to customer level and payment method. For example, the business data mapped to customer level is the business data matching the customer-level data object description, such as "home domain: customer domain; data type: enumeration; value: V5; status: published". In the embodiment of the present application, dependent service 1 and dependent service 2 can acquire the business data corresponding to customer level and payment method for their business processing.
Alternatively, data objects such as customer level and payment method may be determined based on the type of business data of the plurality of depended services. For example, depended service 1 corresponds to a service in the user domain, that is, it generates customer-related business data, including the customer's level; the unified cache system, or the manager of depended service 1, may determine the customer-level data object from the business-data type of depended service 1. For another example, depended service 2 corresponds to a service in the order domain, that is, it generates order-related business data, including the payment of the order; the unified cache system, or the manager of depended service 2, may determine the payment-method data object from the business-data type of depended service 2.
Alternatively, a data object may be determined by the type of business data the dependent services require for their processing. For example, if dependent services 1 and 2 require business data relating to the customer's level and the order's payment method, the manager can define the customer-level and payment-method data objects based on the business-data types the dependent services specifically require. The content of a particular data object also needs to be determined from the business data of the dependent services.
In the embodiment of the present application, the manager configures data objects such as customer level and payment method into the unified cache system, and the unified cache system receives and stores the business data sent by the depended services that corresponds to those data objects: for example, the business data of depended service 1 with value range [V0, V10], data type enumeration and status published, such as "Company A: V5" and "Client B: V10", and the business data of depended service 2 with value range [prepaid, monthly], data type enumeration and status published, such as "Company A: prepaid". At the same time, the unified cache system can tag the stored business data with attributes such as home domain, name and parameter name, so as to distinguish entries within the stored full data, e.g. "Company A; home domain: customer domain; name: customer level; value: V5".
After the unified cache system has stored the business data, a dependent service that wants to query it for business processing sends service request information to the unified cache system. In the embodiment of the present application, if dependent service 1 needs to query the business data corresponding to customer level, it can send the customer-level data object to the unified cache system, and the unified cache system queries the stored business data corresponding to customer level according to the mapping relation between data objects and business data. Optionally, dependent service 1 may also send a value range together with the customer-level data object. For example, if the unified cache system stores customer levels in the range [V1, V10], dependent service 1 can send the customer-level data object together with the value V5 or the range [V5, V8]; the unified cache system then returns to dependent service 1 the business data whose customer level is V5, or falls within [V5, V8], such as the above "Company A: V5".
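The optional value-range filter in this example can be sketched as below: the dependent service supplies the data object name plus a single value or a set of values, and only matching entries are returned. The stored values mirror the example above; the function and variable names are assumptions.

```python
# Business data stored under the customer_level data object (illustrative).
stored = {
    "customer_level": {"Company A": "V5", "Client B": "V10", "Client C": "V8"},
}

def query(data_object, value_range=None):
    """Return all entries for a data object, optionally filtered by value."""
    entries = stored[data_object]
    if value_range is None:
        return dict(entries)            # no filter: return everything
    wanted = set(value_range)
    return {k: v for k, v in entries.items() if v in wanted}

# The "V5 or [V5, V8]" cases from the example:
hit = query("customer_level", ["V5"])
span = query("customer_level", ["V5", "V8"])
```

Filtering in the cache keeps the dependent service from transferring and discarding data it does not need.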
Alternatively, the unified cache system may include a query software development kit (SDK): a dependent service sends its service request information, that is, the data object in this embodiment, to the unified cache system through the query SDK integrated in the dependent service. The unified cache system queries the business data the dependent service needs through the query SDK and returns it through that same integrated SDK.
Optionally, each data object may correspond to one or more security level values, so that the unified cache system applies security measures to the business data corresponding to the data object and avoids data leakage. For example, with security levels L1 to L4, L4 is the highest and L1 the lowest, requiring no measures. If the security level of customer level is L2, the unified cache system encrypts the corresponding business data when storing what depended service 1 sends. For another example, if the security level of payment method is L3, the unified cache system sets an authentication measure on the corresponding business data of depended service 2 when sending it, to prevent unauthorized access. If a data object's security level is L4, the unified cache system applies both encrypted storage and authentication to the business data corresponding to that data object. For another example, the security level value may further include a desensitization level: the unified cache system desensitizes the business data of a data object with desensitization level 1 when storing or sending it.
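The level-to-measure scheme above can be sketched as a small lookup table: L1 needs nothing, L2 encrypts at rest, L3 authenticates access, L4 does both, and a separate desensitization flag adds masking. The table mirrors the examples in the text; the function name is an assumption.

```python
# Security measures applied per level, per the L1-L4 scheme described above.
MEASURES = {
    "L1": set(),                          # no measures required
    "L2": {"encrypt"},                    # encrypted storage
    "L3": {"authenticate"},               # authentication on access
    "L4": {"encrypt", "authenticate"},    # both
}

def measures_for(level, desensitize=False):
    """Return the set of measures the cache applies to a data object's data."""
    steps = set(MEASURES[level])
    if desensitize:
        steps.add("desensitize")          # mask values when storing/sending
    return steps
```

Keeping the policy on the data object, rather than in each micro-service, means every consumer gets the same protections automatically.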
Optionally, when the unified cache system sends business data to a dependent service, it can decide, according to how frequently the dependent service uses that data, whether to push it directly into the dependent service's local cache; that is, only frequently used business data is stored in the local cache of the dependent service. This further improves the query performance and efficiency for hot business data.
For example, usage frequency may be determined from hot data objects set by the manager. After configuring the data objects into the unified cache system, the manager may predefine hot data objects, that is, data objects the manager judges to be used more frequently by dependent services. For example, of the customer level and payment method shown in Fig. 4, the manager may define customer level as a hot data object and payment method as a non-hot one. Once customer level is determined to be a hot data object, when dependent service 1 and dependent service 2 need its corresponding business data for processing, the unified cache system can push that business data directly into their local caches through the query SDK. Afterwards, when dependent services 1 and 2 need the customer-level business data again, they read it from their local caches first.
Alternatively, usage frequency may be determined by the unified cache system from how often dependent services request a data object. For example, if dependent service 2 requests the payment method through service request information more often than a target threshold, such as 10 times, or 50% of its total service requests, the unified cache system determines that the business data corresponding to payment method is commonly used by dependent service 2. Thereafter, upon receiving service request information from dependent service 2, the unified cache system can push the payment-method business data directly into the local cache of dependent service 2.
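The frequency rule in the last two paragraphs can be sketched as a counter with an absolute and a relative threshold: once a data object's request count crosses either one, it is treated as hot and its business data is pushed to the requester's local cache. The thresholds (10 requests, 50% of calls) come from the example; the class layout is an assumption.

```python
from collections import Counter

class HotTracker:
    """Decide whether a data object is 'hot' for one dependent service."""
    def __init__(self, abs_threshold=10, ratio_threshold=0.5):
        self.calls = Counter()
        self.total = 0
        self.abs_threshold = abs_threshold
        self.ratio_threshold = ratio_threshold

    def record(self, data_object):
        self.calls[data_object] += 1
        self.total += 1

    def is_hot(self, data_object):
        # Hot if the object exceeds the absolute count, or makes up
        # at least the target share of this service's total requests.
        n = self.calls[data_object]
        return n > self.abs_threshold or (
            self.total > 0 and n / self.total >= self.ratio_threshold)

tracker = HotTracker(abs_threshold=10)
for _ in range(11):                     # dependent service 2 requests 11 times
    tracker.record("payment_method")
```

A hot result would trigger the direct push to the local cache; cold objects keep being served from the unified cache only.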
If the unified cache system only stored data and exchanged it with the micro-services in the architecture, it could not, by itself, handle changes when the business data provided by a depended service changes.
Optionally, the unified cache system may include asynchronous event management: when business data provided by a depended service changes, the unified cache system promptly pushes the data object corresponding to the changed business data. For example, when configuring a data object, its attributes may further include an event ID, e.g. customer_level_update_event for customer level and event_method_update_event for payment method. When the business data corresponding to customer level provided by depended service 1 changes, say "Company A: V5" becomes "Company A: V10", depended service 1 can notify the unified cache system through a Kafka interface, an online notification, or the like, and the unified cache system pushes the customer-level event ID in real time, that is, it informs the manager that the business data corresponding to customer level has changed.
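A sketch of this asynchronous event push: when a depended service reports a change (for example through a Kafka interface), the unified cache system pushes the configured event ID to its subscribers. The callback registry is an assumption; only the customer-level event ID from the example is used.

```python
# Event IDs configured on the data objects (per the example above).
event_ids = {"customer_level": "customer_level_update_event"}
subscribers = []    # callbacks registered by managers or other services
pushed = []         # record of event IDs pushed, for demonstration

def subscribe(callback):
    subscribers.append(callback)

def on_data_changed(data_object):
    """Invoked when a depended service reports changed business data."""
    event_id = event_ids[data_object]
    for callback in subscribers:
        callback(event_id)              # real-time push of the event ID

subscribe(pushed.append)
on_data_changed("customer_level")       # "Company A: V5" became "Company A: V10"
```

The push only announces *which* data object changed; re-fetching the new values is a separate, synchronous step.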
Optionally, after pushing the event ID of a data object in real time, the unified cache system may also send broadcast information. For example, the broadcast information may carry the customer-level event ID, notifying every micro-service in the architecture that may use the customer-level business data for its processing that this data has changed.
Optionally, the unified cache system may also perform synchronous data updates: after pushing the event ID of a data object whose business data has changed, it synchronously updates the changed business data. For example, after the business data corresponding to customer level changes, the unified cache system can re-acquire the changed business data through its synchronization interface with depended service 1 and replace the original business data; depended service 1 re-provides business data conforming to the customer-level data object through the synchronization interface.
Optionally, to avoid overly frequent synchronous updates, and the waste of synchronization resources caused by small changes in business data, the unified cache system may perform synchronous data updates at fixed times or according to a time-period threshold. For example, if the business data provided by a depended service changes frequently, the unified cache system can synchronize at fixed intervals, such as once per hour, rather than on every change.
Alternatively, different data objects may use different data update periods. For example, payment method, which changes rarely, may use a longer update period, while customer level may use a shorter one. In embodiments of the present application, the update period may also be configured together with the data object, e.g. "customer level; synchronization policy: synchronize once every 24 hours".
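The per-object synchronization periods can be sketched as follows: each data object carries its own period, and a timed run re-fetches only those whose period has elapsed. The one-hour and 24-hour periods echo the examples; the hour-based clock and function names are assumptions.

```python
# Per-data-object synchronization periods, in hours (illustrative).
sync_period_hours = {"customer_level": 1, "payment_method": 24}
last_synced = {"customer_level": 0, "payment_method": 0}

def due_for_sync(data_object, now_hours):
    return now_hours - last_synced[data_object] >= sync_period_hours[data_object]

def run_sync(now_hours, fetch):
    """Re-fetch business data for every data object whose period elapsed."""
    updated = []
    for obj in sync_period_hours:
        if due_for_sync(obj, now_hours):
            fetch(obj)                  # pull fresh data from the depended service
            last_synced[obj] = now_hours
            updated.append(obj)
    return updated

# Two hours in: customer_level (1 h period) is due, payment_method (24 h) is not.
updated = run_sync(2, lambda obj: None)
```

Batching updates this way trades a bounded staleness window for far fewer synchronization calls.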
Optionally, when the unified cache system receives a newly added or modified data object, it can merge the new data object into similar processing, and replace the modified one. For example, a newly added data object "order information" may have data type enumeration, a value range including the [prepaid, monthly] of payment method plus the order number, and event ID order_information_update_event; that is, order information and payment method share part of their value range. When the order-information data object is added, the unified cache system can determine from the overlapping value ranges that the two are similar data objects and merge their event IDs: when the business data corresponding to payment method changes, the pushed event IDs include not only the payment method's event_method_update_event but also the order information's order_information_update_event. In other embodiments of the present application, the unified cache system can give the two data objects the same synchronization time-period threshold, so that during synchronous updates it refreshes the business data of payment method and order information at the same time, avoiding inconsistent values between them.
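A sketch of merging a newly added data object with an existing one that shares part of its value range: the unified cache system links their event IDs so that a change to either pushes both, as in the payment-method / order-information example. The grouping structure is an assumption; the event IDs follow the example text.

```python
# Existing data objects: name -> value range and event ID.
objects = {
    "payment_method": {"range": frozenset(["prepaid", "monthly"]),
                       "event_id": "event_method_update_event"},
}
linked_events = {}   # data object -> set of event IDs pushed together

def add_object(name, value_range, event_id):
    """Register a new data object, linking it to any with overlapping range."""
    new_range = frozenset(value_range)
    related = [n for n, o in objects.items() if o["range"] & new_range]
    objects[name] = {"range": new_range, "event_id": event_id}
    ids = {event_id} | {objects[n]["event_id"] for n in related}
    for n in related + [name]:
        linked_events[n] = ids          # a change to any pushes all linked IDs

add_object("order_information", ["prepaid", "monthly", "order_number"],
           "order_information_update_event")
```

With the link in place, a payment-method change pushes both event IDs, so consumers of order information are also notified.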
According to the embodiment of the present application, based on the configured data objects, the business data sent by the depended services is stored in the unified cache system and sent to the dependent services, so a dependent service no longer acquires data from the depended services one by one, and its data-acquisition efficiency improves. In addition, introducing hot data objects, asynchronous event management, synchronous data updates, data-object security levels and the like into the unified cache system improves the query efficiency, data accuracy and security reliability of the business data.
A block diagram of an apparatus according to an embodiment of the present application is described below in conjunction with fig. 5. It should be noted that the apparatus shown in fig. 5 may perform the method shown in fig. 3. It should be understood that the apparatus described below is capable of performing the method of the embodiments of the present application described above, and in order to avoid unnecessary repetition, the repeated description is appropriately omitted when introducing the apparatus of the embodiments of the present application.
Fig. 5 is a schematic diagram of a unified caching apparatus according to an embodiment of the present application, and the unified caching apparatus 500 shown in fig. 5 includes: an acquisition module 510, a reception module 520, and a transmission module 530.
Specifically, the obtaining module 510 is configured to obtain a first mapping relation, which is the correspondence between a plurality of data objects and a plurality of pieces of business data, each piece determined according to its corresponding data object; the plurality of business data is provided by one or more depended services, which are part or all of the at least one depended service.
Specifically, the receiving module 520 is configured to receive service request information of one or more dependent services, the service request information including the first data object, the one or more dependent services being part or all of the at least one dependent service.
Specifically, the sending module 530 is configured to determine, according to the first mapping relation, the first business data corresponding to the first data object, and send the first business data to the one or more dependent services, which perform business processing according to the first business data.
The specific functions and advantages of the obtaining module 510, the receiving module 520 and the sending module 530 are described in the above embodiments and, for brevity, are not repeated here.
The above modules can be implemented by software or by hardware. Illustratively, the implementation of the receiving module 520 is described next as an example; the obtaining module 510 and the sending module 530 can be implemented in the same way.
As an example of a software functional unit, the receiving module 520 may include code running on a computing instance. The computing instance may include at least one of a physical host (computing device), a virtual machine and a container, and there may be one or more computing instances. For example, the receiving module 520 may include code running on multiple hosts/virtual machines/containers. Note that the multiple hosts/virtual machines/containers running the code may be distributed in the same region or in different regions. Further, they may be distributed in the same availability zone (AZ) or in different AZs, each AZ comprising one data center or multiple geographically close data centers; typically one region comprises multiple AZs.
Also, the multiple hosts/virtual machines/containers running the code may be distributed in the same virtual private cloud (VPC) or across multiple VPCs. In general, one VPC sits within one region, and a communication gateway is set up in each VPC to interconnect VPCs within the same region and across different regions.
As an example of a hardware functional unit, the receiving module 520 may include at least one computing device, such as a server. Alternatively, the receiving module 520 may be a device implemented with an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or the like. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The multiple computing devices included in the receiving module 520 may be distributed in the same region or may be distributed in different regions. The plurality of computing devices included in the receiving module 520 may be distributed in the same AZ or may be distributed in different AZ. Also, the plurality of computing devices included in the receiving module 520 may be distributed in the same VPC or may be distributed in a plurality of VPCs. The multiple computing devices included in the receiving module 520 may be any combination of computing devices such as servers, ASIC, PLD, CPLD, FPGA, and GAL.
The present application also provides a computing device 600. As shown in fig. 6, the computing device 600 includes: bus 602, processor 604, memory 606, and communication interface 608. The processor 604, the memory 606, and the communication interface 608 communicate via the bus 602. Computing device 600 may be a server or a terminal device. It should be understood that the present application does not limit the number of processors or memories in computing device 600.
Bus 602 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, only one line is shown in fig. 6, but this does not mean there is only one bus or one type of bus. Bus 602 may include a path for transferring information between the components of computing device 600 (e.g., memory 606, processor 604, communication interface 608).
The processor 604 may include any one or more of a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), or a digital signal processor (DSP).
The memory 606 may include volatile memory, such as random access memory (RAM). The memory 606 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
The memory 606 stores executable program code, and the processor 604 executes the executable program code to implement the functions of the obtaining module 510, the receiving module 520 and the sending module 530, respectively, thereby implementing the data caching method described above. That is, the memory 606 stores instructions for performing the method of data caching described above.
Communication interface 608 enables communication between computing device 600 and other devices or communication networks using transceiver modules such as, but not limited to, network interface cards, transceivers, and the like.
The embodiment of the application also provides a computing device cluster. The computing device cluster includes at least two computing devices. The computing device may be a server, such as a central server, an edge server, or a local server in a local data center. In some embodiments, the computing device may also be a terminal device such as a desktop, notebook, or smart phone.
As shown in fig. 7, the computing device cluster includes at least two computing devices 600. The same instructions for performing the above-described method of data caching may be stored in the memory 606 in the plurality of computing devices 600 in the computing device cluster.
In some possible implementations, the memories 606 of the multiple computing devices 600 in the computing device cluster may also each store a portion of the instructions for performing the above-described method of data caching. In other words, a combination of one or more computing devices 600 may collectively execute the instructions for performing the above-described method of data caching.
It should be noted that, the memory 606 in different computing devices 600 in the computing device cluster may store different instructions for performing part of the functions of the foregoing apparatus, respectively. That is, the instructions stored by the memory 606 in the different computing devices 600 may implement the functionality of one or more of the acquisition module, the reception module, and the transmission module.
In some possible implementations, multiple computing devices in a cluster of computing devices may be connected through a network, which may be a wide area network, a local area network, or the like. Fig. 8 shows one possible implementation. As shown in fig. 8, two computing devices 600A and 600B are connected by a network; specifically, the connection goes through the communication interface in each computing device. In this possible implementation, instructions for the functions of the obtaining module are stored in the memory 606 of computing device 600A, while instructions for the functions of the receiving module and the sending module are stored in the memory 606 of computing device 600B.
It should be appreciated that the functionality of computing device 600A shown in fig. 8 may also be performed by multiple computing devices 600. Likewise, the functionality of computing device 600B may also be performed by multiple computing devices 600.
The embodiment of the application also provides a chip, which comprises a processor and a data interface, wherein the processor reads instructions stored in a memory through the data interface so as to execute the data caching method.
Embodiments of the present application also provide a computer program product comprising instructions. The computer program product may be software, or a program product containing instructions, that can run on a computing device or be stored in any usable medium. When the computer program product runs on at least one computing device, it causes the at least one computing device to perform the method of data caching described above.
Embodiments of the present application also provide a computer-readable storage medium. The computer-readable storage medium may be any available medium accessible by a computing device, or a data storage device, such as a data center, containing one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state drive), among others. The computer-readable storage medium includes instructions that instruct a computing device to perform the method of data caching described above.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered within the scope of this description.
The above embodiments are only for illustrating the technical solutions of the present application, not for limiting them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features replaced by equivalents, and that such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and modules described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules is merely a division by logical function, and other divisions are possible in actual implementation; for instance, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network nodes. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solution of a given embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the part of it that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. A method of data caching, the method being applied to a micro-service architecture comprising at least one first micro-service, at least one second micro-service, and a unified caching system, the at least one first micro-service providing service data for service processing by the at least one second micro-service, the method comprising:
the unified cache system acquires a first mapping relation, wherein the first mapping relation is a corresponding relation between a plurality of data objects and a plurality of service data, each service data is determined according to the corresponding data object, the plurality of service data is provided by one or more first micro services, and the one or more first micro services are part or all of the at least one first micro service;
the unified cache system receives service request information of one or more second micro services, wherein the service request information comprises a first data object, the first data object is part or all of the plurality of data objects, and the one or more second micro services are part or all of the at least one second micro service;
the unified cache system determines first service data corresponding to the first data object according to the first mapping relation;
the unified cache system sends the first service data to the one or more second micro services, and the one or more second micro services are used for performing service processing according to the first service data.
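The lookup flow recited in claim 1 can be illustrated with a minimal sketch: the unified cache holds the first mapping relation from data objects to service data supplied by first micro-services, and answers requests from second micro-services. All class and method names below are illustrative assumptions; the application does not prescribe an API.

```python
class UnifiedCacheSystem:
    """Hypothetical sketch of the claim-1 flow; not the patent's implementation."""

    def __init__(self):
        # First mapping relation: data object -> service data.
        self.mapping = {}

    def load_mapping(self, entries):
        """Acquire the first mapping relation (data provided by first micro-services)."""
        self.mapping.update(entries)

    def handle_request(self, data_objects):
        """Serve a second micro-service's request: return the service data
        corresponding to each requested data object in the mapping."""
        return {obj: self.mapping[obj] for obj in data_objects if obj in self.mapping}

# Usage: a first micro-service publishes data, a second micro-service requests it.
cache = UnifiedCacheSystem()
cache.load_mapping({"user:1001": {"name": "alice", "tier": "gold"}})
result = cache.handle_request(["user:1001"])
```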
2. The method according to claim 1, wherein the method further comprises:
the unified cache system acquires a security level value corresponding to the first data object;
and the unified cache system takes security measures for the first service data according to the security level value corresponding to the first data object.
3. The method of claim 2, wherein the security level value of the first data object is greater than or equal to a security level threshold, and wherein taking security measures on the first service data comprises at least one of:
the unified cache system encrypts and stores the first service data;
and under the condition that the unified cache system sends the first service data to the one or more second micro services, the unified cache system sets authentication measures for the first service data.
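The security measures of claims 2 and 3 can be sketched as follows, assuming a numeric security level compared against a threshold. The threshold value is a placeholder, and the base64 step merely marks where a real cipher would sit; none of this is the application's mechanism.

```python
import base64

SECURITY_LEVEL_THRESHOLD = 5  # hypothetical value; the application leaves it open

def encrypt(plaintext: str) -> str:
    # Stand-in for a real cipher: base64 is NOT encryption, it only marks
    # where an encryption step would occur in a real deployment.
    return base64.b64encode(plaintext.encode()).decode()

def store(cache: dict, data_object: str, service_data: str, level: int) -> None:
    """Claim-2/3 sketch: if the object's security level meets the threshold,
    store the data encrypted and flag that an authentication check is
    required before the data may be sent to a second micro-service."""
    if level >= SECURITY_LEVEL_THRESHOLD:
        cache[data_object] = {"data": encrypt(service_data), "auth_required": True}
    else:
        cache[data_object] = {"data": service_data, "auth_required": False}

cache = {}
store(cache, "order:42", "secret-payload", level=7)  # above threshold
store(cache, "banner:1", "public-text", level=1)     # below threshold
```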
4. A method according to any one of claims 1 to 3, further comprising:
and in a case where the first service data changes, the unified cache system generates change notification information, wherein the change notification information is used to indicate that the first service data has changed to second service data.
5. The method according to claim 4, wherein the method further comprises:
the unified cache system adopts a data synchronization measure for modifying the first service data stored in the unified cache system into second service data.
6. The method of claim 5, wherein the execution period of the data synchronization measure is determined based on a target time threshold.
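Claims 4 to 6 can be read together as a notify-then-sync loop, sketched below. The change notification is queued, and a synchronization pass (whose period would be derived from the target time threshold) rewrites the cached first service data to the second service data. The class and the counter-based scheduling are illustrative assumptions.

```python
class SyncingCache:
    """Hypothetical claim-4/5/6 sketch: change notification plus periodic sync."""

    def __init__(self, target_time_threshold: float):
        self.store = {}      # cached service data
        self.pending = {}    # data object -> new (second) service data
        # Sync period derived from the target time threshold (claim 6);
        # a real system would drive sync_once() from a timer at this period.
        self.period = target_time_threshold

    def notify_change(self, data_object, new_data):
        """Generate change notification information: first -> second service data."""
        self.pending[data_object] = new_data

    def sync_once(self):
        """One execution of the data synchronization measure."""
        self.store.update(self.pending)
        self.pending.clear()

cache = SyncingCache(target_time_threshold=1.0)
cache.store["cfg:a"] = "v1"          # first service data
cache.notify_change("cfg:a", "v2")   # changed to second service data
cache.sync_once()                    # cached copy now matches the source
```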
7. The method according to any one of claims 1 to 6, further comprising:
the unified cache system determines that the frequency with which the one or more second micro services use the first service data during service processing is higher than a target frequency threshold;
the unified cache system sends the first service data to the local caches of the one or more second micro services.
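The hot-data promotion of claim 7 can be sketched as follows: the unified cache counts how often each data object is requested and, once the count passes a target frequency threshold, pushes the data into the requesting micro-service's local cache. The threshold value and the simple hit counter are illustrative assumptions, not the application's mechanism.

```python
TARGET_FREQUENCY_THRESHOLD = 3  # hypothetical value

class HotDataCache:
    """Hypothetical claim-7 sketch: push frequently used data to local caches."""

    def __init__(self):
        self.store = {}  # unified cache contents
        self.hits = {}   # per-object request counts (stand-in for frequency)

    def get(self, data_object, local_cache: dict):
        # Serve from the micro-service's local cache once the data was pushed.
        if data_object in local_cache:
            return local_cache[data_object]
        self.hits[data_object] = self.hits.get(data_object, 0) + 1
        value = self.store[data_object]
        # Push hot data into the local cache when it crosses the threshold.
        if self.hits[data_object] >= TARGET_FREQUENCY_THRESHOLD:
            local_cache[data_object] = value
        return value

unified = HotDataCache()
unified.store["sku:7"] = "widget"
local = {}
for _ in range(4):  # the 4th request is served from the local cache
    unified.get("sku:7", local)
```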
8. A unified caching apparatus, the unified caching apparatus being applied to a micro-service architecture, the micro-service architecture further comprising at least one first micro-service and at least one second micro-service, the at least one first micro-service providing service data for service processing by the at least one second micro-service, the unified caching apparatus comprising:
an acquisition module for: acquiring a first mapping relation, wherein the first mapping relation is a corresponding relation between a plurality of data objects and a plurality of service data, each service data is determined according to the corresponding data object, the plurality of service data is provided by one or more first micro services, and the one or more first micro services are part or all of the at least one first micro service;
a receiving module for: receiving service request information of one or more second micro-services, wherein the service request information comprises a first data object, the first data object is part or all of the plurality of data objects, and the one or more second micro-services are part or all of the at least one second micro-service;
a sending module, configured to: determine first service data corresponding to the first data object according to the first mapping relation, and send the first service data to the one or more second micro services, wherein the one or more second micro services are configured to perform service processing according to the first service data.
9. The unified caching apparatus of claim 8, further comprising a processing module,
the acquisition module is specifically configured to acquire a security level value corresponding to the first data object;
the processing module is used for taking security measures for the first service data according to the security level value corresponding to the first data object.
10. The unified caching apparatus of claim 9, wherein the security level value of the first data object is greater than or equal to a security level threshold, the processing module being specifically configured to perform at least one of the following:
encrypting and storing the first service data;
and setting authentication measures for the first service data under the condition that the sending module sends the first service data to the one or more second micro services.
11. The unified caching apparatus according to any one of claims 8 to 10, further comprising a generating module configured to generate, in a case where the first service data changes, change notification information for indicating that the first service data has changed to the second service data.
12. The unified caching apparatus of claim 11, wherein the processing module is specifically configured to take a data synchronization measure to update the stored first service data to the second service data.
13. The unified caching apparatus of claim 12, wherein the execution period of the data synchronization measure is determined based on a target time threshold.
14. The unified caching apparatus according to any one of claims 8 to 13, wherein the processing module is specifically configured to determine that the frequency with which the one or more second micro services use the first service data during service processing is higher than a target frequency threshold; and
the sending module is specifically configured to send the first service data to a local cache of the one or more second micro services.
15. A computing device comprising a processor and a memory, the processor configured to execute instructions stored in the memory, to cause the computing device to perform the method of any of claims 1-7.
16. A chip, comprising a processor and a data interface, the processor being connected with the data interface, wherein the processor reads, through the data interface, instructions stored on a memory, causing the chip to perform the method of any one of claims 1 to 7.
17. A computer program product containing instructions that, when executed by a computing device, cause the computing device to perform the method of any of claims 1 to 7.
18. A computer readable medium comprising computer program instructions which, when run on a computing device, cause the computing device to perform the method of any of claims 1 to 7.
CN202310376856.8A 2023-03-31 2023-03-31 Data caching method and unified caching device Pending CN116483746A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310376856.8A CN116483746A (en) 2023-03-31 2023-03-31 Data caching method and unified caching device


Publications (1)

Publication Number Publication Date
CN116483746A (en) 2023-07-25

Family

ID=87222475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310376856.8A Pending CN116483746A (en) 2023-03-31 2023-03-31 Data caching method and unified caching device

Country Status (1)

Country Link
CN (1) CN116483746A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116684475A (en) * 2023-08-01 2023-09-01 中海油信息科技有限公司 Full-flow data flow control system and method based on micro-service
CN116684475B (en) * 2023-08-01 2023-10-24 中海油信息科技有限公司 Full-flow data flow control system and method based on micro-service
CN117170889A (en) * 2023-11-01 2023-12-05 沐曦集成电路(上海)有限公司 Heterogeneous non-blocking data packet synchronous processing system
CN117170889B (en) * 2023-11-01 2024-01-23 沐曦集成电路(上海)有限公司 Heterogeneous non-blocking data packet synchronous processing system
CN118018596A (en) * 2024-03-13 2024-05-10 证通股份有限公司 Method, component, storage medium and program product for API selection of micro-services

Similar Documents

Publication Publication Date Title
JP5516821B2 (en) System and method for remote maintenance of multiple clients in an electronic network using virtualization and authentication
CN116483746A (en) Data caching method and unified caching device
US11038690B2 (en) Policy-driven dynamic consensus protocol selection
CN105981331B (en) Entity handling registry for supporting traffic policy enforcement
US8943319B2 (en) Managing security for computer services
US9942203B2 (en) Enhanced security when sending asynchronous messages
CN106101258A (en) A kind of interface interchange method of mixed cloud, Apparatus and system
US10534631B2 (en) Scalable policy assignment in an edge virtual bridging (EVB) environment
US10686765B2 (en) Data access levels
AU2019356039B2 (en) Local mapped accounts in virtual desktops
US9471351B2 (en) Scalable policy management in an edge virtual bridging (EVB) environment
US11354300B2 (en) Mobile auditable and tamper-resistant digital-system usage tracking and analytics
JP2024505692A (en) Data processing methods, devices and computer equipment based on blockchain networks
CN109964507A (en) Management method, administrative unit and the system of network function
US20210281555A1 (en) Api key access authorization
US10182121B2 (en) Cookie based session timeout detection and management
US20200134606A1 (en) Asset management in asset-based blockchain system
WO2022105617A1 (en) Private key management
WO2022144643A1 (en) Secure memory sharing
US10284563B2 (en) Transparent asynchronous network flow information exchange
US20200045029A1 (en) Secure sharing of peering connection parameters between cloud providers and network providers
CN114785612B (en) Cloud platform management method, device, equipment and medium
WO2022078069A1 (en) Secure data storage device access control and sharing
CN114785612A (en) Cloud platform management method, device, equipment and medium
CA3172917A1 (en) Tiered application pattern

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination