CN116634017B - Identification analysis data caching method and device based on digital object
- Publication number: CN116634017B (application CN202310542724.8A)
- Authority: CN (China)
- Prior art keywords: node, identification, data, analysis, cache
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/22—Indexing; Data structures therefor; Storage structures
- G06F16/24552—Database cache management
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
- H04L67/141—Setup of application sessions
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
- H04L67/63—Routing a service request depending on the request content or context
- H04L9/40—Network security protocols
- Y02D30/50—Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate
Abstract
The application relates to a digital object-based identification analysis data caching method and device. The method comprises the following steps: setting up an identification analysis system with a tree hierarchy structure, comprising end nodes, domain nodes and root nodes; according to the state information of a data resource uploaded by a provider, allocating a network-wide unique identification to the data resource at an end node; caching the state information in the end node as the analysis data corresponding to the identification; when a user sends an analysis request for the identification, returning the analysis data to the user along the path of the analysis request from the node in which the analysis data is cached, and gradually caching the analysis data in nodes closer to the user along the return path, until the analysis data is cached in the end node closest to the user. The method avoids repeatedly analyzing the same identification in large volume and shortens the routing length of analysis requests, thereby reducing user access delay and saving bandwidth resources, and it disperses the cache to avoid network congestion.
Description
Technical Field
The present invention relates to the field of data storage, and in particular, to a method and apparatus for caching identification resolution data based on a digital object.
Background
The Digital Object Architecture (DOA) is a technology and standards architecture. It includes one basic data model, two standard protocols, and three core subsystems: the registry, the repository, and the identification analysis system. In the digital object architecture, the heterogeneous data of different systems in the existing Internet is uniformly modeled and abstracted into digital objects, which serve as the basic elements of the digital object system. A digital object consists of three parts: an identification, metadata and a data entity. The identification is the unique identity of the digital object; the metadata carries descriptive information about the digital object and its application, classification and business context, and is used to retrieve the desired digital object according to application requirements; the data entity represents the actual content of the data resource.
In the digital object architecture, a user cannot directly access the entity of a digital object; instead, the user must obtain the status information of the digital object from its manager through the network-wide unique identification of the digital object, and then access the entity of the digital object through that status information. Without caching, when the identification of a digital object is repeatedly requested for analysis in large volume, serious waste of network resources and high user access delay result.
The existing identification analysis system mainly sets up dedicated cache nodes to centrally manage the identification analysis data in the network. However, when a large amount of analysis data needs to be cached, a cache node receives a large number of data packets, and when many users initiate identification analysis requests, the cache node receives a large number of request messages, so the network load on the analysis node becomes huge and congestion or even downtime easily occurs. Therefore, a caching method for identification analysis data is needed that can avoid network congestion, reduce user access delay, and reduce the waste of network resources.
Disclosure of Invention
In view of this, the present application aims to provide a method and an apparatus for caching identification analysis data based on a digital object, so as to solve the problems of network congestion, serious bandwidth waste and high user access delay caused by a large number of users accessing digital objects.
In order to achieve the above purpose, the technical scheme of the application is as follows:
an embodiment of the present application provides a method for caching identification resolution data based on a digital object, where the method includes:
setting an identification analysis system of a tree hierarchy structure, wherein the identification analysis system comprises an end node, a domain node and a root node; the root node directly communicates with a subordinate domain node; the domain node is directly communicated with the upper-level root node and is also directly communicated with the subordinate end node; the end node manages the identification of the digital object and communicates with the user directly;
according to the state information of a data resource uploaded by the provider, allocating a network-wide unique identification to the data resource in the end node, as the identification to which the analysis data corresponds;
the analysis data is the state information obtained by analyzing the identification by the user; the status information includes: the storage location, access mode, owner, timestamp and access related information of the body data of the digital object; the identification is a self-determined identification or a randomly generated identification of the provider;
binding the identification, serving as a field in the state information, with the state information to generate a key-value pair, and persistently storing the key-value pair into a database;
the state information is cached in the end node as the analysis data corresponding to the identification; when a user sends an analysis request for the identification for the first time, routing the analysis request until the cache is hit at the end node, obtaining the analysis data from the cache of the node that hit the cache, and returning the analysis data to the user along the path of the analysis request; caching the analysis data in the next-hop node, on the return path, of the node that currently hit the cache;
when the user sends the analysis request for the identification again, routing the analysis request to the next-hop node where the cache is hit, returning the analysis data from the node that hit the cache to the user, and caching the analysis data in the next-hop node, on the return path, of the node that currently hit the cache;
and caching the analysis data in turn in the nodes on the return path with each analysis request sent by the user, until the analysis data is cached in the end node closest to the user.
Optionally, the method for caching identification analysis data based on the digital object further comprises the following steps:
and deleting the analytic data in the cache of the node of the current hit cache when the analytic data is cached in the next hop node of the current hit cache.
Optionally, the method for caching identification analysis data based on the digital object further comprises the following steps:
and when the existence time of the analysis data exceeds the set survival time in the cache of any node, acquiring the latest analysis data corresponding to the identification from the end node again, and updating the cache of the node.
Optionally, the method for caching identification analysis data based on the digital object further comprises the following steps:
in the cache of any node, sorting all the analysis data according to the timestamps in the analysis data; the timestamp represents the time when the analysis data was most recently accessed;
when the cache capacity of the node is full, replacing the analysis data with the earliest timestamp in the current cache with the analysis data newly added to the cache.
Optionally, allocating a unique identifier of the whole network to the data resource in the end node, as corresponding information of the analysis data, including:
judging whether the state information contains a user self-defined identifier or not, if not, randomly generating an identifier, and judging whether the identifier is used by other digital objects or not;
if the identification has been used by other digital objects, then a new identification is randomly generated;
assigning the identification to the digital object if the identification is not used by other digital objects;
if the state information contains a user self-defined identification, judging whether the user self-defined identification has been used by other digital objects, and if it has been used by other digital objects, allocating a randomly generated identification to the user;
if the user self-defined identification has not been used by other digital objects, using the user self-defined identification as the identification of the digital object.
Optionally, the method for caching identification analysis data based on the digital object further comprises the following steps:
generating a hierarchical structure identifier for the data resource, wherein the hierarchical structure identifier is used for distinguishing different hosting institutions and the data resource, so that the management is convenient; the identification includes: identifying a prefix, identifying a suffix, and a separator;
setting user-defined or randomly generated UTF-8 codes as coding formats of the identification prefix and the identification suffix, and realizing identification multiplexing by being compatible with different identification systems;
user-defined or system default separators are set for compatibility with user-personalized requirements.
According to a second aspect of embodiments of the present application, there is provided an identification resolution data caching apparatus based on a digital object, the apparatus comprising:
the analysis route construction module is configured to set an identification analysis system of a tree hierarchy structure and comprises an end node, a domain node and a root node; the root node directly communicates with a subordinate domain node; the domain node is directly communicated with the upper-level root node and is also directly communicated with the subordinate end node; the end node manages the identification of the digital object and communicates with the user directly;
the identifier allocation module is configured to allocate a unique identifier of the whole network for the data resource in the end node according to the state information of the data resource uploaded by the provider, and the unique identifier is used as corresponding information of the analysis data;
The analysis data is the state information obtained by analyzing the identification by the user; the status information includes: the storage location, access mode, owner, timestamp and access related information of the body data of the digital object; the identification is a self-determined identification or a randomly generated identification of the provider;
a persistence module configured to bind the identifier with the state information as a field in the state information to generate a key value pair, and persist the key value pair to a database;
a caching module configured to cache the state information in the end node as parsing data corresponding to the identifier;
when a user sends the identified analysis request for the first time, routing the analysis request to a hit cache at the end node, acquiring the analysis data from the cache of the hit cache node, and returning the analysis data to the user according to the path of the analysis request; caching the analysis data in a next hop node of the node currently hit in the cache on a return path;
when the user sends the identified analysis request again, the analysis request is routed to the next-hop node to hit the cache, the analysis data is returned from the node hit the cache to the user, and the analysis data is cached to the next-hop node of the node hit the cache currently on the return path;
And sequentially caching the analysis data in the nodes on the return path according to the analysis request sent by the user each time until the analysis data is cached in the end node closest to the user.
Optionally, the caching module is further configured to delete the resolved data in the cache of the node of the current hit cache when the resolved data is cached in a next-hop node of the current hit cache.
Optionally, the digital object based identification parsing data caching apparatus further includes:
and the cache updating module is configured to re-acquire the latest analysis data corresponding to the identification from the end node and update the cache of the node when the existence time of the analysis data exceeds the set survival time in the cache of any node.
Optionally, the digital object based identification parsing data caching apparatus further includes:
the buffer replacement module is configured to sort all the analysis data according to the time stamps in the analysis data in the buffer of any node; the timestamp represents a time when the parsed data was recently accessed; when the buffer capacity of the node is full, replacing the analysis data with the earliest time stamp in the current buffer with the analysis data newly added into the buffer.
According to the identification analysis data caching method, when an end node allocates an identification to a digital object, the allocated network-wide unique identification is bound with the corresponding state information and persisted into the database as the original data, and the state information is cached in the end node as analysis data, so that when a user requests analysis of the digital object identification, the state information can be extracted directly from the cache and returned to the user, without repeatedly analyzing the identification each time, which saves bandwidth resources and reduces user access delay. Furthermore, each time the analysis data is returned, it is cached in the next-hop node on the return path that is closer to the user, which shortens the routing length of the user's next identification analysis request, further reducing user access delay and saving bandwidth resources. Meanwhile, the method caches the analysis data of digital object identifications dispersedly across multiple nodes, thereby avoiding the increased node load, higher network delay and even network congestion that arise when a fixed cache node is used.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for digital object based identification resolution data caching according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a digital object based identification resolution data caching apparatus according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a method for caching identification resolution data according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another method for caching identification resolution data according to an embodiment of the present application;
FIG. 5 is a flow chart illustrating cache replacement in a node according to an embodiment of the present application;
fig. 6 is a schematic diagram of an identification code structure according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present application, it should be understood that the sequence numbers of the following processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a flowchart of a digital object-based identification analysis data caching method according to an embodiment of the present application. As shown in fig. 1, the method includes:
s11: setting an identification analysis system of a tree hierarchy structure, wherein the identification analysis system comprises an end node, a domain node and a root node; the root node directly communicates with a subordinate domain node; the domain node is directly communicated with the upper-level root node and is also directly communicated with the subordinate end node; the end node manages the identification of the digital object and communicates directly with the user.
In this embodiment, the process of identity resolution is performed in an identity resolution system of a tree hierarchy, where the system includes a plurality of root nodes, a plurality of domain nodes, and a plurality of end nodes.
The root node is located at the top level of the tree hierarchy and manages the domain nodes subordinate to it, for example: the addition, access and deletion of domain nodes. Each root node is also responsible for maintaining the node information of its subordinate domain nodes, such as: the prefix identification of the node, the IP address of the node, the service address of the node, the tcp port of the node, the http port of the node, the identification amount of the node, the analysis amount of the node, the number of users of the node, and so on.
The domain nodes are located in the middle of the tree hierarchy and are subordinate nodes of the root node. Multiple levels of domain nodes may exist in the identification analysis system, with a subordinate domain node managed by its upper-level domain node. For example, a primary domain node manages the secondary domain nodes subordinate to it, and a secondary domain node manages the tertiary domain nodes subordinate to it. For example, "universality.pku" is the prefix identification of a primary domain node under the root node whose prefix identification is "universality", and "universality.pku.s1" is the prefix identification of a secondary domain node under the primary domain node whose prefix identification is "universality.pku". The lowest-level domain node is responsible for managing its subordinate end nodes, including: addition of end nodes, access of end nodes, deletion of end nodes, and so on; meanwhile, the lowest-level domain node is also responsible for maintaining the information of its subordinate end nodes, including: the prefix identification of the end node, the IP address of the end node, the service address of the end node, the tcp port of the end node, the http port of the end node, the identification amount of the end node, the analysis amount of the end node, the user amount of the end node, and so on.
The end node is located at the bottom layer of the tree hierarchy and is assigned a prefix identification by the upper-level domain node to which it belongs, in the format "root node identification.primary domain node identification.secondary domain node identification...". The end node directly manages the identifications of digital objects and is responsible for the allocation, deletion, analysis, update, etc. of digital object identifications. The end node needs to synchronize the data related to the digital object identifications it manages, such as the registration amount and analysis amount of the identifications, to its upper-level domain node.
It should be noted that the primary domain nodes are managed by their upper-level root nodes, and the root nodes synchronize with one another the IP addresses and node prefix identifications of their respective subordinate primary domain nodes; that is, every root node holds the IP address and node prefix identification information of all primary domain nodes in the identification analysis system. In the tree hierarchy, each domain node is managed by only one root node or upper-level domain node, and each end node is managed by only one domain node.
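As an illustration of the node information listed above, the sketch below shows one way such a record could be represented; the class and field names are assumptions for illustration and do not come from the patent.

```python
from dataclasses import dataclass

@dataclass
class SubordinateNodeInfo:
    """Record a parent node might keep for each node it manages (illustrative)."""
    prefix_identification: str   # e.g. a domain node prefix such as "pku.s1"
    ip_address: str
    service_address: str
    tcp_port: int
    http_port: int
    identification_amount: int   # number of identifications managed by the node
    analysis_amount: int         # analysis volume served by the node
    user_amount: int             # number of users of the node
```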
In this embodiment, the data resources of different information systems are modeled in a unified abstract manner as digital objects by a digital object architecture. For example, video files, audio files, and text files of different formats are abstractly modeled as a unified format digital object, which is composed of identification, state information, and data entities. When a user accesses a data entity of a digital object, the user needs to analyze through an identifier of the digital object to obtain an analysis result of the digital object (i.e. state information corresponding to the identifier), where the state information includes: storage location of the digital object, access mode, owner, time stamp, and access related information. After the user obtains the analysis data, the entity data of the digital object can be accessed according to the storage position and the access mode of the digital object.
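The following is a minimal sketch, under assumed names, of the state information returned by analysis and of how a client could use it to reach the entity data; the `resolve` and `fetch` callables stand in for the identification analysis system and the storage access layer.

```python
from dataclasses import dataclass, field
import time

@dataclass
class StateInfo:
    """Analysis data returned when an identification is analyzed (illustrative)."""
    storage_location: str        # where the entity data of the digital object lives
    access_mode: str             # how the entity data can be accessed
    owner: str                   # provider of the data resource
    timestamp: float = field(default_factory=time.time)   # last-access time
    access_info: dict = field(default_factory=dict)        # other access-related fields

def access_entity(identifier: str, resolve, fetch):
    """Analyze an identification, then use the state information to reach the entity."""
    state_info: StateInfo = resolve(identifier)   # identification -> analysis data
    return fetch(state_info.storage_location, state_info.access_mode)
```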
S12: according to the state information of the data resources uploaded by the provider, distributing unique identifiers of the whole network for the data resources in the end nodes to be used as corresponding information of the analysis data;
the analysis data is the state information obtained by analyzing the identification by the user; the status information includes: the storage location, access mode, owner, timestamp and access related information of the body data of the digital object; the identification is a self-determined identification or a randomly generated identification of the provider;
s121: judging whether the state information contains a user self-defined identifier or not, if not, randomly generating an identifier, and judging whether the identifier is used by other digital objects or not;
s122: if the identification has been used by other digital objects, then a new identification is randomly generated;
s123: assigning the identification to the digital object if the identification is not used by other digital objects;
S124: if the state information contains a user self-defined identification, judging whether the user self-defined identification has been used by other digital objects, and if it has been used by other digital objects, allocating a randomly generated identification to the user;
S125: if the user self-defined identification has not been used by other digital objects, using the user self-defined identification as the identification of the digital object.
In this embodiment, the digital object provider may upload the status information of different types of data resources through the client, and after the system authenticates the user, it generates and allocates a network-wide unique identification to the data resource, thereby obtaining a complete digital object. After the identification is allocated, it is associated with the state information and cached, as analysis data, in the end node that manages the identification, so that when a user requests analysis of the digital object identification the result can be returned without re-analyzing the identification each time, which reduces user access delay and saves bandwidth resources.
Specifically, the step of the end node assigning an identifier to the state information of the digital object is as follows:
firstly, the end node performs identity verification on a user, and only after the identity verification is passed, the end node performs subsequent steps;
after the authentication is successful, the user inputs the state information of the data resource to be allocated with the identifier at the client and submits the state information to the identifier analysis system;
the identification management module judges whether the state information submitted by the user contains a user self-defined identification. In this embodiment, the identification of the digital object may be a user self-defined identification or an identification randomly generated by the system.
When the state information contains the user self-defined identification, further judging whether the user self-defined identification is used by other digital objects, if so, randomly generating a unique identification of the whole network by an identification management module and distributing the unique identification to the data resource;
if the status information does not contain the user self-defined identification, the identification management module randomly generates the unique identification of the whole network and distributes the unique identification to the data resource. Likewise, the randomly generated identifications of the identification management module also need to determine whether they have been used by other digital objects, and if so, need to regenerate identifications not used by other digital objects, thereby ensuring that the identifications assigned to the data resources are network-wide unique.
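A minimal sketch of the allocation logic just described, assuming a `used` set that stands in for the check against identifications already used by other digital objects; the function and field names are illustrative.

```python
import uuid

def allocate_identification(state_info: dict, used: set) -> str:
    """Allocate a network-wide unique identification to a data resource."""
    custom = state_info.get("custom_identification")
    if custom and custom not in used:
        used.add(custom)
        return custom                      # the user self-defined identification is free to use
    # No usable self-defined identification: generate random ones until unique.
    while True:
        candidate = str(uuid.uuid4())
        if candidate not in used:
            used.add(candidate)
            return candidate
```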
S13: binding the identification, serving as a field in the state information, with the state information to generate a key-value pair, and persistently storing the key-value pair into a database.
In this embodiment, after the network-wide unique identification is generated, it is written as a field into the status information of the data resource uploaded by the user and bound with that status information to form a key-value pair, which serves as the original data for analyzing the identification and is stored persistently in a database. When a user subsequently requests analysis of the identification, or when the analysis data of the identification is cached, the key-value pair data stored in the database is the basis for caching the analysis data.
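The key-value binding and persistence step could look like the sketch below, using SQLite and JSON purely as stand-ins for whatever database the system actually uses; the table, column and identification strings are assumptions for illustration.

```python
import json
import sqlite3

def persist_resolution_record(db: sqlite3.Connection, identification: str, state_info: dict) -> None:
    """Bind the identification to its status information and persist the key-value pair."""
    record = dict(state_info, identification=identification)   # identification written in as a field
    db.execute(
        "CREATE TABLE IF NOT EXISTS resolution (identification TEXT PRIMARY KEY, state_info TEXT)"
    )
    db.execute(
        "INSERT OR REPLACE INTO resolution VALUES (?, ?)",
        (identification, json.dumps(record)),
    )
    db.commit()

# Usage sketch (hypothetical values):
# db = sqlite3.connect("resolution.db")
# persist_resolution_record(db, "pku.s2.d2/abc", {"storage_location": "...", "owner": "..."})
```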
S14: the state information is cached in the end node as the analysis data corresponding to the identification; when a user sends an analysis request for the identification for the first time, routing the analysis request until the cache is hit at the end node, obtaining the analysis data from the cache of the node that hit the cache, and returning the analysis data to the user along the path of the analysis request; caching the analysis data in the next-hop node, on the return path, of the node that currently hit the cache;
S15: when the user sends the analysis request for the identification again, routing the analysis request to the next-hop node where the cache is hit, returning the analysis data from the node that hit the cache to the user, and caching the analysis data in the next-hop node, on the return path, of the node that currently hit the cache;
S16: caching the analysis data in turn in the nodes on the return path with each analysis request sent by the user, until the analysis data is cached in the end node closest to the user.
In this embodiment, when the user initiates the analysis request for the identification of the digital object again, the system routes the analysis request to the node in which the corresponding analysis data is cached, retrieves the corresponding state information directly from that node's cache, and returns it to the user along the path of the analysis request, without analyzing the identification again, thereby reducing user access delay and saving bandwidth resources. Furthermore, as the user requests analysis of the identification more times, the analysis data is gradually cached in nodes closer to the user on the path, which further shortens the routing length of analysis requests and analysis data and reduces user access delay.
Fig. 3 is a schematic diagram of a caching method for identifying resolved data according to an embodiment of the present application. As shown in fig. 3, in the case that the end node with the prefix of "pku.s2.d2" caches the parsed data, the user sends the identifier parsing request of the digital object with the prefix of "pku.s2.d2", and the caching process of the parsed data in the system is as follows:
(1) A user sends an identification analysis request with a prefix of "pku.s2.d2" to an end node with the prefix of "pku.s1.d1" through a client;
(2) The end node with the prefix of "pku.s1.d1" does not store the identification information with the prefix of "pku.s2.d2" and the cache cannot hit, and forwards the request to the upper domain node with the prefix of "pku.s1";
(3) The domain node with the prefix of "pku.s1" does not store the identification information with the prefix of "pku.s2.d2" and the cache cannot hit, and forwards the request to the upper-level root node with the prefix of "pku";
(4) The root node with prefix identification of "pku" does not store the identification information with prefix of "pku.s2.d2" and the cache cannot hit, forwarding the request to the lower domain node with prefix identification of "pku.s2";
(5) The domain node with the prefix of "pku.s2" does not store the identification information with the prefix of "pku.s2.d2" and the cache cannot hit, and forwards the request to the lower-level end node with the prefix of "pku.s2.d2";
(6) The end node with the prefix marked as "pku.s2.d2" caches hit, and the analysis data is returned to the domain node with the prefix marked as "pku.s2";
(7) According to the LCD cache placement strategy, the domain node with the prefix mark of "pku.s2" caches the resolved data, and then returns the resolved data to the root node of "pku";
(8) The root node 'pku' returns the parsed data to the domain node whose prefix is identified as 'pku.s1';
(9) The domain node with the prefix mark of "pku.s1" returns the analysis data to the end node with the prefix mark of "pku.s1.d1";
(10) The end node with the prefix mark of "pku.s1.d1" returns the analysis data to the client;
(11) When the client sends the same analysis request again to the end node with the prefix identification "pku.s1.d1", the domain node with the prefix identification "pku.s2" hits the cache and directly returns the analysis data to the client along the original path; meanwhile, the next-hop node on the return path (namely the root node "pku") caches the corresponding data. As the number of analyses of this request increases, all nodes on the path of the analysis request come to cache the same data.
In this embodiment, starting from the end node that manages the digital object identification, each time the analysis data is returned it is cached at the next-hop node after the node whose cache was hit, until the analysis data is cached in the end node that communicates directly with the client. In this way, the node that hits the cache gets closer and closer to the client, the routing length of the data is reduced, and the delay of user access is further reduced.
In this embodiment, as shown in fig. 3, when the user initiates the analysis request for the identification of the digital object again, in addition to looking up the corresponding status information from the node managing the identification and returning it, the analysis data of the digital object identification is also cached in the next-hop node on the return path.
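A compact sketch of the LCD-style placement used in this walkthrough: a copy of the analysis data is left at the next hop after the node whose cache was hit, so each repeated request pulls the data one hop closer to the user. The data structures and names are assumptions for illustration.

```python
def lcd_return(return_path: list, identification: str, data: dict, caches: dict) -> dict:
    """Return analysis data along the reverse request path, leaving a copy one hop down.

    return_path[0] is the node that hit the cache; the last element is the end
    node closest to the user. caches maps a node name to that node's cache dict.
    """
    if len(return_path) > 1:
        next_hop = return_path[1]
        caches[next_hop][identification] = data   # cache at the next hop on the return path
    return data                                   # forwarded hop by hop back to the user
```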
Optionally, the method for caching identification analysis data based on the digital object further comprises the following steps:
s17: and deleting the analytic data in the cache of the node of the current hit cache when the analytic data is cached in the next hop node of the current hit cache.
In this embodiment, when the step-by-step buffering is performed in the return path of the resolved data, the resolved data buffered in the previous node may be deleted each time the resolved data is buffered in the node closer to the user.
Fig. 4 is a schematic diagram of another caching method for identifying resolved data according to an embodiment of the present application. As shown in fig. 4, in the case where the end node with the prefix "pku.s2.d2" has cached the parsed data, the user sends the identifier parsing request of the digital object with the prefix "pku.s2.d2", and the caching process of the parsed data in the system is as follows:
(1) The client sends an identification analysis request with a prefix of "pku.s2.d2" to an end node with the prefix of "pku.s1.d1";
(2) The end node with the prefix of "pku.s1.d1" does not store the identification information with the prefix of "pku.s2.d2" and the cache cannot hit, and forwards the request to the upper domain node with the prefix of "pku.s1";
(3) The domain node with the prefix of "pku.s1" does not store the identification information with the prefix of "pku.s2.d2" and the cache cannot hit, and forwards the request to the upper-level root node with the prefix of "pku";
(4) The root node with prefix identification of "pku" does not store the identification information with prefix of "pku.s2.d2" and the cache cannot hit, forwarding the request to the lower domain node with prefix identification of "pku.s2";
(5) The domain node with the prefix of "pku.s2" does not store the identification information with the prefix of "pku.s2.d2" and the cache cannot hit, and forwards the request to the lower-level end node with the prefix of "pku.s2.d2";
(6) The end node with the prefix identification "pku.s2.d2" hits the cache and returns the analysis data to the domain node with the prefix identification "pku.s2"; according to the MCD cache placement strategy, the cached content at this end node is deleted;
(7) According to the MCD cache placement strategy, the domain node with the prefix mark of "pku.s2" caches the resolved data, and then returns the resolved data to the root node of "pku";
(8) The root node 'pku' returns the parsed data to the domain node whose prefix is identified as 'pku.s1';
(9) The domain node with the prefix mark of "pku.s1" returns the analysis data to the end node with the prefix mark of "pku.s1.d1";
(10) The end node with the prefix mark of "pku.s1.d1" returns the analysis data to the client;
when the client sends out the same resolution request again to the end node with the prefix of "pku.s1.d1", the domain node with the prefix of "pku.s2" hits the cache, and returns the resolution data to the client directly, and the next hop node (i.e. the root node "pku") on the return path caches the resolution data, and the domain node deletes the locally cached resolution data. As the number of requests to be parsed increases, only the first end node (end node with prefix "pku.s1.d1") on the path of the request to be parsed will cache the parsed data.
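For contrast with the LCD sketch above, a minimal sketch of the MCD-style behaviour in this walkthrough: the copy is again placed at the next hop, and the node that hit the cache then drops its own copy, so only one node on the path holds the cached analysis data at a time (the managing end node still keeps the persistent record in the database). The names are assumptions.

```python
def mcd_return(return_path: list, identification: str, data: dict, caches: dict) -> dict:
    """Move the cached copy one hop down the return path instead of duplicating it."""
    hit_node = return_path[0]
    if len(return_path) > 1:
        next_hop = return_path[1]
        caches[next_hop][identification] = data        # cache at the next hop
        caches[hit_node].pop(identification, None)     # delete the copy at the hit node
    return data
```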
Optionally, the method for caching identification analysis data based on the digital object further comprises the following steps:
S18: and when the existence time of the analysis data exceeds the set survival time in the cache of any node, acquiring the latest analysis data corresponding to the identification from the end node again, and updating the cache of the node.
In this embodiment, the analysis data cached in each node has a survival time; when the time for which the analysis data has been cached exceeds the survival time, the node needs to route, along the path of the analysis request, to the end node that manages the identification, and obtain the latest analysis data from that end node to update its cache.
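A sketch of the survival-time check described above; `fetch_from_end_node` stands in for routing to the end node that manages the identification, and the field names are assumptions.

```python
import time

def get_with_survival_time(node_cache: dict, identification: str, survival_time: float,
                           fetch_from_end_node) -> dict:
    """Serve analysis data from a node's cache, refreshing entries that have expired."""
    entry = node_cache.get(identification)
    now = time.time()
    if entry is None or now - entry["cached_at"] > survival_time:
        data = fetch_from_end_node(identification)    # latest analysis data from the end node
        node_cache[identification] = {"data": data, "cached_at": now}
        return data
    return entry["data"]
```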
Optionally, the method for caching identification analysis data based on the digital object further comprises the following steps:
s19: in the cache of any node, sequencing all the analysis data according to the time stamp in the analysis data; the timestamp represents a time when the parsed data was recently accessed;
when the buffer capacity of the node is full, replacing the analysis data with the earliest time stamp in the current buffer with the analysis data newly added into the buffer.
Fig. 5 is a schematic flow chart of cache replacement in a node according to an embodiment of the present application. As shown in fig. 5, optionally, a least recently used (LRU) cache replacement policy is used in this embodiment to decide the replacement of data cached in a single node; the specific steps are as follows:
(1) In the initial state the cache is empty and only a queue is needed; elements 1, 2, 3, 4 and 5 are added to the queue in turn according to the first-in-first-out principle;
(2) When element 5 is enqueued, the capacity of the cache reaches its maximum;
(3) When element 1 is accessed from the cache, element 1 becomes the most recently accessed element, so the timestamp of element 1 is updated and it is moved to the head of the queue;
(4) When element 6 is then to be added to the cache, the cache is found to have reached its maximum capacity; according to the LRU algorithm, element 2 at the tail of the queue is the least recently used, so element 2 is evicted and element 6 is added at the head of the queue;
(5) When element 4 is accessed from the cache, element 4 becomes the most recently accessed element, so the timestamp of element 4 is updated and element 4 is moved to the head of the queue.
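The per-node replacement behaviour in the figure can be sketched with an ordered dictionary, where the most recently used end of the structure plays the role of the queue head; this is an illustrative implementation under assumed names, not the patent's own code.

```python
from collections import OrderedDict

class LRUCache:
    """Per-node cache with least-recently-used replacement (illustrative)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, identification: str):
        if identification not in self.entries:
            return None
        self.entries.move_to_end(identification)   # accessed: becomes most recently used
        return self.entries[identification]

    def put(self, identification: str, data: dict) -> None:
        if identification in self.entries:
            self.entries.move_to_end(identification)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)        # evict the least recently used entry
        self.entries[identification] = data
```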
Optionally, the method for caching identification analysis data based on the digital object further comprises the following steps:
s110: generating a hierarchical structure identifier for the data resource, wherein the hierarchical structure identifier is used for distinguishing different hosting institutions and the data resource, so that the management is convenient; the identification includes: identifying a prefix, identifying a suffix, and a separator;
S111: setting user-defined or randomly generated UTF-8 codes as coding formats of the identification prefix and the identification suffix, and realizing identification multiplexing by being compatible with different identification systems;
s112: user-defined or system default separators are set for compatibility with user-personalized requirements.
In this embodiment, the digital object identifier adopts a hierarchical coding manner, and its structure is composed of an identifier prefix, an identifier suffix, and an identifier prefix-suffix separator. The identifier prefix is used for uniquely identifying the organization main body hosting the identifier service, the identifier suffix is used for uniquely identifying any data under the prefix, and the separator is used for dividing the identifier prefix and the identifier suffix.
The identifier prefix defines the name space of the identifier code, so that the global uniqueness of the data identifier is ensured, the identifier prefix is freely defined by using UTF-8 code on the premise of ensuring the uniqueness of the whole network, and the structure and the semantics of the identifier prefix are not limited;
similarly, the identification suffix is freely defined by using UTF-8 codes on the premise of ensuring the uniqueness of the whole network;
The identification separator defaults to "/"; other separators can be customized as required, but it must be ensured that the same symbol does not appear in the identification prefix or the identification suffix.
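A minimal sketch of composing and splitting such identifications, assuming only the rule stated above that the separator must not occur inside the prefix or the suffix; the function names are illustrative.

```python
def build_identification(prefix: str, suffix: str, separator: str = "/") -> str:
    """Compose a hierarchical identification from prefix, separator and suffix."""
    if separator in prefix or separator in suffix:
        raise ValueError("separator must not appear in the prefix or the suffix")
    return f"{prefix}{separator}{suffix}"

def split_identification(identification: str, separator: str = "/") -> tuple:
    """Split an identification back into its prefix and suffix."""
    prefix, _, suffix = identification.partition(separator)
    return prefix, suffix

# Example matching fig. 6:
# build_identification("slw.os.doa.dev", "do.a67920dd-0c25-4dbc-b164-a45711e7ef78")
# -> "slw.os.doa.dev/do.a67920dd-0c25-4dbc-b164-a45711e7ef78"
```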
Fig. 6 is a schematic diagram of an identification code structure according to an embodiment of the present application. As shown in fig. 6, according to the coding scheme of the above identifier, the identifier prefix is defined as "slw.os.doa.dev", the identifier suffix is defined as "do.a67920dd-0c25-4dbc-b164-a45711e7ef78", and the default separator "/" is used as the identifier separator.
In this embodiment, the encoding mode of the digital object identifier is compatible with multiple encoding rules, the identifier allocated by the existing identifier system can be directly used as the suffix of the identifier of the system for multiplexing, and the identifier allocated by the system can be mapped to identifiers allocated by a plurality of other identifier systems through the identifier encoding mapping table, so that one-to-many multi-identifier compatibility is realized.
Fig. 2 is a schematic diagram of a digital object-based id resolution data caching apparatus 200 according to an embodiment of the present application. As shown in fig. 2, the apparatus includes:
a parsing route construction module 201 configured to set an identification parsing system of a tree hierarchy, including end nodes, domain nodes, and root nodes; the root node directly communicates with a subordinate domain node; the domain node is directly communicated with the upper-level root node and is also directly communicated with the subordinate end node; the end node manages the identification of the digital object and communicates with the user directly;
An identifier allocation module 202, configured to allocate a unique identifier of the whole network to the data resource in the end node according to the status information of the data resource uploaded by the provider, as corresponding information of the analysis data;
the analysis data is the state information obtained by analyzing the identification by the user; the status information includes: the storage location, access mode, owner, timestamp and access related information of the body data of the digital object; the identification is a self-determined identification or a randomly generated identification of the provider;
a persistence module 203 configured to bind the identifier with the state information as a field in the state information to generate a key value pair, and store the key value pair in a database in a persistence manner;
a caching module 204 configured to cache the state information in the end node as parsed data corresponding to the identity;
when a user sends the identified analysis request for the first time, routing the analysis request to a hit cache at the end node, acquiring the analysis data from the cache of the hit cache node, and returning the analysis data to the user according to the path of the analysis request; caching the analysis data in a next hop node of the node currently hit in the cache on a return path;
When the user sends the identified analysis request again, the analysis request is routed to the next-hop node to hit the cache, the analysis data is returned from the node hit the cache to the user, and the analysis data is cached to the next-hop node of the node hit the cache currently on the return path;
and sequentially caching the analysis data in the nodes on the return path according to the analysis request sent by the user each time until the analysis data is cached in the end node closest to the user.
Optionally, the caching module 204 is further configured to delete the resolved data in the cache of the node of the current hit cache when the resolved data is cached in a next-hop node of the current hit cache.
Optionally, the digital object based identification parsing data caching apparatus 200 further includes:
and the cache updating module is configured to re-acquire the latest analysis data corresponding to the identification from the end node and update the cache of the node when the existence time of the analysis data exceeds the set survival time in the cache of any node.
Optionally, the digital object based identification parsing data caching apparatus 200 further includes:
the buffer replacement module is configured to sort all the analysis data according to the time stamps in the analysis data in the buffer of any node; the timestamp represents a time when the parsed data was recently accessed; when the buffer capacity of the node is full, replacing the analysis data with the earliest time stamp in the current buffer with the analysis data newly added into the buffer.
Optionally, the identifier allocation module 202 is further configured to determine whether the status information includes a user-defined identifier, and if not, randomly generate an identifier, and determine whether the identifier is already used by other digital objects;
if the identification has been used by other digital objects, then a new identification is randomly generated;
assigning the identification to the digital object if the identification is not used by other digital objects;
if the state information contains a user self-identification, judging whether the user self-identification is used by other digital objects, and if the user self-identification is used by other digital objects, distributing a randomly generated identification to a user;
If the user self-identification is not used by other digital objects, the user self-identification is used as the identification of the digital object.
Optionally, the identifier allocation module 202 is further configured to generate a hierarchical identifier for the data resource, so as to distinguish different hosting institutions and data resources, so as to facilitate management; the identification includes: identifying a prefix, identifying a suffix, and a separator;
setting user-defined or randomly generated UTF-8 codes as coding formats of the identification prefix and the identification suffix, and realizing identification multiplexing by being compatible with different identification systems;
user-defined or system default separators are set for compatibility with user-personalized requirements.
The foregoing description of the preferred embodiments of the present application is not intended to be limiting, but rather is intended to cover any and all modifications, equivalents, alternatives, and improvements within the spirit and principles of the present application.
For the purposes of simplicity of explanation, the methodologies are shown as a series of acts, but one of ordinary skill in the art will recognize that the subject application is not limited by the order of acts described, as some acts may, in accordance with the subject application, occur in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments and that the acts and components referred to are not necessarily required for the present application.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, those skilled in the art may make additional variations and modifications to these embodiments once they learn of the basic inventive concept. It is therefore intended that the appended claims be interpreted as covering the preferred embodiments and all such variations and modifications that fall within the scope of the embodiments of the present application.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The digital object-based identification analysis data caching method and device provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the application; the description of the above embodiments is intended only to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the scope of application in accordance with the idea of the present application. In view of the above, the contents of this specification should not be construed as limiting the present application.
Claims (10)
1. A digital object-based identification resolution data caching method, comprising:
setting an identification analysis system with a tree hierarchy structure, wherein the identification analysis system comprises end nodes, domain nodes and a root node; the root node is positioned at the top layer of the tree hierarchy structure, manages its subordinate domain nodes and is responsible for maintaining the node information of the subordinate domain nodes; the domain nodes are positioned in the middle of the tree hierarchy structure and are subordinate nodes of the root node; multiple levels of domain nodes exist in the identification analysis system, a subordinate domain node is managed by its upper-level domain node, and the lowest-level domain node is responsible for managing its subordinate end nodes and maintaining the information of the subordinate end nodes; the end nodes are positioned at the bottom layer of the tree hierarchy structure; the root node communicates directly with its subordinate domain nodes; a domain node communicates directly with the upper-level root node and also communicates directly with its subordinate end nodes; an end node manages the identification of digital objects and communicates directly with users;
allocating, in the end node, a network-wide unique identifier to the data resource according to the state information of the data resource uploaded by a provider, the identifier serving as the corresponding information of the analysis data;
wherein the analysis data is the state information obtained when a user resolves the identifier; the state information comprises the storage location, access mode, owner, timestamp and access-related information of the body data of the digital object; and the identifier is an identifier self-determined by the provider or a randomly generated identifier;
binding the identifier, as a field of the state information, to the state information to generate a key-value pair, and persistently storing the key-value pair in a database;
caching the state information in the end node as the analysis data corresponding to the identifier; when a user sends an analysis request for the identifier for the first time, routing the analysis request until it hits the cache at the end node, acquiring the analysis data from the cache of the node that hit the cache, and returning the analysis data to the user along the path of the analysis request; on the return path, caching the analysis data in the next-hop node of the node that currently hit the cache;
when the user sends the analysis request for the identifier again, routing the analysis request until it hits the cache at that next-hop node, returning the analysis data from the node that hit the cache to the user, and caching the analysis data in the next-hop node, on the return path, of the node that currently hit the cache;
and, with each analysis request sent by the user, sequentially caching the analysis data in the nodes on the return path until the analysis data is cached in the end node closest to the user.
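For illustration only, a minimal Python sketch of the per-request cache propagation in claim 1 follows (the optional migrate flag anticipates the deletion step of claim 2 below). The linear node chain, the in-memory dict caches and all names are illustrative assumptions, and the authoritative copy persisted in the database is not modelled.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    name: str
    cache: dict = field(default_factory=dict)      # identifier -> analysis data
    upstream: Optional["Node"] = None               # next hop toward the authoritative end node

def resolve(entry: Node, identifier: str, migrate: bool = False) -> dict:
    """Route an analysis request from the node nearest the user toward the
    authoritative end node, stop at the first cache hit, and cache the data
    one hop closer to the user on the return path."""
    path = []                                       # nodes traversed before the hit
    node = entry
    while identifier not in node.cache:
        path.append(node)
        if node.upstream is None:
            raise KeyError(f"identifier {identifier!r} is unknown")
        node = node.upstream
    data = node.cache[identifier]                   # cache hit
    if path:
        path[-1].cache[identifier] = data           # next hop toward the user
        if migrate:
            del node.cache[identifier]              # claim 2 below: drop the copy at the old hit node
    return data

# authoritative end node -> domain node -> end node closest to the user
end = Node("authoritative end", cache={"88.1024/sensor-0042": {"location": "s3://bucket/obj"}})
domain = Node("domain", upstream=end)
near = Node("user-side end", upstream=domain)
resolve(near, "88.1024/sensor-0042")   # 1st request: hit at the end node, cached at the domain node
resolve(near, "88.1024/sensor-0042")   # 2nd request: hit at the domain node, cached at the user-side node
```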
2. The digital object based identification resolution data caching method of claim 1, further comprising:
and deleting the analysis data from the cache of the node that currently hit the cache when the analysis data has been cached in the next-hop node of that node.
3. The digital object based identification resolution data caching method of claim 2, further comprising:
and when, in the cache of any node, the time for which the analysis data has existed exceeds the set survival time, re-acquiring the latest analysis data corresponding to the identifier from the end node and updating the cache of that node.
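For illustration only, a minimal sketch of the survival-time check in claim 3; the entry layout (data plus the time it was cached), the 300-second value and the fetch callback are illustrative assumptions.

```python
import time
from typing import Callable, Dict, Tuple

SURVIVAL_TIME = 300.0   # assumed "set survival time" in seconds

def get_with_ttl(cache: Dict[str, Tuple[dict, float]], identifier: str,
                 fetch_from_end_node: Callable[[str], dict]) -> dict:
    """Return the cached analysis data, re-acquiring it from the end node when
    the cached copy has existed longer than the set survival time."""
    data, cached_at = cache[identifier]
    if time.monotonic() - cached_at > SURVIVAL_TIME:
        data = fetch_from_end_node(identifier)            # latest analysis data
        cache[identifier] = (data, time.monotonic())      # update this node's cache
    return data
```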
4. The digital object based identification resolution data caching method of claim 2, further comprising:
in the cache of any node, sorting all the analysis data according to the timestamps in the analysis data, wherein the timestamp represents the time at which the analysis data was most recently accessed;
and when the cache capacity of the node is full, replacing the analysis data with the earliest timestamp in the current cache by the analysis data newly added to the cache.
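For illustration only, a minimal sketch of the timestamp-ordered replacement in claim 4; the capacity value and the entry layout are illustrative assumptions.

```python
import time
from typing import Dict, Tuple

def get_and_touch(cache: Dict[str, Tuple[dict, float]], identifier: str) -> dict:
    data, _ = cache[identifier]
    cache[identifier] = (data, time.monotonic())          # timestamp = most recent access
    return data

def put_with_replacement(cache: Dict[str, Tuple[dict, float]], identifier: str,
                         data: dict, capacity: int = 1024) -> None:
    if identifier not in cache and len(cache) >= capacity:
        stalest = min(cache, key=lambda k: cache[k][1])   # earliest timestamp
        del cache[stalest]                                # replaced by the new entry
    cache[identifier] = (data, time.monotonic())
```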
5. The digital object based identification resolution data caching method according to claim 1, wherein allocating, in an end node, a network-wide unique identifier to the data resource as the corresponding information of the analysis data comprises:
judging whether the state information contains a user-defined identifier; if not, randomly generating an identifier and judging whether the identifier has been used by another digital object;
if the identifier has been used by another digital object, randomly generating a new identifier;
if the identifier has not been used by another digital object, assigning the identifier to the digital object;
if the state information contains a user-defined identifier, judging whether the user-defined identifier has been used by another digital object, and if so, assigning a randomly generated identifier to the user;
and if the user-defined identifier has not been used by another digital object, using the user-defined identifier as the identifier of the digital object.
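For illustration only, a minimal sketch of the allocation rules in claim 5; the in-memory registry stands in for the real record of identifiers already used by other digital objects.

```python
import secrets

def allocate_identifier(state_info: dict, registry: set) -> str:
    requested = state_info.get("identifier")     # user-defined identifier, if any
    if requested and requested not in registry:
        registry.add(requested)                  # unused: adopt the user's identifier
        return requested
    # No identifier supplied, or it is already used by another digital object:
    # generate random identifiers until an unused one is found.
    while True:
        candidate = secrets.token_hex(8)
        if candidate not in registry:
            registry.add(candidate)
            return candidate
```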
6. The digital object based identification resolution data caching method of claim 5, further comprising:
generating a hierarchical structure identifier for the data resource, the hierarchical structure identifier being used to distinguish different hosting institutions and data resources so as to facilitate management; the identifier comprises an identification prefix, an identification suffix and a separator;
setting user-defined or randomly generated UTF-8 strings as the encoding format of the identification prefix and the identification suffix, so that different identification systems remain compatible and identifiers can be reused;
and setting user-defined or system-default separators to accommodate users' personalized requirements.
7. An identification resolution data caching apparatus based on a digital object, comprising:
an analysis route construction module configured to set an identification analysis system with a tree hierarchy structure, the identification analysis system comprising end nodes, domain nodes and a root node; the root node is positioned at the top layer of the tree hierarchy structure, manages its subordinate domain nodes and is responsible for maintaining the node information of the subordinate domain nodes; the domain nodes are positioned in the middle of the tree hierarchy structure and are subordinate nodes of the root node; multiple levels of domain nodes exist in the identification analysis system, a subordinate domain node is managed by its upper-level domain node, and the lowest-level domain node is responsible for managing its subordinate end nodes and maintaining the information of the subordinate end nodes; the end nodes are positioned at the bottom layer of the tree hierarchy structure; the root node communicates directly with its subordinate domain nodes; a domain node communicates directly with the upper-level root node and also communicates directly with its subordinate end nodes; an end node manages the identification of digital objects and communicates directly with users;
an identifier allocation module configured to allocate, in the end node, a network-wide unique identifier to the data resource according to the state information of the data resource uploaded by a provider, the identifier serving as the corresponding information of the analysis data;
wherein the analysis data is the state information obtained when a user resolves the identifier; the state information comprises the storage location, access mode, owner, timestamp and access-related information of the body data of the digital object; and the identifier is an identifier self-determined by the provider or a randomly generated identifier;
a persistence module configured to bind the identifier, as a field of the state information, to the state information to generate a key-value pair and to persistently store the key-value pair in a database;
a caching module configured to cache the state information in the end node as the analysis data corresponding to the identifier;
wherein, when a user sends an analysis request for the identifier for the first time, the analysis request is routed until it hits the cache at the end node, the analysis data is acquired from the cache of the node that hit the cache, and the analysis data is returned to the user along the path of the analysis request; on the return path, the analysis data is cached in the next-hop node of the node that currently hit the cache;
when the user sends the analysis request for the identifier again, the analysis request is routed until it hits the cache at that next-hop node, the analysis data is returned from the node that hit the cache to the user, and the analysis data is cached in the next-hop node, on the return path, of the node that currently hit the cache;
and, with each analysis request sent by the user, the analysis data is sequentially cached in the nodes on the return path until the analysis data is cached in the end node closest to the user.
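For illustration only, a hedged sketch of how the modules of the apparatus claim could be wired together in a single process; the sqlite3 schema, the JSON serialization and all class and method names are assumptions, the allocation rule follows the sketch given after claim 5, and the multi-node routing of the caching module is simplified to a single end-node cache.

```python
import json
import secrets
import sqlite3

class IdentificationCachingApparatus:
    """Toy wiring of identifier allocation, persistence and end-node caching."""

    def __init__(self) -> None:
        self.db = sqlite3.connect(":memory:")                 # persistence module: key-value store
        self.db.execute("CREATE TABLE kv (identifier TEXT PRIMARY KEY, state TEXT)")
        self.cache = {}                                       # caching module: end-node cache

    def allocate(self, state_info: dict) -> str:
        """Identifier allocation module: prefer the user-defined identifier, else random."""
        identifier = state_info.get("identifier") or secrets.token_hex(8)
        while self.db.execute("SELECT 1 FROM kv WHERE identifier = ?",
                              (identifier,)).fetchone():
            identifier = secrets.token_hex(8)                 # collision: generate a new one
        return identifier

    def register(self, state_info: dict) -> str:
        identifier = self.allocate(state_info)
        bound = {**state_info, "identifier": identifier}      # bind the identifier as a field
        self.db.execute("INSERT INTO kv VALUES (?, ?)", (identifier, json.dumps(bound)))
        self.db.commit()
        self.cache[identifier] = bound                        # cache the analysis data
        return identifier

    def resolve(self, identifier: str) -> dict:
        if identifier in self.cache:                          # cache hit at the end node
            return self.cache[identifier]
        row = self.db.execute("SELECT state FROM kv WHERE identifier = ?",
                              (identifier,)).fetchone()
        if row is None:
            raise KeyError(identifier)
        data = json.loads(row[0])
        self.cache[identifier] = data
        return data

apparatus = IdentificationCachingApparatus()
ident = apparatus.register({"location": "s3://bucket/obj", "owner": "provider-a"})
print(apparatus.resolve(ident)["location"])
```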
8. The digital object based identification resolution data caching apparatus of claim 7, wherein the caching module is further configured to delete the analysis data from the cache of the node that currently hit the cache when the analysis data has been cached in the next-hop node of that node.
9. The digital object based identification resolution data caching apparatus of claim 8, further comprising:
a cache updating module configured to, when, in the cache of any node, the time for which the analysis data has existed exceeds the set survival time, re-acquire the latest analysis data corresponding to the identifier from the end node and update the cache of that node.
10. The digital object based identification resolution data caching apparatus of claim 8, further comprising:
a cache replacement module configured to, in the cache of any node, sort all the analysis data according to the timestamps in the analysis data, wherein the timestamp represents the time at which the analysis data was most recently accessed;
and, when the cache capacity of the node is full, replace the analysis data with the earliest timestamp in the current cache by the analysis data newly added to the cache.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310542724.8A CN116634017B (en) | 2023-05-15 | 2023-05-15 | Identification analysis data caching method and device based on digital object |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310542724.8A CN116634017B (en) | 2023-05-15 | 2023-05-15 | Identification analysis data caching method and device based on digital object |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116634017A CN116634017A (en) | 2023-08-22 |
CN116634017B true CN116634017B (en) | 2024-02-06 |
Family
ID=87612670
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310542724.8A Active CN116634017B (en) | 2023-05-15 | 2023-05-15 | Identification analysis data caching method and device based on digital object |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116634017B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117082106B (en) * | 2023-10-16 | 2024-01-16 | 北京大学 | Multi-level data networking method, system, device and equipment oriented to government cloud environment |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102650989A (en) * | 2011-02-23 | 2012-08-29 | 上海博路信息技术有限公司 | Content parsing system based on digital object identification |
WO2017020597A1 (en) * | 2015-07-31 | 2017-02-09 | 华为技术有限公司 | Resource cache method and apparatus |
CN107656981A (en) * | 2017-09-08 | 2018-02-02 | 中国科学院计算机网络信息中心 | A kind of data sharing and management method and system based on identification technology |
CN113868289A (en) * | 2021-10-18 | 2021-12-31 | 国网山东省电力公司电力科学研究院 | Identification analysis system and method suitable for intelligent Internet of things system |
Also Published As
Publication number | Publication date |
---|---|
CN116634017A (en) | 2023-08-22 |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||