CN114356873A - Data sharing system and method - Google Patents


Info

Publication number
CN114356873A
Authority
CN
China
Prior art keywords: data, cache, layer, written, cache node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210004214.0A
Other languages
Chinese (zh)
Inventor
范东来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chinabank Payments Beijing Technology Co Ltd
Original Assignee
Chinabank Payments Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinabank Payments Beijing Technology Co Ltd filed Critical Chinabank Payments Beijing Technology Co Ltd
Priority to CN202210004214.0A priority Critical patent/CN114356873A/en
Publication of CN114356873A publication Critical patent/CN114356873A/en
Pending legal-status Critical Current

Abstract

The invention discloses a data sharing system and method, relating to the technical field of big data. One embodiment of the system comprises: a user interaction layer for receiving a data processing request, wherein the data processing request comprises a data writing request and the data writing request comprises data to be written; a data storage layer for persistently storing the data to be written according to the data writing request; and a multi-level distributed cache layer having a plurality of cache nodes, each cache node having at least one tag. The multi-level distributed cache layer performs tag adaptation on the data to be written, thereby determining a first target cache node from the plurality of cache nodes and caching the data to be written to that node. This embodiment can share data efficiently by means of customized, personalized tags without changing the existing storage layout; management is performed through a unified interface, so the underlying details are hidden from users, which facilitates development and maintenance.

Description

Data sharing system and method
Technical Field
The invention relates to the technical field of big data, in particular to a data sharing system and method.
Background
With the advent of the big data age, data is generated ever faster and in ever greater variety, and it may be stored in different geographical locations. This creates many difficulties for data users, who are themselves usually distributed across different geographical locations. As their data exchange and sharing needs become more and more frequent, the data becomes difficult to distribute and use efficiently.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data sharing system and method that can share data efficiently by using customized, personalized tags without changing the existing storage structure. A flexible caching policy is implemented by dynamically combining tags, so that data is cached in the most suitable cache node; this facilitates fast reading and analysis of the data and improves data sharing efficiency and the use value of the data. Everything is managed through a unified interface. In addition, a user does not need to write code for data sharing: it can be accomplished with the most common read and write commands alone, so the details are hidden from the user, which facilitates development and maintenance.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a data sharing system including:
the system comprises a user interaction layer, a data processing layer and a data processing layer, wherein the user interaction layer is used for receiving a data processing request, the data processing request comprises a data writing request, and the data writing request comprises data to be written;
the data storage layer is used for persistently storing the data to be written according to the data writing request;
a multi-level distributed cache layer having a plurality of cache nodes, each of the cache nodes having at least one tag; the multi-level distributed cache layer is configured to perform tag adaptation on the data to be written, so as to determine a first target cache node from the plurality of cache nodes, and to cache the data to be written to the first target cache node.
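The tag-adaptation step can be illustrated with a minimal Python sketch. The names `CacheNode` and `adapt_tags` and the best-match scoring rule are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass, field

@dataclass
class CacheNode:
    name: str
    tags: set = field(default_factory=set)  # each cache node has at least one tag

def adapt_tags(desired_tags, nodes):
    """Pick the cache node whose tag set best matches the desired tags.

    Scoring rule (an assumption, not specified by the patent): the node
    sharing the most tags with the data's desired tags wins; ties go to
    the earlier-declared node.
    """
    best, best_score = None, -1
    for node in nodes:
        score = len(node.tags & set(desired_tags))
        if score > best_score:
            best, best_score = node, score
    return best

nodes = [
    CacheNode("node-a", {"beijing", "high-speed"}),
    CacheNode("node-b", {"shanghai", "high-speed"}),
]
# Data tagged for Shanghai and high speed is adapted to node-b.
first_target = adapt_tags({"shanghai", "high-speed"}, nodes)
```

A production system would likely weight tags (e.g. location over speed) rather than count matches; the sketch only shows the shape of the adaptation step.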
Optionally, the multi-level distributed cache layer further includes a tag configuration module, configured to receive tag configuration information, and determine at least one tag of the cache node according to the tag configuration information.
Optionally, the tag configuration information includes one or more of: the physical location, machine room location, read/write speed, hardware configuration information, data source type, data purpose, and remaining cache space of the cache node.
Optionally, the multi-level distributed cache layer further includes a hierarchy management unit, configured to perform hierarchy division on cache resources of the cache nodes.
Optionally, the number of levels of each of the cache nodes is the same.
Optionally, the data storage layer further includes a first file system and at least one second file system, and the at least one second file system is mounted under a global directory of the first file system.
Optionally, the data processing request further includes a data reading request;
and the multi-level distributed cache layer is further used for determining a second target cache node according to the data reading request and reading target data from the second target cache node.
Optionally, the system further includes a metadata service layer, configured to record a storage path and a cache path of the data to be written.
In order to achieve the above object, according to another aspect of the embodiments of the present invention, there is provided a data sharing method applied to the data sharing system according to the embodiments of the present invention, the data sharing method including:
receiving a data processing request, wherein the data processing request comprises a data writing request, and the data writing request comprises data to be written;
according to the data writing request, persistently storing the data to be written;
performing tag adaptation on the data to be written, so as to determine a first target cache node from a plurality of cache nodes, and caching the data to be written to the first target cache node; wherein each cache node of the plurality of cache nodes has at least one tag.
Optionally, before receiving the data processing request, the method further comprises:
receiving tag configuration information, and determining at least one tag of the cache node according to the tag configuration information.
Optionally, the tag configuration information includes one or more of: the physical location, machine room location, read/write speed, hardware configuration information, data source type, data purpose, and remaining cache space of the cache node.
Optionally, the data processing request further includes a data reading request;
the method further comprises the following steps: and determining a second target cache node according to the data reading request, and reading target data from the second target cache node.
To achieve the above object, according to still another aspect of an embodiment of the present invention, there is provided an electronic apparatus including: one or more processors; a storage device, configured to store one or more programs, which when executed by the one or more processors, cause the one or more processors to implement the data sharing method according to the embodiment of the present invention.
To achieve the above object, according to still another aspect of an embodiment of the present invention, there is provided a computer-readable medium on which a computer program is stored, the program implementing a data sharing method of an embodiment of the present invention when executed by a processor.
One embodiment of the above invention has the following advantages or benefits. Because user data processing requests are received through a unified user interaction layer, users do not interact directly with the underlying data storage and need not understand its storage logic; they only need to write a data write or read command. The cache nodes in the multi-level distributed cache layer are tagged in multiple dimensions, each cache node being given at least one tag, and a flexible caching policy is implemented by dynamically combining tags, so that data is cached in the most suitable cache node; this enables fast reading and analysis of the data and improves data sharing efficiency and the use value of the data. An existing file system serves as the persistent storage layer, and other file systems are mounted under it, so efficient data sharing is achieved with customized, personalized tags without changing the existing storage layout, and everything is managed through a unified interface. In addition, a user does not need to write code for data sharing: it can be accomplished with the most common read and write commands alone, so the details are hidden from the user, which facilitates development and maintenance.
Further effects of the optional manners mentioned above will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of a centralized data storage system according to the prior art;
FIG. 2 is a schematic diagram of a prior art distributed data storage system;
FIG. 3 is a schematic structural diagram of a data sharing system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the structure of the data storage layer of the data sharing system of an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a data sharing system according to another embodiment of the present invention;
FIG. 6 is a schematic diagram of the main flow of a data sharing method according to an embodiment of the present invention;
FIG. 7 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 8 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In a big data environment, data grows rapidly in both size and complexity, placing ever higher demands on data storage systems. Currently, when data needs to be shared over a large area and across regions, there are two general approaches: centralized storage and distributed storage. As shown in fig. 1, centralized storage aggregates the shared data level by level (the data stores at each level may be distributed across different geographical locations) and finally serves it uniformly from a data sharing area. However, the data link in this approach is too long, data timeliness is poor, and the data occupies extra storage and computation. In distributed storage, as shown in fig. 2, each data user may request the data it needs according to its own requirements, and the owning party then transmits the data to the requester for local use. Compared with centralized storage, distributed storage is more flexible and can respond to demands quickly, but the data becomes more interdependent. As time goes on, the connections between data grow tighter, the dependencies between data sources become unmanageable, and management costs rise dramatically.
To solve at least one of these technical problems, embodiments of the present invention construct a data sharing system. The data sharing system has a unified access mode; that is, it provides a unified service interface through which users read and write data. The system contains a plurality of data storage media, most of them distributed file systems, which may be physically located in different geographical locations, and the data to be shared is likewise scattered across these media. Each data storage medium carries at least one tag, and the tags can be set according to the needs of different scenarios; for example, media can be tagged by physical location, read/write speed, and so on. During a data write operation, the data is matched against the tags of the storage media according to the source, purpose, read/write requirements, and other properties of the data to be written, so that a target storage medium is determined from the plurality of media. By flexibly combining tags, the data sharing system of the embodiment of the invention can derive a precise caching strategy that meets the scenario's requirements and can satisfy complex data sharing needs.
Fig. 3 is a schematic structural diagram of a data sharing system according to an embodiment of the present invention, and as shown in fig. 3, the data sharing system 300 includes a user interaction layer 301, a data storage layer 302, and a multi-level distributed cache layer 303.
The user interaction layer 301 is configured to receive a data processing request, where the data processing request includes a data write request, and the data write request includes data to be written. The data processing request may also include a data read request. In an alternative embodiment, the user interaction layer may include a user interaction interface through which the user's write or read requests are received. The user does not need to adapt to, or even know, a specific data access protocol and path; knowing the data storage path is enough to perform read and write operations, which greatly simplifies data maintenance.
The data storage layer 302 is used to persistently store the data to be written according to the data writing request. In this embodiment, the data storage layer 302 persists the data to be written, while the multi-level distributed cache layer 303 caches the data in the data storage layer 302, so as to increase data read/write speed, improve data analysis efficiency, and enhance the value of the data.
In an alternative embodiment, the data storage layer may be any standard file system, which may be distributed, such as HDFS (Hadoop Distributed File System), NFS (Network File System), Ceph, AWS S3 (Amazon Web Services Simple Storage Service), OSS (Object Storage Service), and the like.
In an alternative embodiment, the data storage layer 302 further includes a first file system and at least one second file system mounted under a global directory of the first file system. In this embodiment, as shown in fig. 4, after any standard file system (the first file system) is adopted as the data storage layer, other file systems (the second file systems) may be mounted under it, so the data storage layer can be extended laterally without major modification to the existing architecture, which simplifies maintenance and management. The user can access either the first or a second file system through the user interaction layer 301 without adapting to, or even knowing, a specific data access protocol and path; knowing the data storage path is enough to perform read and write operations, which greatly simplifies data maintenance.
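The mounting of second file systems under the first file system's global directory can be illustrated with a minimal mount-table lookup. The mount points, URLs, and longest-prefix rule here are assumptions for illustration; a real implementation such as Alluxio manages this internally:

```python
# Hypothetical mount table: a mount point in the global namespace maps to a
# backing file system.  "/" is the first (root) file system; the other
# entries are mounted second file systems.  All paths and URLs are invented.
MOUNTS = {
    "/": "hdfs://namenode:8020/share",
    "/test": "s3a://example-bucket/data",
}

def resolve(path):
    """Map a global-namespace path to the URL of the backing file system,
    using longest-prefix matching so mount points shadow the root."""
    point = max(
        (m for m in MOUNTS if path == m or path.startswith(m.rstrip("/") + "/")),
        key=len,
    )
    rest = path[len(point):].lstrip("/")
    base = MOUNTS[point].rstrip("/")
    return base + "/" + rest if rest else base
```

A path under `/test` resolves to the mounted S3-style store, while any other path falls through to the root file system; the user sees one namespace either way.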
The multi-level distributed cache layer 303 is used to cache data and has a plurality of cache nodes. It performs tag adaptation on the data to be written, thereby determining a first target cache node from the plurality of cache nodes and caching the data to be written to that node. On the storage nodes beneath the data storage layer 302 (that is, on the machines or storage media where the file system resides), spare storage resources such as memory and high-speed solid-state-drive regions are ubiquitous; these resources are independent of the file system and are managed by the host operating system on which the file system runs. In this embodiment, these cache resources, independent of the file system, can serve as the multi-level distributed cache layer of the data sharing system. In an alternative embodiment, each cache node of the multi-level distributed cache layer 303 may be further divided into levels; that is, different storage resources within the cache node are treated as different levels, divided for example by read/write speed. The first level may be the fastest storage medium, such as memory; the second level a slower medium, such as a directory on an SSD; and so on. It should be noted that the number of levels is consistent across cache nodes, but the directories corresponding to each level need not be.
For example, all cache nodes are configured with the same number of levels; a situation where one cache node has 3 levels and another has 4 does not occur. For different nodes, however, the configured directories may differ: for instance, the primary directory of node A is memory and its secondary directory is an SSD, while the primary directory of node B is an SSD and its secondary directory is memory.
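The level-consistency constraint described above can be sketched as follows. The configuration dictionary and function names are hypothetical:

```python
# Hypothetical tier configuration: level 0 is the fastest medium on each
# node.  The level count must be identical across nodes, but the medium
# (and directory) backing each level may differ per node, as in the
# node A / node B example above.
NODE_LEVELS = {
    "node-a": ["memory", "ssd"],   # level 0: memory, level 1: SSD directory
    "node-b": ["ssd", "memory"],   # level 0: SSD directory, level 1: memory
}

def levels_consistent(node_levels):
    """Check that all cache nodes expose the same number of cache levels."""
    return len({len(tiers) for tiers in node_levels.values()}) <= 1
```

A configuration where one node declares 3 levels and another 4 would fail this check, matching the rule in the text.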
Further, each cache node in this embodiment has at least one tag. Each cache node is tagged in multiple dimensions so that different data can be cached in different cache nodes; data sharing is thereby performed in an easy-to-manage, simple manner, and data reading speed and use value are improved.
In an optional embodiment, the multi-level distributed cache layer 303 further includes a tag configuration module, configured to receive tag configuration information and to determine at least one tag of a cache node according to that information. The tag configuration information includes, but is not limited to, one or more of: the physical location, machine room location, read/write speed, hardware configuration information, data source type, data purpose, and remaining cache space of the cache node.
As a specific example: if the tag of a cache node is determined by physical location, the node's geographic location, such as Beijing or Shanghai, may be used as the tag. If the tag is determined by machine room location, the location or identifier of the machine room housing the node may be used, for example "Beijing machine room 1" or "Beijing machine room 0001". If the tag is determined by read/write speed, different speeds can be mapped to levels such as high, medium, and low, and the tag is then "high-speed read/write", "medium-speed read/write", or "low-speed read/write". If the tag is determined by hardware configuration information, the node's system type, total storage space, and the like may be used. If the tag is determined by data source type, the data source or the name of the business or project the data belongs to is used. If the tag is determined by data purpose, the purpose can be used directly, such as real-time analysis, offline computation, item recommendation, or user profiling. If the tag is determined by remaining cache space, the size of the node's current remaining space may be used directly, or size ranges may be defined in advance: for example, remaining space above 5 TB is tagged "sufficient", between 2 TB and 5 TB "moderate", and below 2 TB "insufficient".
It should be noted that the remaining cache space is updated automatically: each time data is written to a cache node, the node's remaining cache space shrinks and the corresponding tag is updated.
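The remaining-space tagging and its automatic update can be sketched as follows. The thresholds follow the example above; the class and function names are invented for illustration:

```python
def space_tag(remaining_tb):
    """Bucket remaining cache space into a tag, using the example
    thresholds above (5 TB and 2 TB)."""
    if remaining_tb > 5:
        return "space-sufficient"
    if remaining_tb >= 2:
        return "space-moderate"
    return "space-insufficient"

class TaggedNode:
    """A cache node whose remaining-space tag refreshes on every write."""

    def __init__(self, remaining_tb):
        self.remaining_tb = remaining_tb
        self.tag = space_tag(remaining_tb)

    def write(self, size_tb):
        # Each write shrinks the remaining space and updates the tag,
        # mirroring the automatic update described in the text.
        self.remaining_tb -= size_tb
        self.tag = space_tag(self.remaining_tb)
```

As writes accumulate, a node's tag drifts from "sufficient" toward "insufficient", so later tag adaptation naturally steers new data away from full nodes.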
In this embodiment, tags may be combined flexibly and dynamically, and data is cached to the most suitable cache node by specifying tags, yielding better performance. For example, if data from city A is stored in HDFS and a data analyst in city B needs that data for data mining, the data sharing system of the embodiment of the invention may cache it in the cache node located in city B, or in the cache node closest to city B, to increase the data reading speed.
In an alternative embodiment, after a data write request is received, the data to be written is saved to the data storage layer 302 for persistent storage; it may then be cached in the adapted cache node synchronously or asynchronously. After a data read request is received, the multi-level distributed cache layer 303 determines a second target cache node according to the request and reads the target data from it. If the target data is not found in the second target cache node, it is read from the data storage layer 302. In other words, after a data read request is received, data is read preferentially from the multi-level distributed cache layer 303, and only if the cache layer does not hold the corresponding data is it read from the data storage layer 302.
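The write-then-cache and cache-first read flows described above reduce to a few lines. Dicts stand in for the storage and cache layers, and all names are illustrative:

```python
def write(key, value, cache, storage):
    """Persist the data first, then cache it.  The caching step is shown
    synchronously here; the text also allows it to run asynchronously."""
    storage[key] = value   # persistent copy in the data storage layer
    cache[key] = value     # stand-in for the tag-adapted first target node

def read(key, cache, storage):
    """Read preferentially from the cache layer; on a miss, fall back to
    the persistent data storage layer."""
    value = cache.get(key)
    if value is None:
        value = storage.get(key)
    return value

cache, storage = {}, {}
write("k1", "v1", cache, storage)
```

Because every write persists before caching, a cache miss (eviction, node loss) is always recoverable from the storage layer.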
In the data sharing system of the embodiment of the invention, user data processing requests are received uniformly through the user interaction layer, so users do not interact directly with the underlying data storage and need not understand its storage logic; they only need to write a data write or read command. The cache nodes in the multi-level distributed cache layer are tagged in multiple dimensions, each cache node being given at least one tag, and a flexible caching policy is implemented by dynamically combining tags, so that data is cached in the most suitable cache node; this enables fast reading and analysis of the data and improves data sharing efficiency and the use value of the data. The data sharing system of the embodiment of the invention can mount other file systems in the data storage layer, thereby exchanging data efficiently with customized tags without changing the existing storage layout, and managing everything through a unified interface. In addition, a user does not need to write code for data sharing: the most common read and write commands suffice, so the details are hidden from the user and the experience is better.
In an alternative embodiment, the data sharing system of the embodiment of the present invention may be implemented based on the open-source software Alluxio. Alluxio is an open-source data orchestration and storage system for hybrid cloud environments. Mounting at least one second file system under the global directory of the first file system can be achieved with Alluxio's mount command, for example: alluxio fs mount /test s3a://aaa/aa. Here, alluxio is the command name (this is the name when the data sharing system is implemented with Alluxio; with a different implementation, the command name can be chosen arbitrarily and declared in the program), fs and mount are command parameters, /test is the mount point, i.e., a directory, and s3a://aaa/aa is the URL (Uniform Resource Locator) of the file system to be mounted.
Fig. 5 is a schematic structural diagram of a data sharing system 500 according to another embodiment of the present invention, and as shown in fig. 5, the data sharing system 500 includes:
the user interaction layer 501 is configured to receive a data processing request, where the data processing request includes a data write request, and the data write request includes data to be written;
a data storage layer 502, configured to persistently store the data to be written according to the data writing request;
a multi-level distributed cache layer having a plurality of cache nodes, each of the cache nodes having at least one tag; the multi-level distributed cache layer is configured to perform tag adaptation on the data to be written, so as to determine a first target cache node from the plurality of cache nodes, and to cache the data to be written to the first target cache node; and
And the metadata service layer 503 is configured to record a storage path and a cache path of the data to be written.
In this embodiment, the storage locations of the data (the storage path and the cache path) are managed uniformly by the metadata service layer, which facilitates subsequent data maintenance.
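The metadata service layer's bookkeeping can be sketched as a simple record per piece of data. The record shape, paths, and function names are assumptions for illustration:

```python
# Hypothetical record kept by the metadata service layer: for each piece of
# written data it tracks the persistent location (storage path) and the
# cached location (cache path), so both can be found through one lookup.
metadata = {}

def record(data_id, storage_path, cache_path):
    metadata[data_id] = {"storage_path": storage_path, "cache_path": cache_path}

def locate(data_id):
    """Unified lookup used for subsequent data maintenance."""
    return metadata.get(data_id)

record("d1", "hdfs://namenode:8020/share/d1", "/cache/node-b/level0/d1")
```

With both paths in one place, maintenance tasks (invalidating a stale cache entry, migrating the persistent copy) need only consult the metadata service layer.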
Fig. 6 is a schematic diagram of a main flow of a data sharing method according to an embodiment of the present invention, as shown in fig. 6, the method includes:
step S601: receiving a data processing request, wherein the data processing request comprises a data writing request and a data reading request, and the data writing request comprises data to be written;
step S602: according to the data writing request, persistently storing the data to be written;
step S603: performing tag adaptation on the data to be written, so as to determine a first target cache node from a plurality of cache nodes, and caching the data to be written to the first target cache node, wherein each cache node of the plurality of cache nodes has at least one tag;
step S604: and determining a second target cache node according to the data reading request, and reading target data from the second target cache node.
The data sharing method of the embodiment of the invention is applied to the data sharing system described above. During a write operation, a cache address can be individually specified through different tag combinations, for example geographic location "city A" plus cache speed "high"; nearly any caching strategy can be achieved through custom tags. During a read operation, the cache is consulted first.
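The tag-combination write above can be sketched as a conjunctive match, where a node qualifies only if it carries every tag in the combination. Function and node names are illustrative:

```python
def pick_node(required_tags, nodes):
    """Return the first cache node carrying *all* required tags, i.e. a
    cache address individually specified by a tag combination."""
    for name, tags in nodes.items():
        if set(required_tags) <= tags:  # subset test: every tag must match
            return name
    return None  # no node satisfies the combination

nodes = {
    "node-1": {"city-a", "high-speed"},
    "node-2": {"city-a", "low-speed"},
}
```

Requiring all tags (rather than the best partial match) is one way to realize an "individually specified" cache address; a real system might fall back to a partial match when no node satisfies the full combination.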
In an optional embodiment, before the data processing request is received, the method further comprises: receiving tag configuration information, and determining at least one tag of a cache node according to the tag configuration information. The tag configuration information includes one or more of: the physical location, machine room location, read/write speed, hardware configuration information, data source type, data purpose, and remaining cache space of the cache node.
As a specific example: if the tag of a cache node is determined by physical location, the node's geographic location, such as Beijing or Shanghai, may be used as the tag. If the tag is determined by machine room location, the location or identifier of the machine room housing the node may be used, for example "Beijing machine room 1" or "Beijing machine room 0001". If the tag is determined by read/write speed, different speeds can be mapped to levels such as high, medium, and low, and the tag is then "high-speed read/write", "medium-speed read/write", or "low-speed read/write". If the tag is determined by hardware configuration information, the node's system type, total storage space, and the like may be used. If the tag is determined by data source type, the data source or the name of the business or project the data belongs to is used. If the tag is determined by data purpose, the purpose can be used directly, such as real-time analysis, offline computation, item recommendation, or user profiling. If the tag is determined by remaining cache space, the size of the node's current remaining space may be used directly, or size ranges may be defined in advance: for example, remaining space above 5 TB is tagged "sufficient", between 2 TB and 5 TB "moderate", and below 2 TB "insufficient".
It should be noted that the remaining cache space is updated automatically: each time data is written to a cache node, the node's remaining cache space shrinks and the corresponding tag is updated.
In this embodiment, the tags may be flexibly and dynamically combined, and the data is cached to a more suitable cache node through the tag designation, so as to obtain better performance.
According to the data sharing method of the embodiment of the invention, data is shared efficiently through customized, personalized tags, and a flexible caching strategy is achieved by dynamically combining tags, so that data is cached to the most suitable cache node; the data can thus be read and analyzed quickly, and data sharing efficiency and the use value of the data are improved.
Fig. 7 illustrates an exemplary system architecture 700 to which the data sharing method or data sharing system of embodiments of the present invention may be applied.
As shown in FIG. 7, the system architecture 700 may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 serves as a medium for providing communication links between the terminal devices 701, 702, 703 and the server 705. The network 704 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 701, 702, 703 to interact with the server 705 over the network 704 to receive or send messages and the like. Various communication client applications, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and social platform software, may be installed on the terminal devices 701, 702, and 703.
The terminal devices 701, 702, 703 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 705 may be a server that provides various services, such as a background management server that supports shopping websites browsed by users using the terminal devices 701, 702, and 703. The background management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (e.g., target push information and product information) to the terminal device.
It should be noted that the data sharing method provided by the embodiment of the present invention is generally executed by the server 705, and accordingly, the data sharing system is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks, and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 8, shown is a block diagram of a computer system 800 suitable for implementing a terminal device of an embodiment of the present invention. The terminal device shown in FIG. 8 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU)801 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the system 800 are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read therefrom is installed into the storage section 808 as necessary.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. When executed by the central processing unit (CPU) 801, the computer program performs the above-described functions defined in the system of the present invention.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or by hardware. The described modules may also be provided in a processor, which may be described as: a processor comprising a sending module, an obtaining module, a determining module, and a first processing module. The names of these modules do not, in some cases, constitute a limitation on the modules themselves; for example, the sending module may also be described as a "module that sends a picture acquisition request to a connected server".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to:
receive a data processing request, wherein the data processing request comprises a data writing request and a data reading request, and the data writing request comprises data to be written;
persistently store the data to be written according to the data writing request;
perform tag adaptation on the data to be written, so as to determine a first target cache node from a plurality of cache nodes and cache the data to be written to the first target cache node, wherein each cache node of the plurality of cache nodes has at least one tag.
According to the technical solution of the embodiments of the present invention, when a data write operation is performed, the cache address can be specified in a personalized manner through different tag combinations, for example, geographic location Beijing combined with high-speed caching, and nearly any caching strategy can be achieved through custom tags. When a read operation is performed, data is preferentially fetched from the cache.
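The write and read paths summarized above can be sketched as follows, assuming a dict-backed persistent store and tag-matched cache nodes; every name here is hypothetical, and the real system's data storage layer and cache layer would of course be distributed services rather than in-process dicts.

```python
# Minimal sketch of the write path (persist, then cache on a tag match)
# and the read path (prefer the cache, fall back to persistent storage).
persistent_store = {}                       # stand-in for the data storage layer
cache_nodes = [
    {"tags": {"geo:beijing", "speed:high"}, "data": {}},
    {"tags": {"geo:shanghai", "speed:low"}, "data": {}},
]

def write(key, value, tags):
    persistent_store[key] = value           # persist the data first
    for node in cache_nodes:                # then cache it on the first tag match
        if set(tags) <= node["tags"]:
            node["data"][key] = value
            return

def read(key):
    for node in cache_nodes:                # read from the cache preferentially...
        if key in node["data"]:
            return node["data"][key]
    return persistent_store.get(key)        # ...falling back to persistent storage

write("order:1", b"payload", {"geo:beijing", "speed:high"})
assert read("order:1") == b"payload"
```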
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (14)

1. A data sharing system, comprising:
a user interaction layer for receiving a data processing request, wherein the data processing request comprises a data writing request, and the data writing request comprises data to be written;
a data storage layer for persistently storing the data to be written according to the data writing request; and
a multi-level distributed cache layer having a plurality of cache nodes, each of the cache nodes having at least one tag; the multi-level distributed cache layer is used for performing label adaptation on the data to be written, so as to determine a first target cache node from the plurality of cache nodes, and cache the data to be written to the first target cache node.
2. The system of claim 1, wherein the multi-level distributed cache layer further comprises a tag configuration module configured to:
receiving tag configuration information;
and determining at least one label of the cache node according to the label configuration information.
3. The system of claim 2, wherein the tag configuration information comprises one or more of: the physical position, the machine room position, the read-write speed, the hardware configuration information, the data source type, the data use and the residual cache space of the cache node.
4. The system of claim 1, wherein the multi-level distributed cache layer further comprises a hierarchy management unit configured to perform hierarchy partitioning on cache resources of the cache nodes.
5. The system of claim 4, wherein each of the cache nodes has the same number of levels.
6. The system of claim 1, wherein the data storage layer further comprises a first file system and at least one second file system mounted under a global directory of the first file system.
7. The system of any of claims 1-6, wherein the data processing request further comprises a data read request;
and the multi-level distributed cache layer is further used for determining a second target cache node according to the data reading request and reading target data from the second target cache node.
8. The system of claim 7, further comprising a metadata service layer for recording the storage path and the cache path of the data to be written.
9. A data sharing method applied to the data sharing system according to any one of claims 1 to 8, the data sharing method comprising:
receiving a data processing request, wherein the data processing request comprises a data writing request, and the data writing request comprises data to be written;
according to the data writing request, persistently storing the data to be written;
performing label adaptation on the data to be written so as to determine a first target cache node from a plurality of cache nodes and cache the data to be written to the first target cache node; wherein each cache node of the plurality of cache nodes has at least one tag.
10. The method of claim 9, wherein prior to receiving a data processing request, the method further comprises:
receiving label configuration information, and determining at least one label of the cache node according to the label configuration information.
11. The method of claim 10, wherein the tag configuration information comprises one or more of: the physical position of the cache node, the machine room position, the read-write speed, the hardware configuration information, the file system type, the data source type, the data purpose and the residual cache space.
12. The method of claim 9, wherein the data processing request further comprises a data read request;
the method further comprises the following steps: and determining a second target cache node according to the data reading request, and reading target data from the second target cache node.
13. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 9-12.
14. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 9-12.
CN202210004214.0A 2022-01-04 2022-01-04 Data sharing system and method Pending CN114356873A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210004214.0A CN114356873A (en) 2022-01-04 2022-01-04 Data sharing system and method


Publications (1)

Publication Number Publication Date
CN114356873A true CN114356873A (en) 2022-04-15

Family

ID=81107008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210004214.0A Pending CN114356873A (en) 2022-01-04 2022-01-04 Data sharing system and method

Country Status (1)

Country Link
CN (1) CN114356873A (en)

Similar Documents

Publication Publication Date Title
US11200044B2 (en) Providing access to a hybrid application offline
US11711420B2 (en) Automated management of resource attributes across network-based services
CN109254733B (en) Method, device and system for storing data
CN109976667B (en) Mirror image management method, device and system
US8799409B2 (en) Server side data cache system
CN109189841B (en) Multi-data source access method and system
CN107103011B (en) Method and device for realizing terminal data search
CN110765187A (en) Data source route management method and device
US20170153909A1 (en) Methods and Devices for Acquiring Data Using Virtual Machine and Host Machine
CN110837423A (en) Method and device for automatically acquiring data of guided transport vehicle
CN112783887A (en) Data processing method and device based on data warehouse
CN110110184B (en) Information inquiry method, system, computer system and storage medium
CN111753226A (en) Page loading method and device
CN113535673B (en) Method and device for generating configuration file and data processing
CN114356873A (en) Data sharing system and method
CN112711572B (en) Online capacity expansion method and device suitable for database and table division
US10712959B2 (en) Method, device and computer program product for storing data
CN113051244A (en) Data access method and device, and data acquisition method and device
US20220191104A1 (en) Access management for a multi-endpoint data store
CN110851192A (en) Method and device for responding to configuration of degraded switch
WO2018217406A1 (en) Providing instant preview of cloud based file
CN109446183B (en) Global anti-duplication method and device
CN113781088A (en) User tag processing method, device and system
CN117421380A (en) Lake bin metadata label creation method and lake bin metadata label query method
CN113760860A (en) Data reading method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination