CN110334069B - Data sharing method among multiple processes and related device - Google Patents

Data sharing method among multiple processes and related device

Info

Publication number
CN110334069B
CN110334069B (application CN201910620883.9A)
Authority
CN
China
Prior art keywords
cache
data
loading
application
file
Prior art date
Legal status
Active
Application number
CN201910620883.9A
Other languages
Chinese (zh)
Other versions
CN110334069A (en)
Inventor
王海
段锴
崔华
Current Assignee
China Travelsky Technology Co Ltd
Original Assignee
China Travelsky Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Travelsky Technology Co Ltd filed Critical China Travelsky Technology Co Ltd
Priority to CN201910620883.9A priority Critical patent/CN110334069B/en
Publication of CN110334069A publication Critical patent/CN110334069A/en
Application granted granted Critical
Publication of CN110334069B publication Critical patent/CN110334069B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/172Caching, prefetching or hoarding of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/176Support for shared access to files; File sharing support
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes

Abstract

The invention provides a data sharing method among multiple processes and a related device. Each application on the same host machine uses a memory-based shared cache for data access. On one hand, a process accesses the shared cache data as conveniently and quickly as it accesses stack or heap memory, improving the efficiency of data exchange among processes; on the other hand, only one copy of the shared cache data exists in the physical memory of the host, effectively reducing the overhead of the system protocol stack and improving overall system performance. Furthermore, through the cache loading master control component, the cache data shared by all host machines in the distributed computing system takes effect uniformly only after it has been loaded successfully on every host, ensuring the consistency of the cache data across all computing nodes.

Description

Data sharing method among multiple processes and related device
Technical Field
The present invention relates to the field of distributed computing systems, and more particularly, to a method, an apparatus, a system, and a readable storage medium for sharing data among multiple processes.
Background
Before the advent of container technologies such as Docker, it was very difficult to deploy multiple homogeneous or heterogeneous applications on the same computer: the underlying libraries that different applications depend on may conflict, and different applications may contend for computing resources such as the Central Processing Unit (CPU) and memory. By deploying multiple applications on a host based on the resource-isolation characteristics of containers, idle computing resources throughout the host are put to use, and the utilization rate of computing resources is improved by 5-10 times. Meanwhile, when container technology is combined with a microservice architecture, a single application program is divided into smaller microservices, and a distributed computing system is naturally constructed even on a single host.
In a distributed computing system, in order to speed up access to server-side data, a process in a computing node typically introduces a cache to manage frequently accessed data. Meanwhile, in a distributed scenario, the data at the server may change at any time; to ensure the validity of the cache maintained by each process, the cache managed by the process needs to be updated from time to time. However, whether each compute node periodically pulls data from the server or the server occasionally pushes data to each node's cache, the update imposes overhead on the system.
For the multi-thread model, data sharing can be implemented with a map because the threads share an address space; for sharing data between processes, the shared-memory mechanisms provided by the operating system, such as SysV SHM, are relied upon. For read-only data, a lock-free mechanism is adopted to further improve performance, with cache updates implemented by A/B block switching. However, because each running Docker instance has its own namespaces, the SysV SHM mechanism can no longer be used to share memory between processes.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, a system, and a readable storage medium for sharing data among multiple processes, so as to achieve the purpose of high-speed communication among multiple processes.
In order to achieve the above object, the following solutions are proposed:
a method for sharing data among multiple processes comprises the following steps:
mapping the cache directory in each application container on the same host machine to the same folder of the host machine;
generating a cache file from the application cache data loaded by each application container on the same host machine, and storing the cache file under the folder;
mapping the cache files under the folders by the application loaded by each application container in an MMAP (memory mapped file) sharing mode;
after the application carried by the application container updates the cache data, updating the cache file under the folder and updating the cache file path pointed by the file soft link;
before accessing the cache file under the folder each time, the application borne by each application container detects whether the cache file path pointed by the file soft link is changed, if so, the application in each application container releases the original mapping to the cache file, and maps the cache file after the cache file path is changed in an MMAP sharing mode.
Optionally, after the application loaded by the application container updates the cache data, the step of updating the cache file in the folder and updating the cache file path pointed by the file soft link includes:
after the application carried by the application container updates the cache data, sending a loading request instruction to a cache loading master control component;
after receiving a loading instruction sent by the cache loading master control component, updating a cache file according to updated cache data;
after the cache file is updated successfully, a loading success instruction is sent to the cache loading master control component;
and after an effective instruction sent by the cache loading master control component is received, updating a cache file path pointed by the file soft link, wherein the effective instruction is sent by the cache loading master control component after the successful loading instructions of all the loading components are received.
An apparatus for sharing data among multiple processes, comprising:
the shared cache file unit is used for mapping the cache directory in each application container on the same host machine to the same folder of the host machine;
the cache loading component is used for generating cache files from the data cached by the application loaded by each application container on the same host machine and storing the cache files under the folder;
the mapping unit is used for mapping the cache files in the folder, by the application loaded by each application container, in an MMAP (memory-mapped file) sharing mode;
the cache loading component is further configured to update the cache file in the folder and update the cache file path pointed by the file soft link after the application loaded by the application container updates the cache data;
and the cache sharing updating unit is used for detecting, before the application borne by each application container accesses the cache file under the folder, whether the cache file path pointed to by the file soft link has changed; if so, the application in each application container releases its original mapping of the cache file and maps the cache file at the changed path in an MMAP (memory-mapped file) sharing mode.
Optionally, the cache loading component specifically includes:
the request subunit is used for sending a loading request instruction to the cache loading master control component after the application carried by the application container updates the cache data;
the updating subunit is used for updating the cache file according to the updated cache data after receiving the loading instruction sent by the cache loading master control component;
the feedback subunit is used for sending a loading success instruction to the cache loading master control component after the cache file is updated successfully;
and the effective sub-unit is used for updating a cache file path pointed by the file soft link after receiving an effective instruction sent by the cache loading master control component, wherein the effective instruction is sent by the cache loading master control component after receiving successful loading instructions of all loading components.
A data sharing system among multiple processes comprises N data sharing devices and a cache load master control component, wherein each data sharing device is arranged in a host, and N is an integer not less than 2.
A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the data sharing method as described above.
A data sharing device, comprising: a memory and a processor;
the memory for storing a computer program;
the processor is configured to execute the computer program to implement the steps of the data sharing method.
Compared with the prior art, the technical scheme of the invention has the following advantages:
according to the data sharing method and the related device among the multiple processes, the applications borne by each application container on the same host machine are mapped to the cache file in an MMAP sharing mode, and memory-based shared caching is achieved. On one hand, processes access shared cache data as conveniently and quickly as accessing stack memories or heap memories, and the data exchange efficiency among the processes is improved; on the other hand, only one share of the shared cache data is in the physical memory of the host, thereby effectively reducing the overhead of a system protocol stack and improving the overall performance of the system.
Furthermore, through the cache loading master control component, after the cache data shared by all the host machines in the distributed computing system is loaded successfully, the cache data shared by all the host machines takes effect uniformly, and the consistency of the cache data of all the computing nodes is ensured.
Of course, it is not necessary for any product in which the invention is practiced to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic logical structure diagram of a data sharing apparatus between multiple processes according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a data composition of a cache file according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the indexing scheme of the Hash + Linked List;
FIG. 4 is a schematic diagram of the indexing scheme of the 3-dimensional matrix + index linked list;
FIG. 5 is a schematic diagram of the indexing scheme of the 3-dimensional matrix + n-ary tree + linked list;
fig. 6 is a schematic diagram of a directory structure of a cache file according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a logical structure of a system for sharing data among multiple processes according to an embodiment of the present invention;
fig. 8 is a flowchart illustrating a method for sharing data among multiple processes according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a data sharing device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the data sharing apparatus between multiple processes according to this embodiment is deployed on each host 31. The data sharing apparatus includes: a shared cache file unit 11, a cache loading component 12, a mapping unit (not shown) and a cache shared update unit (not shown).
The shared cache file unit 11 is configured to map the cache directory in each application container on the same host 31 to the same folder F of the host 31. Specifically, the cache directory in each application container may be mapped to the same folder of the host through PaaS (Platform as a Service) technology, such as K8s, so that the application carried by each application container can access the same files. That is, each application container on the same host mounts the same file path, so that the application borne by each application container can access all files under that path.
The cache loading component 12 is configured to generate a cache file from the data cached by the application carried by each application container on the same host, and store the cache file in the folder F. Specifically, a cache file comprising control information, data and an index is generated according to the configured index algorithm of the cache file. See fig. 2 for the data composition of each cache file: TableHead (control information) records the total number of records and the index offset; Record (data) is an application-defined data structure, typically mapped from a database table definition; Index holds the index data created for the records.
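The three-part layout (TableHead, then records, then index) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's actual binary format: the field widths, the example record structure (airport code, flight number, seat count) and the offset-array index are all hypothetical; the patent only states that the head stores the record count and the index offset.

```python
import struct

# Hypothetical layout: fixed-size TableHead, then records, then index.
HEAD_FMT = "<II"          # record_count, index_offset
RECORD_FMT = "<3s5sI"     # e.g. airport code, flight number, seat count

def build_cache_file(records):
    head_size = struct.calcsize(HEAD_FMT)
    rec_size = struct.calcsize(RECORD_FMT)
    body = b"".join(struct.pack(RECORD_FMT, *r) for r in records)
    index_offset = head_size + len(body)          # index follows the data
    head = struct.pack(HEAD_FMT, len(records), index_offset)
    # Index region: here simply the offset of every record, as a placeholder
    # for whatever index algorithm the table is configured with.
    index = b"".join(struct.pack("<I", head_size + i * rec_size)
                     for i in range(len(records)))
    return head + body + index

blob = build_cache_file([(b"PEK", b"CA123", 180), (b"SHA", b"MU456", 200)])
count, idx_off = struct.unpack_from(HEAD_FMT, blob, 0)
```

Because the head carries the index offset, a reader that maps the file can jump straight to the index region without scanning the records.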
The index serves to locate data efficiently; according to the rules and characteristics with which the application service accesses the data, indexes can be created with different index algorithms over different dimensions of the data. The Hash + linked-list index mode is suitable for all tables that need an index. Referring to fig. 3, the Hash + linked-list index mode specifically creates a Hash bucket array in which the number of buckets is a very large prime; each element in the array, called an entry, points to a linked list. The application concatenates one or more fields of the query condition into a string, maps it to an entry using the Hash algorithm, and then inserts the data pointer into that entry's linked list.
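The Hash + linked-list index can be sketched as follows. The bucket count, hash function and field values are illustrative (the patent only requires a large-prime bucket count and chaining of data pointers); "data pointers" are modeled as record offsets.

```python
# Minimal sketch of the Hash + linked-list index described above.
BUCKETS = 97  # in a real cache this would be a very large prime

class HashIndex:
    def __init__(self):
        self.entries = [None] * BUCKETS     # each entry heads a linked list

    def _hash(self, *fields):
        key = "|".join(fields)              # concatenate the query fields
        h = 0
        for ch in key:
            h = (h * 131 + ord(ch)) % BUCKETS
        return h

    def insert(self, offset, *fields):
        e = self._hash(*fields)
        self.entries[e] = (offset, self.entries[e])  # push onto the chain

    def lookup(self, *fields):
        node = self.entries[self._hash(*fields)]
        while node:
            yield node[0]
            node = node[1]

idx = HashIndex()
idx.insert(32, "PEK", "CA123")
idx.insert(64, "PEK", "CA123")   # collisions chain in insertion order
```

In the actual cache file the chain nodes would live inside the mapped file and the "next" pointers would be offsets, since virtual addresses differ between processes.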
The 3-dimensional matrix + linked-list index mode is suitable for creating an index over a 3-character-code field. Three-character codes are the specific codes representing different airports as established by the International Air Transport Association. Referring to fig. 4, the 3-dimensional matrix + linked-list index mode builds a 3-character-code matrix from the 3-character-code field in the query condition, and each point in the matrix points to a linked list of the data satisfying that code. The 3-character-code matrix is a 36 x 36 x 36 hash matrix; each point in the matrix, called an entry, points to a singly linked list, and each node in the list records a pointer to a piece of data.
The 3-dimensional matrix + linked-list index mode requires that N in the application's key[N] must be 3, and that the first three characters must be letters or digits. The index algorithm hashes the three characters key[0], key[1] and key[2]; the hash algorithm guarantees that the hash value falls on some point of the 3-character-code matrix, after which the data pointer is inserted into that point's linked list. All pointers mentioned above are stored as offsets.
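A minimal sketch of the 3-character-code matrix index, assuming the 36 legal characters are the uppercase letters and digits (so each character maps to a coordinate in 0..35 and every valid code is guaranteed a cell). Codes and offsets are illustrative.

```python
# Each of the three characters selects one axis of a 36x36x36 matrix;
# each cell heads a linked list of record offsets.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def char_index(c):
    return ALPHABET.index(c)    # 0..35; raises if not a letter or digit

class CodeMatrixIndex:
    def __init__(self):
        # 36x36x36 matrix of linked-list heads (None = empty list)
        self.matrix = [[[None] * 36 for _ in range(36)] for _ in range(36)]

    def insert(self, code, offset):
        i, j, k = (char_index(c) for c in code[:3])
        self.matrix[i][j][k] = (offset, self.matrix[i][j][k])

    def lookup(self, code):
        i, j, k = (char_index(c) for c in code[:3])
        node, out = self.matrix[i][j][k], []
        while node:
            out.append(node[0])
            node = node[1]
        return out

m = CodeMatrixIndex()
m.insert("PEK", 32)
m.insert("PEK", 64)
```

Unlike the general hash-bucket index, no collision between distinct codes is possible: the code itself is the matrix coordinate.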
The 3-dimensional matrix + n-ary tree + linked-list index mode is suitable for creating an index over two fields, where one field meets the requirements of the 3-character-code matrix and the other field consists of three uppercase letters. Referring to fig. 5, this index mode creates a three-dimensional matrix (i.e. a 3-character-code matrix) from the field meeting the matrix requirements and determines the entry corresponding to each record; then, from the characters of the other field, a three-level 26-ary tree is created, with one of the three characters at each of levels 1, 2 and 3, and the leaf node of each tree is a linked list.
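The second-level tree index can be sketched as follows. Here the three-level 26-ary tree is approximated by nested dictionaries keyed per character, which has the same shape (one character consumed per level, a list at the leaf); the field values and offsets are illustrative, and a real cache file would use fixed 26-slot child arrays and offset pointers instead.

```python
# One character of the three-uppercase-letter field is consumed at each
# of levels 1, 2 and 3; the leaf holds the linked list of record offsets.
class TreeIndex:
    def __init__(self):
        self.root = {}

    def insert(self, letters, offset):
        node = self.root
        for ch in letters[:2]:                # levels 1 and 2
            node = node.setdefault(ch, {})
        node.setdefault(letters[2], []).append(offset)  # level 3 -> leaf

    def lookup(self, letters):
        node = self.root
        for ch in letters[:2]:
            node = node.get(ch, {})
        return node.get(letters[2], [])

t = TreeIndex()
t.insert("ECO", 32)
t.insert("ECO", 64)
```

In the full index, one such tree hangs off each occupied entry of the 3-character-code matrix, so a lookup is matrix cell first, then three tree levels, then a list walk.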
The cache loading component 12 is further configured to update the cache file in the folder F and update the cache file path pointed to by the file soft link after the application carried by the application container updates the cache data. The directory structure of the cache files is shown in fig. 6. The file soft link is a Linux symbolic link. After the application updates the cache data, the soft link is switched from the cache file path holding the old data to the cache file path holding the new data; updating the path pointed to by the soft link is what brings the updated cache data into effect. The cache loading component 12 must load all data tables the first time; in actual operation, often only some data tables change, so only the changed tables need to be reloaded, while for the unchanged tables it suffices to create hard links instead of copying the data files.
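The generation switch and the hard-link reuse can be sketched as follows. File and directory names are illustrative; the key point is that on POSIX systems `os.rename` over an existing symlink replaces it atomically, so readers always resolve either the old generation or the new one, never a half-updated state.

```python
import os, tempfile

# New cache files go into a fresh generation directory; unchanged tables
# are hard-linked from the previous generation; then the soft link is
# switched atomically via rename.
root = tempfile.mkdtemp()
gen1 = os.path.join(root, "gen1"); os.mkdir(gen1)
with open(os.path.join(gen1, "airports.dat"), "wb") as f:
    f.write(b"old")

gen2 = os.path.join(root, "gen2"); os.mkdir(gen2)
with open(os.path.join(gen2, "flights.dat"), "wb") as f:
    f.write(b"new")                      # changed table: rewritten
os.link(os.path.join(gen1, "airports.dat"),
        os.path.join(gen2, "airports.dat"))  # unchanged table: hard link

current = os.path.join(root, "current")
os.symlink(gen1, current)
# Switch: create a temporary link to gen2, then rename over "current".
tmp = current + ".tmp"
os.symlink(gen2, tmp)
os.rename(tmp, current)                  # atomic replacement
```

The hard link shares the same inode as the gen1 file, so the unchanged table costs no extra disk space and no copy time.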
And the mapping unit is used for mapping, for the application borne by each application container, the cache files in the folder in an MMAP sharing mode. mmap is a Linux system call that maps a file into memory; the invention maps the cache file with the MAP_SHARED flag, so that multiple applications share one piece of memory.
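MAP_SHARED semantics can be demonstrated with Python's `mmap` module (a thin wrapper over the system call): two independent mappings of the same file are backed by the same physical pages, so a write through one mapping is immediately visible through the other, just as it is between two processes mapping the shared cache file.

```python
import mmap, os, tempfile

# Create a one-page file and map it twice, MAP_SHARED.
path = os.path.join(tempfile.mkdtemp(), "cache.dat")
with open(path, "wb") as f:
    f.write(b"\x00" * mmap.PAGESIZE)

fd1 = os.open(path, os.O_RDWR)
fd2 = os.open(path, os.O_RDWR)
m1 = mmap.mmap(fd1, mmap.PAGESIZE, mmap.MAP_SHARED,
               mmap.PROT_READ | mmap.PROT_WRITE)
m2 = mmap.mmap(fd2, mmap.PAGESIZE, mmap.MAP_SHARED, mmap.PROT_READ)

m1[0:5] = b"hello"      # written through one mapping ...
seen = bytes(m2[0:5])   # ... is visible through the other
```

Had MAP_PRIVATE been used instead, each mapping would get its own copy-on-write pages and the write would stay invisible to other mappings.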
And the cache sharing updating unit is used for detecting, before the application borne by each application container accesses the cache file under the folder, whether the cache file path pointed to by the file soft link has changed; if so, the application borne by each application container releases its original mapping of the cache file and maps the cache file at the changed path in an MMAP sharing mode. munmap is a Linux system call that releases a mapping; because the applications share one piece of memory, the operating system can truly reclaim that memory only after the application borne by each application container has released its mapping.
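The reader-side check-and-remap cycle can be sketched as follows. The `CacheReader` class and file names are hypothetical, but the sequence — `readlink`, compare against the last-mapped target, release the old mapping, map the new file MAP_SHARED — follows the mechanism described above (in Python, `mmap.close()` performs the munmap).

```python
import mmap, os, tempfile

class CacheReader:
    def __init__(self, link):
        self.link = link
        self.target = None
        self.m = None

    def view(self):
        target = os.readlink(self.link)
        if target != self.target:          # soft-link path changed: remap
            if self.m is not None:
                self.m.close()             # releases the old mapping
            fd = os.open(self.link, os.O_RDONLY)
            try:
                self.m = mmap.mmap(fd, 0, mmap.MAP_SHARED, mmap.PROT_READ)
            finally:
                os.close(fd)               # the mapping survives the close
            self.target = target
        return self.m

root = tempfile.mkdtemp()
for name, data in (("gen1.dat", b"old data"), ("gen2.dat", b"new data")):
    with open(os.path.join(root, name), "wb") as f:
        f.write(data)
link = os.path.join(root, "current")
os.symlink(os.path.join(root, "gen1.dat"), link)

r = CacheReader(link)
first = bytes(r.view()[:8])
tmp = link + ".tmp"                        # loader switches the generation
os.symlink(os.path.join(root, "gen2.dat"), tmp)
os.rename(tmp, link)
second = bytes(r.view()[:8])
```

Because every reader checks the link before each access, old mappings are dropped promptly, letting the kernel reclaim the previous generation's pages once the last mapping is gone.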
The application microservice component is the smallest unit of work execution within the data access architecture. Each job performed by the services an application provides can be abstracted as an application microservice component. The host 31 carries a plurality of application microservice components through container technology, and also provides a data storage and file sharing mechanism, in the form of data volumes, for the application microservice components running on it.
In order to keep the cache files of a plurality of hosts consistent, i.e. to share the same file data, the invention provides a data sharing system among multiple processes. As shown in fig. 7, the system comprises N data sharing devices as shown in fig. 1 and a cache load master control component 21, where N is an integer not less than 2. Each data sharing device is provided in one host 31.
The cache load master control component 21 is responsible for monitoring and managing the cache load components 12 in all hosts 31. In a distributed environment, a plurality of hosts 31 are online at the same time, and each host 31 generates its own cache file; the cache load master control component 21 is therefore used to maintain the consistency of the cache data on all hosts, which it does in two stages. In the first stage, the cache load master control component 21 obtains the IPs of the cache load components 12 on all hosts 31 through K8s, sends a load instruction to each cache load component 12, and confirms that every cache load component 12 has loaded the data successfully. In the second stage, the cache load master control component 21 sends an effective instruction to each cache load component 12, notifying it to bring the cache data into effect. Since the application cache data and the updated cache data on each host 31 are the same, having the cache data take effect only after every cache loading component 12 has loaded successfully, coordinated by the cache load master control component 21, ensures the consistency of the cache data on all hosts.
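The two-stage protocol can be sketched as follows. This models the load components as in-process objects purely for illustration; in the patent they live on different hosts and are discovered and contacted over the network via K8s, and a failed load would be retried rather than simply aborted.

```python
# Stage 1: tell every host's load component to load, and wait until all
# report success. Stage 2: tell all of them to take the new data into
# effect (switch the file soft link) uniformly.
class LoadComponent:
    def __init__(self):
        self.loaded = False
        self.effective = False

    def load(self):
        self.loaded = True       # update cache file from the new data
        return True              # "load success" instruction back

    def take_effect(self):
        self.effective = True    # switch the file soft link

class MasterControl:
    def __init__(self, components):
        self.components = components

    def distribute(self):
        # Stage 1: load everywhere, confirm every success.
        if not all(c.load() for c in self.components):
            return False         # some host failed: take effect nowhere
        # Stage 2: take effect everywhere, uniformly.
        for c in self.components:
            c.take_effect()
        return True

comps = [LoadComponent() for _ in range(3)]
ok = MasterControl(comps).distribute()
```

Deferring the soft-link switch to stage 2 is what prevents a window in which some hosts serve new data while others still serve old data.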
The cache load component 12 includes a request subunit, an update subunit, a feedback subunit, and an effective subunit.
And the request subunit is configured to send a load request instruction to the cache load general control component 21 after the application carried by the application container updates the cache data.
And the updating subunit is configured to update the cache file according to the updated cache data after receiving the load instruction sent by the cache load master control component 21.
And the feedback subunit is configured to send a successful-loading instruction to the cache load master control component 21 after the cache file is successfully updated.
And the effective sub-unit is used for updating the cache file path pointed by the file soft link after receiving an effective instruction sent by the cache loading master control component, wherein the effective instruction is sent by the cache loading master control component 21 after receiving successful loading instructions of all loading components.
Referring to fig. 8, the present invention provides a method for sharing data among multiple processes, which includes the steps of:
s81: and mapping the cache directory in each application container on the same host machine to the same folder F of the host machine.
S82: and generating a cache file from the application cache data borne by each application container on the same host machine, and storing the cache file in a folder F.
S83: and mapping the cache file in the folder F by the application carried by each application container in an MMAP sharing mode.
S84: and after the application carried by the application container updates the cache data, updating the cache file in the folder F, and updating the cache file path pointed by the file soft link.
S85: and the application borne by each application container detects whether the cache file path pointed by the file soft link is changed or not before accessing the cache file in the folder F each time, if so, the application in each application container releases the original mapping of the cache file, and the cache file with the changed cache file path is mapped in an MMAP sharing mode.
In order to implement consistency of the cache files of the respective hosts, after the application loaded by the application container updates the cache data, the step of updating the cache files in the folder and updating the cache file path pointed by the file soft link specifically includes:
after the application carried by the application container updates the cache data, sending a loading request instruction to a cache loading master control component;
after receiving a loading instruction sent by the cache loading master control component, updating a cache file according to updated cache data;
after the cache file is updated successfully, a loading success instruction is sent to the cache loading master control component;
and after an effective instruction sent by the cache loading master control component is received, updating a cache file path pointed by the file soft link, wherein the effective instruction is sent by the cache loading master control component after the successful loading instructions of all the loading components are received.
For simplicity of explanation, the foregoing method embodiment is described as a series of acts; however, those skilled in the art should understand that the present invention is not limited by the order of acts described, as some steps may, in accordance with the invention, occur in other orders or concurrently with other steps.
The data sharing apparatus provided by the embodiment of the invention can be applied to data sharing equipment, i.e. hosts, such as cloud platforms, servers and server clusters. The server can be one or more of a rack server, a blade server, a tower server and a cabinet server. The data sharing equipment in the invention runs the Linux operating system. Fig. 9 is a schematic diagram of a data sharing device according to a preferred embodiment of the present invention. The hardware structure of the data sharing device may include: at least one processor 91, at least one communication interface 92, at least one memory 93 and at least one communication bus 94.
In the embodiment of the present invention, the number of the processor 91, the communication interface 92, the memory 93 and the communication bus 94 is at least one, and the processor 91, the communication interface 92 and the memory 93 complete mutual communication through the communication bus 94.
The processor 91 may be a Central Processing Unit (CPU) in some embodiments.
The communication interface 92 may include a standard wired interface or a wireless interface (e.g., a WI-FI interface), and is commonly used to establish communication connections between the device and other electronic devices or systems.
The memory 93 includes at least one type of readable storage medium. The readable storage medium may be an NVM (non-volatile memory) such as flash memory, a hard disk, a multimedia card or card-type memory, or it may be a high-speed RAM (random access memory).
Wherein the memory 93 stores a computer program and the processor 91 may invoke the computer program stored in the memory 93 for:
mapping the cache directory in each application container on the same host machine to the same folder of the host machine;
generating a cache file from the application cache data loaded by each application container on the same host machine, and storing the cache file under the folder;
mapping the cache files under the folders by the application loaded by each application container in an MMAP (memory-mapped file) sharing mode;
after the application carried by the application container updates the cache data, updating the cache file under the folder and updating the cache file path pointed by the file soft link;
before accessing the cache file under the folder each time, the application borne by each application container detects whether the cache file path pointed by the file soft link is changed, if so, the application in each application container releases the original mapping to the cache file, and maps the cache file after the cache file path is changed in an MMAP sharing mode.
The refinement function and the extension function of the program may be referred to as described above.
Embodiments of the present invention also provide a readable storage medium, where the readable storage medium may store a computer program adapted to be executed by a processor, where the computer program is configured to:
mapping the cache directory in each application container on the same host machine to the same folder of the host machine;
generating a cache file from the application cache data loaded by each application container on the same host machine, and storing the cache file under the folder;
mapping the cache files under the folders by the application loaded by each application container in an MMAP (memory-mapped file) sharing mode;
after the application carried by the application container updates the cache data, updating the cache file under the folder and updating the cache file path pointed by the file soft link;
before accessing the cache file under the folder each time, the application borne by each application container detects whether the cache file path pointed by the file soft link is changed, if so, the application in each application container releases the original mapping to the cache file, and maps the cache file after the cache file path is changed in an MMAP sharing mode.
For the refined and extended functions of the program, refer to the description above.
In this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (5)

1. A method for sharing data among multiple processes is characterized by comprising the following steps:
mapping the cache directory in each application container on the same host machine to the same folder of the host machine;
generating a cache file from the application cache data loaded by each application container on the same host machine, and storing the cache file under the folder;
mapping, by the application loaded in each application container, the cache file under the folder in MMAP (memory-mapped file) shared mode;
after the application carried by the application container updates the cache data, sending a loading request instruction to a cache loading master control component;
after receiving a loading instruction sent by the cache loading master control component, updating the cache file according to the updated cache data;
after the cache file is updated successfully, a loading success instruction is sent to the cache loading master control component;
after an effective instruction sent by the cache loading master control component is received, updating the cache file path pointed to by the file soft link, wherein the effective instruction is sent by the cache loading master control component after the loading success instructions of all loading components are received;
before each access to the cache file under the folder, detecting, by the application carried by each application container, whether the cache file path pointed to by the file soft link has changed; if so, the application in each application container releases its original mapping of the cache file and maps the cache file at the changed path in MMAP (memory-mapped file) shared mode.
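Claim 1 describes an all-or-nothing update: the master control component collects a loading success instruction from every loading component before broadcasting the effective instruction, so no host switches its soft link until all hosts have the new cache file. This two-phase pattern can be sketched as a toy in-process model (class and method names are assumptions; the patent specifies only the instructions exchanged, not an implementation):

```python
class Loader:
    """Toy loading component: load() stands in for writing the new cache
    file; take_effect() stands in for repointing the file soft link."""
    def __init__(self):
        self.loaded_ok = False   # reported back as the loading success instruction
        self.effective = None    # the cache version the soft link currently exposes
        self._pending = None

    def load(self, data):
        self._pending = data
        self.loaded_ok = True

    def take_effect(self):
        self.effective = self._pending

class CacheLoadMaster:
    """Toy cache loading master control component: it broadcasts the
    effective instruction only after every loader reports success."""
    def __init__(self, loaders):
        self.loaders = list(loaders)

    def reload(self, new_data):
        for ld in self.loaders:                # phase 1: instruct every loader to load
            ld.load(new_data)
        if not all(ld.loaded_ok for ld in self.loaders):
            return False                       # a host failed; the old cache stays effective
        for ld in self.loaders:                # phase 2: effective instruction to all
            ld.take_effect()
        return True
```

The design choice mirrors a two-phase commit: readers across hosts never observe a mix of old and new cache versions, because the link switch is deferred until every host holds the new file.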
2. An apparatus for sharing data among multiple processes, comprising:
the shared cache file unit is used for mapping the cache directory in each application container on the same host machine to the same folder of the host machine;
the cache loading component is used for generating cache files from the data cached by the application loaded by each application container on the same host machine and storing the cache files under the folder;
the mapping unit is used for mapping, by the application loaded in each application container, the cache file under the folder in MMAP (memory-mapped file) shared mode;
the cache loading component is further configured to update the cache file under the folder and update the cache file path pointed to by the file soft link after the application loaded in the application container updates the cache data;
the cache sharing updating unit is used for detecting, before the application carried by each application container accesses the cache file under the folder, whether the cache file path pointed to by the file soft link has changed; if so, the application in each application container releases its original mapping of the cache file and maps the cache file at the changed path in MMAP (memory-mapped file) shared mode;
the cache loading component specifically includes:
the request subunit is used for sending a loading request instruction to the cache loading master control component after the application carried by the application container updates the cache data;
the updating subunit is used for updating the cache file according to the updated cache data after receiving the loading instruction sent by the cache loading master control component;
the feedback subunit is used for sending a loading success instruction to the cache loading master control component after the cache file is updated successfully;
and the effective subunit is used for updating the cache file path pointed to by the file soft link after receiving an effective instruction sent by the cache loading master control component, wherein the effective instruction is sent by the cache loading master control component after the loading success instructions of all loading components are received.
3. A multiprocess data sharing system, comprising N data sharing apparatuses according to claim 2, each deployed in a host machine, and a cache loading master control component, where N is an integer not less than 2.
4. A readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the data sharing method according to claim 1.
5. A data sharing device, comprising: a memory and a processor;
the memory for storing a computer program;
the processor, configured to execute the computer program, implements the steps of the data sharing method according to claim 1.
CN201910620883.9A 2019-07-10 2019-07-10 Data sharing method among multiple processes and related device Active CN110334069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910620883.9A CN110334069B (en) 2019-07-10 2019-07-10 Data sharing method among multiple processes and related device


Publications (2)

Publication Number Publication Date
CN110334069A CN110334069A (en) 2019-10-15
CN110334069B true CN110334069B (en) 2022-02-01

Family

ID=68146009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910620883.9A Active CN110334069B (en) 2019-07-10 2019-07-10 Data sharing method among multiple processes and related device

Country Status (1)

Country Link
CN (1) CN110334069B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112269655B (en) * 2020-10-15 2023-01-13 北京百度网讯科技有限公司 Memory mapping file cleaning method and device, electronic equipment and storage medium
CN113110944A (en) * 2021-03-31 2021-07-13 北京达佳互联信息技术有限公司 Information searching method, device, server, readable storage medium and program product
CN114840356B (en) * 2022-07-06 2022-11-01 山东矩阵软件工程股份有限公司 Data processing method, data processing system and related device
CN116107515B (en) * 2023-04-03 2023-08-18 阿里巴巴(中国)有限公司 Storage volume mounting and accessing method, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740413A (en) * 2016-01-29 2016-07-06 珠海全志科技股份有限公司 File movement method by FUSE on Linux platform
CN108322307A (en) * 2017-01-16 2018-07-24 中标软件有限公司 Communication system and method between container based on kernel memory sharing
CN109213571A (en) * 2018-08-30 2019-01-15 北京百悟科技有限公司 A kind of internal memory sharing method, Container Management platform and computer readable storage medium
CN109274722A (en) * 2018-08-24 2019-01-25 北京北信源信息安全技术有限公司 Data sharing method, device and electronic equipment
CN109298935A (en) * 2018-09-06 2019-02-01 华泰证券股份有限公司 A kind of method and application of the multi-process single-write and multiple-read without lock shared drive

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163232B (en) * 2011-04-18 2012-12-05 国电南瑞科技股份有限公司 SQL (Structured Query Language) interface implementing method supporting IEC61850 object query


Also Published As

Publication number Publication date
CN110334069A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN110334069B (en) Data sharing method among multiple processes and related device
US10691373B2 (en) Object headers facilitating storage of data in a write buffer of a storage system
US10705965B2 (en) Metadata loading in storage systems
US10261693B1 (en) Storage system with decoupling and reordering of logical and physical capacity removal
US10558613B1 (en) Storage system with decrement protection of reference counts
US10838863B2 (en) Storage system with write cache release protection
US11562091B2 (en) Low latency access to physical storage locations by implementing multiple levels of metadata
US10817385B2 (en) Storage system with backup control utilizing content-based signatures
CN106294190B (en) Storage space management method and device
US10826990B2 (en) Clustered storage system configured for bandwidth efficient processing of writes at sizes below a native page size
US10852999B2 (en) Storage system with decoupling of reference count updates
CN112328435B (en) Method, device, equipment and storage medium for backing up and recovering target data
US11126361B1 (en) Multi-level bucket aggregation for journal destaging in a distributed storage system
US10747677B2 (en) Snapshot locking mechanism
WO2016148670A1 (en) Deduplication and garbage collection across logical databases
US10169348B2 (en) Using a file path to determine file locality for applications
CN107665095B (en) Apparatus, method and readable storage medium for memory space management
US20200379686A1 (en) Flash registry with write leveling
US11086558B2 (en) Storage system with storage volume undelete functionality
US9020977B1 (en) Managing multiprotocol directories
WO2016018450A1 (en) Distributed segmented file systems
US11429517B2 (en) Clustered storage system with stateless inter-module communication for processing of count-key-data tracks
JP6418419B2 (en) Method and apparatus for hard disk to execute application code
JP6607044B2 (en) Server device, distributed file system, distributed file system control method, and program
US11372772B2 (en) Content addressable storage system configured for efficient storage of count-key-data tracks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant