CN117608860A - Multi-process data processing method and device - Google Patents
Multi-process data processing method and device
- Publication number
- CN117608860A CN117608860A CN202311812920.9A CN202311812920A CN117608860A CN 117608860 A CN117608860 A CN 117608860A CN 202311812920 A CN202311812920 A CN 202311812920A CN 117608860 A CN117608860 A CN 117608860A
- Authority
- CN
- China
- Prior art keywords
- file
- target
- target file
- data
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/13—File access structures, e.g. distributed indices
- G06F16/137—Hash-based
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/14—Details of searching files based on file metadata
- G06F16/148—File search processing
- G06F16/152—File search processing using file content signatures, e.g. hash values
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/16—File or folder operations, e.g. details of user interfaces specifically adapted to file systems
- G06F16/162—Delete operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/172—Caching, prefetching or hoarding of files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
Abstract
The embodiments of the present application provide a multi-process data processing method for CDN nodes, comprising the following steps: determining a target process from a plurality of processes in response to a data acquisition request for a target file; searching for the target file locally at the CDN node through the target process; creating a temporary file through the target process when the target file is not found locally or has expired; acquiring the data of the target file from the source server and writing it into the temporary file; and, once the temporary file write is complete, renaming the temporary file to obtain a local target file available for distribution. In this technical scheme, the use of temporary files and rename operations effectively mitigates the resource contention problem of multiple processes, and the lock-free design eliminates the lock contention problem of multiple processes; files cached locally also converge back-to-origin requests, reducing playback stalling for users and the pressure on the back-end server.
Description
Technical Field
The embodiments of the present application relate to the field of data processing technologies, and in particular, to a multi-process data processing method, device, computer device, and computer readable storage medium.
Background
In a live scenario, the CDN may distribute live files to users via a variety of real-time transport protocols (e.g., long-connection-based RTMP or FLV, or short-connection-based HLS). Short-connection-based live protocols can cause a significant increase in the QPS of user-triggered back-to-origin requests. To make better use of resources, CDN nodes may adopt a multi-process mode.
However, the multi-process mode introduces problems such as inter-process synchronization and resource contention. Introducing a lock mechanism alleviates these problems but leads to intense lock contention. Because live file data is updated frequently, lock contention is further aggravated: CPU utilization drops, users struggle to obtain the latest live file data in time, and user experience suffers.
It should be noted that the foregoing is not necessarily prior art, and is not intended to limit the scope of the patent protection of the present application.
Disclosure of Invention
Embodiments of the present application provide a multi-process data processing method, apparatus, computer device, and computer readable storage medium, so as to solve or alleviate one or more of the technical problems set forth above.
An aspect of an embodiment of the present application provides a multi-process data processing method, for a CDN node, where the method includes:
Determining a target process from a plurality of processes in response to a data acquisition request for the target file;
locally searching a target file from a CDN node through the target process;
under the condition that the target file is not found locally or the target file is expired, creating a temporary file through the target process;
acquiring data of a target file from a source server and writing the data into a temporary file;
and renaming the temporary file to obtain a local target file for distribution under the condition that the temporary file writing is completed.
Optionally, each process has associated therewith a file control queue configured to manage and update a plurality of file information therein, the updating including adding and deleting; the multi-process data processing method further comprises the following steps:
under the condition that a local target file is obtained, acquiring file information of the local target file;
adding the file information of the local target file into a file control queue associated with the target process;
if one file information is deleted by the file control queue, deleting the file corresponding to the deleted file information.
Optionally, the file information includes file name and file description information, and the file control queue includes a linked list and a hash table; the nodes of the linked list are used for storing file description information, and the hash table is used for establishing a mapping relation between file names and the linked list nodes storing the file description information.
Optionally, the file information of the local target file includes a target file name and target file description information; correspondingly, adding the file information of the local target file into the file control queue associated with the target process comprises the following steps:
matching the target file name with the hash table;
in the case where the hash table has the target file name: determining the corresponding linked list node through the hash table; updating that node according to the target file description information; and, when the file control queue is a least recently used queue, moving the updated node to the head of the linked list;
in the case where the hash table does not have the target file name: and inserting the target file description information into the linked list as a new node, and establishing a mapping relation between the target file name and the linked list new node in the hash table.
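The hash-table-plus-linked-list structure described above can be sketched compactly in Python, where `collections.OrderedDict` plays both roles (hash lookup plus maintained node order). This is an illustrative sketch, not the patent's implementation; the class and field names are invented for the example.

```python
from collections import OrderedDict

class FileControlQueue:
    """Sketch of a per-process file control queue. The OrderedDict acts as
    the hash table (file name -> description) and the linked list (order),
    with the dict's first entry playing the role of the list head."""

    def __init__(self, lru: bool = True):
        self.table = OrderedDict()
        self.lru = lru  # True: least-recently-used queue; False: FIFO

    def add(self, name: str, desc: dict) -> None:
        if name in self.table:
            # Hit: update the existing node's description; for LRU,
            # move the updated node to the head of the list.
            self.table[name] = desc
            if self.lru:
                self.table.move_to_end(name, last=False)
        else:
            # Miss: insert the description as a new node and record the
            # name -> node mapping (both done by the OrderedDict itself).
            self.table[name] = desc
            if self.lru:
                self.table.move_to_end(name, last=False)
```

With LRU behavior, a repeated `add` for the same file updates its description and moves it to the front, matching the update rule described above.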
Optionally, the file control queue is a first-in first-out queue; correspondingly, if one file information is deleted by the file control queue, deleting the file corresponding to the deleted file information, including:
acquiring the length of a hash table;
deleting the head node of the linked list and the file corresponding to it when the length of the hash table exceeds the preset length;
and deleting the mapping relation corresponding to the head node from the hash table.
Optionally, the file control queue is a least recently used queue; correspondingly, if one file information is deleted by the file control queue, deleting the file corresponding to the deleted file information, including:
acquiring the length of a hash table;
deleting the tail node of the linked list and the file corresponding to it when the length of the hash table exceeds the preset length;
and deleting the mapping relation corresponding to the tail node from the hash table.
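The two eviction rules above (FIFO drops the head node, LRU drops the tail node once the hash table exceeds the preset length) can be illustrated with a small helper. This sketch assumes entries are ordered oldest-first for FIFO and most-recently-used-first for LRU; the function and parameter names are hypothetical.

```python
from collections import OrderedDict

def evict_if_needed(table: OrderedDict, preset_len: int,
                    lru: bool, delete_file) -> list:
    """Evict entries while the table exceeds the preset length.
    FIFO (lru=False): pop the head (oldest entry).
    LRU  (lru=True):  pop the tail (least recently used entry, assuming
    recently used entries are kept at the head)."""
    removed = []
    while len(table) > preset_len:
        # popitem(last=False) removes the head; popitem(last=True) the tail.
        name, _desc = table.popitem(last=lru)
        delete_file(name)  # also remove the cached file itself
        removed.append(name)
    return removed
```

Popping the victim from the `OrderedDict` removes both the "linked list node" and the name-to-node mapping in a single operation, mirroring the two deletion steps described in the text.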
Optionally, the preset length is determined by:
monitoring available resources of the CDN node;
and dynamically determining the preset length according to the available resources.
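One plausible way to derive the preset length from monitored resources is to divide a fraction of the free disk space by an expected average file size. The tuning values below are illustrative assumptions, not values given by the patent.

```python
import shutil

def dynamic_preset_length(cache_dir: str, avg_file_mb: float = 4.0,
                          usable_fraction: float = 0.5) -> int:
    """Derive the file control queue's preset length from the free disk
    space currently available under cache_dir. avg_file_mb and
    usable_fraction are hypothetical tuning knobs for this sketch."""
    free_bytes = shutil.disk_usage(cache_dir).free
    budget = free_bytes * usable_fraction
    return max(1, int(budget // (avg_file_mb * 1024 * 1024)))
```

Recomputing this value periodically (rather than fixing it at startup) lets the queue shrink when the node's disk fills up, which matches the "monitor available resources, then dynamically determine the preset length" flow described above.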
Optionally, the multi-process data processing method further includes:
under the condition that the target file is found locally and the target file is not expired, the target file is obtained locally from the CDN node;
analyzing the target file according to the data acquisition request to acquire target data;
and distributing the target data to the requesting end.
Optionally, searching the target file locally at the CDN node includes:
under the condition that the target file is found, cache control information associated with the found target file is obtained;
analyzing the cache control information to obtain expiration time and content verification data;
and determining whether the searched target file is out of date according to the out-of-date time and the content verification data.
Another aspect of an embodiment of the present application provides a multi-process data processing apparatus for a CDN node, where the apparatus includes:
a determining module for determining a target process from a plurality of processes in response to a data acquisition request for the target file;
the searching module is used for searching the target file from the CDN node locally through the target process;
the creating module is used for creating a temporary file through the target process under the condition that the target file is not found locally or the target file is expired;
the writing module is used for acquiring the data of the target file from the source server and writing the data into the temporary file;
and the renaming module is used for renaming the temporary file under the condition that the temporary file writing is completed so as to obtain a local target file for distribution.
Another aspect of an embodiment of the present application provides a computer device, comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor;
wherein: the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
Another aspect of the embodiments provides a computer-readable storage medium having stored therein computer instructions which, when executed by a processor, implement a method as described above.
The technical scheme adopted by the embodiment of the application can comprise the following advantages:
upon receiving a user's data acquisition request for a target file, the CDN node may select a target process from a plurality of processes to handle the request. The target process first checks whether the target file exists locally at the CDN node, and triggers a back-to-origin operation if the file does not exist or has expired. During back-to-origin, the target process creates a locally exclusive temporary file for writing the target file data acquired from the source server. Once the temporary file write is complete (back-to-origin complete), the temporary file is renamed to obtain a local target file available for distribution. Thus, by using temporary files and rename operations, the embodiments of the present application effectively mitigate the resource contention problem of multiple processes: the atomicity of each process's read operations is guaranteed, reading of partially written data is avoided, concurrent writes by multiple processes to the same temporary file are prevented, and data correctness is ensured. By adopting a lock-free design, the lock contention problem of multiple processes is eliminated: while one process performs a back-to-origin operation, other processes can perform back-to-origin operations at the same time without waiting on lock contention, which bounds back-to-origin latency and improves user experience. Files cached locally also converge back-to-origin requests, reducing playback stalling for users and the pressure on the back-end server.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 schematically illustrates a diagram of an operating environment for a multi-process data processing method according to an embodiment of the present application;
FIG. 2 schematically illustrates a flow chart of a multi-process data processing method according to an embodiment of the present application;
- FIG. 3 schematically illustrates another flow chart of a multi-process data processing method according to an embodiment of the present application;
- FIG. 4 schematically illustrates another flow chart of a multi-process data processing method according to an embodiment of the present application;
- FIG. 5 schematically illustrates another flow chart of a multi-process data processing method according to an embodiment of the present application;
fig. 6 schematically shows a sub-step flow chart of step S502 in fig. 5;
- FIG. 7 is an application example diagram of a multi-process data processing method according to an embodiment of the application;
- FIG. 8 is an application example diagram of a multi-process data processing method according to an embodiment of the application;
FIG. 9 schematically illustrates a block diagram of a multi-process data processing apparatus according to a second embodiment of the present application; and
Fig. 10 schematically shows a hardware architecture diagram of a computer device according to a third embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that the descriptions of "first," "second," etc. in the embodiments of the present application are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, provided the combination can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, their combination should be regarded as nonexistent and outside the protection scope of the present application.
In the description of the present application, it should be understood that the numerical references before the steps do not identify the order of performing the steps, but are only used for convenience in describing the present application and distinguishing each step, and thus should not be construed as limiting the present application.
First, a term explanation is provided in relation to the present application:
CDN (Content Delivery Network): nodes are deployed at the network edge and static website resources are cached on edge nodes closer to users, improving user access speed, reducing bandwidth consumption, and increasing concurrency capacity.
Live broadcast: the live broadcast is indicated, and audio and video signals are transmitted in real time through networks such as the Internet in a certain time for users to watch online.
Short connection: the connection is terminated in time after the request is completed, unlike long connections which need to maintain a long-time connection state. Short connections may be used in scenarios where small amounts of data are transmitted in a short time, such as HTTP request responses, instant messaging, etc. Because the long-term connection state is not required to be maintained, the short connection is lighter than the long connection, and the load pressure of the server can be effectively reduced.
HLS (HTTP Live Streaming) protocol: an HTTP-based streaming media transport protocol. An entire media file is sliced into a series of small, independent TS (Transport Stream) files, and an M3U8 playlist file is generated that indicates the order and location of the TS files. The user downloads the TS files one by one over HTTP to play the media, and can watch live content in progress or jump back to any earlier position for review.
Pushing flow: the audio and video data of the client (such as the anchor) is collected, encoded and encapsulated by using a transmission protocol, and then transmitted to a server, where the server may be a source station (source server) in the CDN architecture.
Clustering: a computer acquisition server architecture combines multiple independent computers (nodes) together to collectively accomplish a particular task or provide a particular service. These nodes work cooperatively to improve performance, scalability, availability, and fault tolerance.
Multiprocessing: refers to running multiple independent processes simultaneously in an operating system. Each process has independent memory space and resources, and mutual influence is avoided. Multiprocessing is a fundamental feature of an operating system that allows multiple tasks to be performed simultaneously, increasing concurrency and efficiency of the system, but multiprocessing programming requires consideration of inter-process communication, synchronization, resource contention, etc.
P2P (Peer-to-Peer), point-to-point network): a computer network architecture in which each participant (node or computer) can communicate directly with other nodes without going through a central server or intermediate node. In a P2P network, all nodes are connected peer-to-peer, share and provide resources, forming a distributed network.
QPS (Queries Per Second, query rate per second): number of queries or requests processed per second.
FIFO (First-In-First-Out, first-In-First-Out queue): a data structure follows the principle of first-in first-out. In computer science, FIFO queues are one way to store and retrieve data, with the first element to enter the queue being fetched first. Such data structures may be used in buffer, task scheduling, etc. scenarios.
LRU (Least Recently Used, least recently used queue): a cache eviction policy whose core idea is to preferentially evict the least recently used cache entry when the cache reaches a certain capacity, making room for new cache entries.
Next, in order to facilitate understanding of the technical solutions provided in the embodiments of the present application by those skilled in the art, the following description is made on related technologies:
in a live scenario, the CDN may use different transport protocols, such as long-connection-based RTMP (Real-Time Messaging Protocol) and FLV (Flash Video), or short-connection-based HLS. Because the HLS protocol is transmitted over HTTP, it has good compatibility and suits various network environments, and its slicing characteristic fits the P2P distribution mode, which reduces consumption of server resources (CPU, memory, bandwidth, etc.) and improves real-time performance. Therefore, CDNs mostly use the HLS protocol for live broadcast. Compared with long connections, short-connection-based live protocols (e.g., HLS) cause a significant increase in the QPS of user-triggered back-to-origin requests, increasing the pressure on the back-end server. To make better use of resources, CDN nodes may adopt a multi-process mode.
However, the applicant has appreciated that the multi-process mode may involve problems such as inter-process communication and resource contention. Although locking shared memory, as in the related art, can alleviate these problems to a certain extent, the lock mechanism causes intense lock contention among processes. Since live file data is updated frequently, lock contention is further aggravated, so the CPU cannot be used effectively and users cannot obtain the latest live file data in time, resulting in poor user experience.
Therefore, the embodiments of the present application provide a multi-process data processing technical scheme, which includes: (1) using disk files (e.g., live files cached on disk) to converge back-to-origin requests, reducing the pressure on the back-end server; (2) configuring each process with its own file control queue (e.g., a FIFO queue or an LRU queue) that stores the file information of live files; the information of each live file generally exists in the file control queue of exactly one process, and that process controls deletion of the file and release of disk space, mitigating contention over file deletion; (3) one process may store the header (metadata), body (data body), expiration time (additional information), and similar data of a live file on disk (or on a higher-performance RAM disk), and when a user request arrives, other processes preferentially read the local disk file to obtain the related data; compared with a scheme in which multiple processes lock and query a global index tree, this effectively mitigates contention over acquiring file data information; (4) using temporary files and rename operations to effectively mitigate the resource contention problem of multiple processes; (5) adopting a lock-free design to eliminate the lock contention problem of multiple processes. See below for details.
Finally, for ease of understanding, an exemplary operating environment is provided below.
As shown in fig. 1, the running environment diagram includes: the system comprises a main broadcasting terminal, an origin server, a CDN node and a spectator terminal. Under a live broadcast scene, the anchor terminal can push live broadcast files to the source server, and the source server can distribute live broadcast contents to the audience terminal in real time through CDN nodes.
The source server can be configured to receive and store live files pushed by the anchor terminal, and configured as a source station of the CDN architecture, and is configured to respond to a source return request of the CDN node and transmit corresponding resources (such as live files) to the CDN node. The origin server may be a single server, a cluster of servers, or a cloud computing service center.
CDN nodes can provide content delivery services. The CDN node may be configured to: in response to a request for a particular resource (e.g., a live file), provide the resource directly when it is available locally at the CDN node; and go back to the origin server when the CDN node does not have the resource locally.
And the anchor terminal is used for generating the live broadcast file in real time and performing push stream operation. The live file may include audio or video. The anchor terminal can be an electronic device such as a smart phone, a tablet computer and the like. Of course, the anchor terminal may also be a virtual compute instance within the origin server.
The audience terminal may be configured to send a data acquisition request to a CDN node or origin server. The audience terminal may be any type of computer device, such as a smart phone, tablet device, laptop, smart television, vehicle terminal, or the like. The viewer terminal may incorporate a browser, a specialized program, or a player for receiving data and outputting content to the user. Wherein the content may include video, audio, comments, text data, and/or the like.
The anchor terminal, origin server, CDN nodes, and audience terminals may be connected by a network. The network may include various network devices such as routers, switches, multiplexers, hubs, modems, bridges, repeaters, firewalls, and/or proxy devices, etc. The network may include physical links such as coaxial cable links, twisted pair cable links, fiber optic links, combinations thereof, and/or the like. The network may include wireless links, such as cellular links, satellite links, wi-Fi links, and/or the like.
It should be noted that the numbers of the anchor terminal, the audience terminal and the CDN nodes in the drawing are only illustrative, and are not intended to limit the scope of patent protection of the present application. There may be any number of anchor terminals, audience terminals, and CDN nodes depending on the actual situation.
The technical scheme of the application is introduced through a plurality of embodiments by taking CDN nodes as an execution main body. It should be understood that these embodiments may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
Example 1
Fig. 2 schematically shows a flow chart of a multi-process data processing method according to an embodiment of the present application.
As shown in fig. 2, the multi-process data processing method may include steps S200 to S208, wherein:
step S200, in response to a data acquisition request for a target file, determining a target process from a plurality of processes.
Step S202, locally searching a target file from the CDN node through the target process.
In step S204, in the case that the target file is not found locally or the target file has expired, a temporary file is created by the target process.
Step S206, obtaining the data of the target file from the source server and writing the data into the temporary file.
Step S208, renaming the temporary file to obtain a local target file for distribution when the temporary file writing is completed.
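The temporary-file-plus-rename flow of steps S204 to S208 can be sketched with POSIX semantics, where `os.replace` is atomic on a single filesystem, so readers observe either the old file or the complete new one and never a half-written file. The cache directory and function name below are illustrative assumptions, not part of the patent.

```python
import os
import tempfile

CACHE_DIR = "/tmp/cdn_cache_demo"  # hypothetical local cache directory

def write_cache_file(name: str, data: bytes) -> str:
    """Write `data` to a process-exclusive temporary file, then atomically
    rename it to the final cache path (sketch of steps S204-S208)."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    final_path = os.path.join(CACHE_DIR, name)
    # mkstemp gives this process an exclusive temp file inside the cache
    # directory itself, so the rename below stays on one filesystem.
    fd, tmp_path = tempfile.mkstemp(dir=CACHE_DIR, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)  # stands in for writing back-to-origin data
    except BaseException:
        os.unlink(tmp_path)  # discard the partial file on failure
        raise
    # Atomic publish: concurrent readers never see a partial file.
    os.replace(tmp_path, final_path)
    return final_path
```

Because each back-to-origin process writes to its own `mkstemp` file, no lock is needed: the last rename to complete simply wins, which is consistent with the lock-free design described in this embodiment.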
According to the multi-process data processing method provided by this embodiment, when a user's data acquisition request for a target file is received, the CDN node may select one target process from a plurality of processes to handle the request. The target process first checks whether the target file exists locally at the CDN node, and triggers a back-to-origin operation if the file does not exist or has expired. During back-to-origin, the target process creates a temporary file locally that is exclusive to it, for writing the target file data acquired from the source server. Once the temporary file write is complete (back-to-origin complete), the temporary file is renamed to obtain a local target file available for distribution. Thus, by using temporary files and rename operations, this embodiment effectively mitigates the resource contention problem of multiple processes: the atomicity of each process's read operations is guaranteed, reading of partially written data is avoided, concurrent writes by multiple processes to the same temporary file are prevented, and data correctness is ensured. By adopting a lock-free design, the lock contention problem of multiple processes is eliminated: while one process performs a back-to-origin operation, other processes can perform back-to-origin operations at the same time without waiting on lock contention, which bounds back-to-origin latency and improves user experience. Files cached locally also converge back-to-origin requests, reducing playback stalling for users and the pressure on the back-end server.
Each of steps S200 to S208 and optionally other steps are described in detail below in conjunction with fig. 2.
Step S200, in response to a data acquisition request for a target file, a target process is determined from a plurality of processes.
For example, in a live scenario using HLS protocol, the target file may be a series of small, independent TS files generated from the live stream via slicing. The viewer terminal may acquire these TS files by sending a data acquisition request to the CDN nodes to view the live content in progress or jump to any location of the live for review. It should be noted that the target file may also be a plurality of different types of media files or data segments, for example: video, audio, pictures, documents, etc. to meet the needs of different application scenarios.
When the CDN node receives the data acquisition request from the viewer terminal, any of several methods may be used to select one of the plurality of processes as the target process to handle the request, for example: selection in process ID order, random selection, or selection based on process performance or system load. In this embodiment, a lock-free design is adopted across the processes: while the target process handles the current data acquisition request, if a new user request arrives, other processes can respond immediately, with no need to wait for the target process to release a lock and for another process to win the lock before handling the new request. This avoids frequent lock contention among processes and can effectively improve CPU utilization and request response speed.
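The selection strategies above can be sketched as follows. This is a minimal illustration, not taken from the patent; `pick_target_process`, the process-ID list, and the load map are all hypothetical names.

```python
import random

def pick_target_process(process_ids, loads=None):
    """Choose one process to handle an incoming data acquisition request.

    With per-process load data available, pick the least-loaded process
    (ties broken by process ID order); otherwise fall back to a random
    pick so requests spread evenly without any lock or coordination.
    """
    if loads:
        return min(process_ids, key=lambda pid: (loads.get(pid, 0), pid))
    return random.choice(process_ids)
```

Because no shared lock is taken here, any number of worker processes can run this selection (or be selected) concurrently.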
The specific procedure by which the target process processes the data acquisition request will be exemplarily described below.
Step S202, searching for the target file locally at the CDN node through the target process.
In order to reduce the burden on the source server and increase processing speed, the target process first checks whether the CDN node locally caches the target file. For example, the target process may obtain the target file name by parsing the data acquisition request, and then quickly look up the file corresponding to that name in the local cache (e.g. disk). In practical applications, if the target file is not found in the local cache, meaning the CDN node has no local copy, the target file needs to be cached from the source server in order to respond to the data acquisition request in time and effectively converge back-to-source requests. If the target file found in the local cache has expired, a new, valid target file needs to be obtained from the source server. If a valid target file is found in the local cache, the target process can respond to the request faster. An exemplary scheme for responding to requests is provided below.
In an alternative embodiment, as shown in fig. 3, the multi-process data processing method may further include:
Step S300, when the target file is found locally and the target file is not expired, obtaining the target file locally from the CDN node.
Step S302, according to the data obtaining request, the target file is parsed to obtain target data.
Step S304, distributing the target data to a request end.
In the case where a valid, unexpired target file is found, the target file can be read directly from the local cache (disk). The header and body data of the target file may then be parsed. Parsing the header data may extract metadata of the target file, such as file type, size, creation time, and the like. Parsing the body data may extract the actual data (e.g., live content) of the target file. From the data acquisition request, the desired target data may be determined and transmitted to the viewer terminal.
In this embodiment, the target data required by the viewer terminal is quickly obtained by reading and parsing the locally cached target file, so the request is responded to accurately and playback fluency is improved.
As described above, in the case where the CDN node finds the target file locally, it is also necessary to verify the validity of the target file. In practical applications, whether the found target file is valid may be determined in various manners, for example: version identification, time stamp, integrity check, status flag, etc. An exemplary scheme is provided below.
In an alternative embodiment, as shown in fig. 4, the multi-process data processing method may further include:
Step S400, under the condition that the target file is found, the cache control information associated with the found target file is obtained.
Step S402, the cache control information is analyzed to obtain the expiration time and the content verification data.
Step S404, determining whether the searched target file is out of date according to the out-of-date time and the content verification data.
The cache control information may include information such as the expiration time of the file, MD5 (Message Digest Algorithm 5) data (a hash value) of the file contents, a checksum, and the like, which may be used to verify whether the file is expired or corrupted.
When the target file is found, the cache control information corresponding to the target file can be acquired and analyzed to acquire the expiration time and the content verification data of the target file. And determining that the target file is temporarily valid under the condition that the current time does not reach the expiration time and the content verification data is not tampered. If the expiration time has been reached or the content verification data has changed, the locally cached target file is deemed not available (expired/stale), requiring a back source.
In this embodiment, by caching the control information, whether the target file is available or not can be quickly and accurately determined, and the source or response request can be timely triggered, so that instantaneity is improved.
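The validity check described above can be sketched as follows. The `expires_at`/`md5` field names are assumptions for illustration, not the patent's actual cache-control format.

```python
import hashlib
import time

def is_cached_file_valid(path, cache_ctl):
    """Decide whether a locally cached target file can still be served.

    `cache_ctl` is assumed to hold an `expires_at` UNIX timestamp and
    the `md5` hex digest recorded when the file was cached. Expired or
    tampered content means the file is unavailable and back-to-source
    must be triggered.
    """
    if time.time() >= cache_ctl["expires_at"]:
        return False  # expiration time reached
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return digest == cache_ctl["md5"]  # content changed -> stale
```

In practice a CDN would avoid re-hashing large files on every request (e.g. by caching the digest check result), but the decision logic is the same.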
Step S204, creating a temporary file through the target process in the case that the target file is not found locally or the target file has expired.
In the case where there is no target file, or no usable target file, locally, the target process needs to go back to source in a timely manner. Because a lock-free design is adopted, in order to avoid problems such as dirty data (inconsistent or invalid data) caused by multi-process resource contention (for example, one process reading a file while another is writing it, or multiple processes writing the same file), in this embodiment the target process can locally create a temporary file that is exclusive to the target process and is used to store the target file data obtained by back-to-source. Illustratively, the target process may use a timestamp, process ID, random number, or other unique identifier to generate a non-duplicate temporary file name, thereby creating an exclusive temporary file.
In this embodiment, by creating a temporary file exclusive to the target process, other processes are prevented from reading half-written data while the target process writes the file, and multi-process resource contention such as several processes writing the same file is avoided, effectively ensuring the consistency and accuracy of the data.
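A minimal sketch of creating such an exclusive temporary file. The naming scheme combining the target name, PID, timestamp, and a random suffix follows the text; the `O_EXCL` flag is an added safeguard so creation fails rather than silently reusing an existing name.

```python
import os
import time
import uuid

def create_exclusive_temp(cache_dir, target_name):
    """Create a temp file whose name cannot collide with other processes.

    The name mixes the target file name, the creating process's PID, a
    millisecond timestamp, and a random suffix; O_EXCL guarantees the
    call fails rather than opening a file someone else already created.
    """
    tmp_name = "%s.%d.%d.%s.tmp" % (
        target_name, os.getpid(), int(time.time() * 1000), uuid.uuid4().hex[:8])
    tmp_path = os.path.join(cache_dir, tmp_name)
    fd = os.open(tmp_path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
    return tmp_path, fd
```

Any one of the uniqueness components would usually suffice; combining them makes collisions practically impossible even across restarts.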
Step S206, acquiring the data of the target file from the source server and writing the data into the temporary file.
After creating the temporary file, the target process may initiate a back-to-source request for the target file to the source server, so as to obtain the data of the target file transmitted by the source server. The target process (e.g., process A) writes the data (stream) of the target file into its exclusive temporary file. During this back-to-source process, if a new user request for the target file arrives, other processes (e.g., process B) can also respond immediately. Process B first looks up the target file in the local cache; because process A writes the data of the target file into its own exclusive temporary file, process B does not read it. If the target file is not found or has expired, process B does not need to wait for lock contention and can immediately start its own back-to-source process.
In the embodiment, the problem of multi-process resource competition is avoided through the temporary file, and the multi-process can concurrently process the user request through the lock-free design, so that the response speed and the resource utilization rate are improved.
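The back-to-source write step can be sketched as below; `fetch_chunks` is a stand-in for the real origin transfer (e.g., a streaming HTTP client), not an API from the patent.

```python
import os

def write_back_to_source(fetch_chunks, tmp_fd):
    """Stream target-file data from the origin into the exclusive temp file.

    `fetch_chunks` yields byte chunks as they arrive from the source
    server. The data is flushed to disk before the file descriptor is
    closed, so the later rename step publishes fully persisted content.
    """
    total = 0
    for chunk in fetch_chunks():
        os.write(tmp_fd, chunk)
        total += len(chunk)
    os.fsync(tmp_fd)  # make sure bytes hit the disk before renaming
    os.close(tmp_fd)
    return total      # can be compared against the origin's Content-Length
```

Because each process writes only its own temp file, several processes can run this concurrently for the same target file without interfering.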
Step S208, renaming the temporary file to obtain a local target file for distribution in the case that the temporary file writing is completed.
In the case that the temporary file has completed data writing, to ensure that back-to-source and caching succeeded, a series of checks may be performed on the contents of the temporary file, such as an MD5 check, a length check, a back-to-source header check, and the like. After confirming the integrity and validity of the temporary file, the target process may rename the temporary file in a number of ways to obtain a local target file available for distribution. By way of example, the temporary file can be renamed according to the target file name carried by the data acquisition request, which facilitates lookup by subsequent processes and improves response speed and timeliness. The temporary file may also be renamed using the MD5 data (hash value) computed over its contents. After the local target file is obtained, the corresponding data content (e.g., target data) may be returned according to the data acquisition request of the viewer terminal. When a later user request for the target file is received, the process allocated to handle that request can directly read the local target file from the local disk and quickly return the corresponding data to the user. Compared with schemes in which multiple processes must take a lock to query a global index tree, this embodiment avoids contention when acquiring file data.
In this embodiment, the resource contention problem of multiple processes is effectively alleviated by using temporary files and renaming operations. Caching the target file to the local disk effectively converges back-to-source requests, ensures the convergence effect of user requests, reduces the pressure on the source server, and reduces playback stutter for users.
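A sketch of the check-then-rename publish step. The MD5/length checks mirror the detections described above; on POSIX, `os.rename` within one file system is atomic, so a reader either sees the old file (or nothing) or the complete new file, never a half-written one.

```python
import hashlib
import os

def verify_and_publish(tmp_path, final_path, expected_md5=None, expected_len=None):
    """Validate the finished temp file, then atomically rename it into place.

    A failed check deletes the temp file and reports failure so the
    caller can retry back-to-source; a passing file becomes the local
    target file in a single atomic step.
    """
    with open(tmp_path, "rb") as f:
        data = f.read()
    if expected_len is not None and len(data) != expected_len:
        os.unlink(tmp_path)
        return False
    if expected_md5 is not None and hashlib.md5(data).hexdigest() != expected_md5:
        os.unlink(tmp_path)
        return False
    os.rename(tmp_path, final_path)  # atomic publish on the same file system
    return True
```

Note the temp file must live on the same file system as the final path; a cross-device rename would lose atomicity.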
The embodiments described above introduce a multi-process lockless data processing method, and the source request can be effectively converged by the local target file of the target process. In order to alleviate server stress, in case of expiration or failure of a target file, the target file needs to be deleted in time to release disk space and recycle resources. To avoid the problem of multi-process competition caused by file deletion, a number of exemplary embodiments are provided below.
In an alternative embodiment, each process has associated therewith a file control queue configured to manage and update a plurality of file information therein, the updating including adding and deleting. As shown in fig. 5, the multi-process data processing method may further include:
Step S500, under the condition that the local target file is obtained, file information of the local target file is obtained.
Step S502, adding the file information of the local target file to a file control queue associated with the target process.
Step S504, if a piece of file information is deleted from the file control queue, deleting the file corresponding to the deleted file information.
In the case of a successful back-to-source, the target process can store data such as the header and body of the local target file on the local disk. Meanwhile, the file information (e.g., descriptive information) of the local target file can be extracted and added to the file control queue of the target process, so that the target process controls whether the target file is deleted. Because the file control queue has a limited maximum length, the original file information in the queue may need to be deleted in order to add new file information. If the file control queue deletes a piece of file information, the target process then deletes the file corresponding to that deleted file information. Since the file information of a given file generally exists in the file control queue of only one process, that process controls the deletion of the file and the release of disk space, which effectively avoids contention over file deletion.
In this embodiment, the deletion time of the file is controlled by controlling the length of the file control queue, so that the outdated file can be deleted accurately and timely. The deleting time of the local target file is related to the target process of the local target file, so that unified management is convenient, and the competing problem of file deletion is avoided.
In an alternative embodiment, the file information includes a file name and file description information, and the file control queue includes a linked list and a hash table; the nodes of the linked list are used for storing file description information, and the hash table is used for establishing a mapping relation between file names and the linked list nodes storing the file description information.
For example, the file information may include a file name and file description information. The file description information may include a file descriptor FD (File Descriptor), a path along which the file is located, and the like. The file control queue may include a linked list and a hash table. Nodes in the linked list may be used to store file description information. The hash table may then be used to record the mapping between the file name and the linked list node where the file description information is stored. Wherein the linked list may be used to maintain a sequential relationship between the plurality of file information. The hash table maps the file names to index positions in the linked list through the hash function, so that a faster and more efficient searching method is provided, and file description information corresponding to the file names can be rapidly positioned without traversing the linked list.
In this embodiment, managing file information with a file control queue that combines a linked list and a hash table improves the performance and efficiency of the system.
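The linked-list-plus-hash-table structure can be sketched with Python's `OrderedDict`, which is internally exactly that combination (a hash map over a doubly linked list); the class and callback names here are illustrative, not from the patent.

```python
from collections import OrderedDict

class FileControlQueue:
    """Per-process queue mapping file names to file description info.

    `on_evict` is called with each evicted (name, desc) pair so the
    owning process can delete the file on disk and free space; no other
    process touches these entries, so no lock is needed.
    """
    def __init__(self, max_len, on_evict=None):
        self.max_len = max_len
        self.entries = OrderedDict()   # name -> description info
        self.on_evict = on_evict

    def add(self, name, desc):
        # Insert or update, then move to the head (LRU behaviour).
        self.entries[name] = desc
        self.entries.move_to_end(name, last=False)
        # Evict from the tail once the preset length is exceeded.
        while len(self.entries) > self.max_len:
            old_name, old_desc = self.entries.popitem(last=True)
            if self.on_evict:
                self.on_evict(old_name, old_desc)
```

Name lookup is O(1) via the hash table, and reordering a node is O(1) via the linked list, matching the benefit described in the text.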
In an alternative embodiment, the file information of the local target file includes a target file name and target file description information. Correspondingly, as shown in fig. 6, the step S502 may include:
Step S600, matching the target file name against the hash table.
Step S602, where the hash table has the target file name: determining corresponding linked list nodes through the hash table; updating the corresponding linked list nodes according to the target file description information; and under the condition that the file control queue is the least recently used queue, moving the updated linked list node to the head of the chain.
Step S604, in the case where the hash table does not have the target file name: and inserting the target file description information into the linked list as a new node, and establishing a mapping relation between the target file name and the linked list new node in the hash table.
The specific procedure for inserting the file information of the local target file into the file control queue is exemplified as follows. The target file name and the target file description information are determined from the file information, and the hash table is searched for the target file name. If it exists, the corresponding linked list node is quickly determined from the mapping relation recorded for the target file name in the hash table, and the file description information originally stored in that node is updated according to the target file description information, so that the file control queue stores the latest file information and management remains effective and accurate. In the case that the file control queue is a least recently used (LRU) queue, based on the LRU eviction mechanism (deleting from the last node of the queue onward), the updated linked list node can be moved to the head of the linked list. This ensures that what gets deleted are cold resources (access volume below a threshold) rather than hot resources, avoiding repeated back-to-source caused by mistaken deletion and further improving the convergence effect. In some embodiments, if the file control queue is a first-in-first-out (FIFO) queue, the updated linked list node may instead be moved to the tail of the list, likewise avoiding erroneous deletion of hot resources. If the target file name does not exist in the hash table, the target file description information is inserted into the linked list as a new node, and a mapping relation between the target file name and the new linked list node is established in the hash table, which improves management efficiency.
In this embodiment, each time file information is added, the file information is first retrieved based on the hash table to quickly add the file information or update the file information. The deleting time of the corresponding file can be flexibly controlled by adjusting the position of the updated file information in the linked list, so that the false deletion and repeated source returning are reduced, and the pressure of a source server is further reduced.
In an alternative embodiment, the file control queue is a first-in-first-out queue. Correspondingly, step S504 may include: acquiring the length of the hash table; in the case that the length of the hash table exceeds a preset length, deleting the head node of the linked list and the file corresponding to that head node; and deleting the mapping relation corresponding to the head node in the hash table.
In this embodiment, by monitoring the length of the hash table, when it exceeds the preset length (i.e., the queue has reached its upper limit), the head node of the linked list is deleted based on the first-in-first-out mechanism of the FIFO queue, and the file corresponding to the head node is deleted according to the file path recorded in that node. The mapping relation corresponding to the head node is also deleted from the hash table. In this way, files can be deleted in time, disk space is released, the data consistency of the linked list and the hash table is maintained, and operation efficiency is improved.
In an alternative embodiment, the file control queue is a least recently used queue. Correspondingly, step S504 may further include: acquiring the length of the hash table; in the case that the length of the hash table exceeds a preset length, deleting the tail node of the linked list and the file corresponding to that tail node; and deleting the mapping relation corresponding to the tail node in the hash table.
In this embodiment, when the hash table length exceeds the preset length, indicating that the file control queue has reached its upper limit, the tail node is deleted based on the LRU eviction mechanism, and the file corresponding to the tail node is deleted according to the file path recorded in that node. Meanwhile, the mapping relation corresponding to the tail node is deleted from the hash table, which maintains data consistency and releases resources in time.
The preset length may be determined according to various manners, for example: system configuration parameters, queue adaptation, etc. An exemplary scheme is provided below.
In an alternative embodiment, the multi-process data processing method may further include: monitoring available resources of the CDN node; and dynamically determining the preset length according to the available resources.
Illustratively, the available resources of the CDN node are calculated according to the current system load, memory usage, bandwidth conditions, and the like of the CDN node. The preset length (the upper limit of the file control queue) is dynamically adjusted according to available resources, so that the number of cache files is controlled at any time, and full load of a disk can be avoided while the disk resources are utilized to the maximum extent.
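A sketch of deriving the preset length from available disk resources; the reserve ratio and the average-file-size parameter are illustrative assumptions, not values from the patent.

```python
import shutil

def compute_preset_length(cache_dir, avg_file_bytes, reserve_ratio=0.2):
    """Derive the file-control-queue upper limit from free disk space.

    Keeps `reserve_ratio` of the disk free as headroom and divides the
    remaining budget by an assumed average cached-file size, so the cap
    shrinks as the disk fills and grows as space is released.
    """
    usage = shutil.disk_usage(cache_dir)
    budget = usage.free - int(usage.total * reserve_ratio)
    return max(0, budget // avg_file_bytes)
```

A real node would also fold in memory usage and bandwidth, as the text notes, but disk headroom is the dominant input for a cache-size cap.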
To make this application more readily understood, an exemplary application is provided below in connection with fig. 7-8.
S11: a user request (data acquisition request for a target file) is received, and a process (target process) is selected to process the user request.
S12: the target process preferentially checks whether a file (target file) exists locally.
If the file is found to exist in the disk and is not expired, corresponding data is returned according to the user request.
S13: if the file cache control information (the expiration time and the MD5 data of the file content) which does not exist or is analyzed is invalid, the file cache control information is directly returned to the source, and a unique temporary file is established in the source returning process for storing the file data, so that the temporary file name ensures that the repeated condition does not occur.
S14: after the back source is finished, a series of detection (including but not limited to MD5 detection, length data detection, back source header detection and the like) is performed on the file content, so that successful caching is ensured.
S15: in case of successful source-back, the node containing the file information is inserted in the LRU queue (file control queue) of the process (linked list and hash table) for subsequent deletion and release of disk space.
Each insertion first searches the process's queue for the file information; if present, the file information is updated and moved to the front of the queue. Meanwhile, whether the number of queue nodes has reached the upper limit is checked, and if so, the file corresponding to the last node is deleted. The queue length may be adjusted dynamically to control the number of files cached on disk.
In this exemplary application: upon receiving a user data acquisition request for a target file, the CDN node may select a target process from a plurality of processes to handle the request. The target process first checks whether the target file exists locally at the CDN node, and triggers back-to-source in the case that the target file does not exist or has expired. During back-to-source, the target process creates a unique temporary file locally for writing the target file data obtained from the source server. When the temporary file write is complete (back-to-source complete), the temporary file is renamed to obtain a local target file available for distribution. As can be seen, the embodiments of the present application effectively alleviate the resource contention problem of multiple processes by using temporary files and renaming operations: the atomicity of each process's read operations is ensured and reading of half-written data is avoided, while write operations by multiple processes to the same temporary file are prevented, ensuring data correctness. By adopting a lock-free design, the lock contention problem of multiple processes is eliminated: while one process goes back to source, other processes can perform back-to-source operations at the same time without waiting on lock contention, keeping back-to-source timely and improving the user experience. Locally cached files also converge back-to-source requests, reduce playback stutter for users, and relieve the pressure on the backend server.
Example two
Fig. 9 schematically shows a block diagram of a multi-process data processing apparatus according to a second embodiment of the present application. The apparatus may be used in a CDN node and may be partitioned into one or more program modules, which are stored in a storage medium and executed by one or more processors to complete the embodiments of the present application. Program modules in the embodiments of the present application refer to a series of computer program instruction segments capable of implementing specific functions; the following description details the function of each program module in this embodiment. As shown in fig. 9, the apparatus 1000 may include: a determination module 1100, a lookup module 1200, a creation module 1300, a writing module 1400, and a renaming module 1500, wherein:
a determining module 1100, configured to determine a target process from a plurality of processes in response to a data acquisition request for the target file;
the searching module 1200 is configured to locally search, through the target process, a target file from a CDN node;
a creating module 1300, configured to create a temporary file through the target process if the target file is not found locally or if the target file has expired;
a writing module 1400, configured to obtain data of the target file from the source server, and write the data into the temporary file;
And the renaming module 1500 is configured to rename the temporary file to obtain a local target file for distribution when the temporary file writing is completed.
As an alternative embodiment, each process has associated therewith a file control queue configured to manage and update a plurality of file information therein, the updating including adding and deleting; the apparatus 1000 is also for:
under the condition that a local target file is obtained, acquiring file information of the local target file;
adding the file information of the local target file into a file control queue associated with the target process;
if one file information is deleted by the file control queue, deleting the file corresponding to the deleted file information.
As an alternative embodiment, the file information includes a file name and file description information, and the file control queue includes a linked list and a hash table; the nodes of the linked list are used for storing file description information, and the hash table is used for establishing a mapping relation between file names and the linked list nodes storing the file description information.
As an alternative embodiment, the file information of the local target file includes a target file name and target file description information; the apparatus 1000 is also for:
Matching the target file name with the hash table;
in the case where the hash table has the target file name: determining corresponding linked list nodes through the hash table; updating the corresponding linked list nodes according to the target file description information; and under the condition that the file control queue is the least recently used queue, moving the updated linked list node to the head of the link;
in the case where the hash table does not have the target file name: and inserting the target file description information into the linked list as a new node, and establishing a mapping relation between the target file name and the linked list new node in the hash table.
As an optional embodiment, the file control queue is a first-in first-out queue; the apparatus 1000 is also for:
acquiring the length of a hash table;
deleting a link head node and a file corresponding to the link head node under the condition that the length of the hash table exceeds the preset length;
and deleting the mapping relation corresponding to the link head node in the hash table.
As an alternative embodiment, the file control queue is a least recently used queue; the apparatus 1000 is also for:
acquiring the length of a hash table;
Deleting a chain tail node and a file corresponding to the chain tail node under the condition that the length of the hash table exceeds a preset length;
and deleting the mapping relation corresponding to the chain tail node in the hash table.
As an alternative embodiment, the preset length is determined by:
monitoring available resources of the CDN node;
and dynamically determining the preset length according to the available resources.
As an alternative embodiment, the apparatus 1000 is further configured to:
under the condition that the target file is found locally and the target file is not expired, the target file is obtained locally from the CDN node;
analyzing the target file according to the data acquisition request to acquire target data;
and distributing the target data to a request end.
As an alternative embodiment, the apparatus 1000 is further configured to:
under the condition that the target file is found, cache control information associated with the found target file is obtained;
analyzing the cache control information to obtain expiration time and content verification data;
and determining whether the searched target file is out of date according to the out-of-date time and the content verification data.
Example III
Fig. 10 schematically illustrates a hardware architecture diagram of a computer device 10000 adapted to implement the multi-process data processing method according to a third embodiment of the present application. In some embodiments, the computer device 10000 may be a smart phone, a wearable device, a tablet, a personal computer, a vehicle terminal, a gaming machine, a virtual device, a workstation, a digital assistant, a set top box, a robot, or the like. In other embodiments, the computer device 10000 may be a rack server, a blade server, a tower server, or a cabinet server (including a stand-alone server, or a server cluster composed of multiple servers), or the like. As shown in fig. 10, the computer device 10000 includes, but is not limited to: a memory 10010, a processor 10020, and a network interface 10030, which may be communicatively linked to each other via a system bus. Wherein:
The memory 10010 includes at least one type of computer-readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, magnetic disk, optical disk, and the like. In some embodiments, the memory 10010 may be an internal storage module of the computer device 10000, such as a hard disk or memory of the computer device 10000. In other embodiments, the memory 10010 may also be an external storage device of the computer device 10000, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory card (Flash Card) provided on the computer device 10000. Of course, the memory 10010 may also include both an internal storage module of the computer device 10000 and an external storage device thereof. In this embodiment, the memory 10010 is typically used for storing the operating system installed on the computer device 10000 and various application software, such as the program code of the multi-process data processing method. In addition, the memory 10010 may be used to temporarily store various types of data that have been output or are to be output.
The processor 10020 may be, in some embodiments, a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another chip. The processor 10020 is typically configured to control the overall operation of the computer device 10000, such as performing control and processing related to data interaction or communication with the computer device 10000. In this embodiment, the processor 10020 is configured to execute program code or process data stored in the memory 10010.
The network interface 10030 may comprise a wireless network interface or a wired network interface, and is typically used to establish a communication link between the computer device 10000 and other computer devices. For example, the network interface 10030 is used to connect the computer device 10000 to an external terminal through a network and to establish a data transmission channel and a communication link between the computer device 10000 and the external terminal. The network may be a wireless or wired network such as an Intranet, the Internet, the Global System for Mobile communications (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, Wi-Fi, or the like.
It should be noted that fig. 10 only shows a computer device having components 10010-10030, but it should be understood that not all of the illustrated components are required to be implemented, and that more or fewer components may be implemented instead.
In this embodiment, the multi-process data processing method stored in the memory 10010 may be further divided into one or more program modules and executed by one or more processors (such as the processor 10020) to complete the embodiments of the present application.
Example IV
The present application also provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the multi-process data processing method of the above embodiments.
In this embodiment, the computer-readable storage medium includes flash memory, a hard disk, a multimedia card, card memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the computer-readable storage medium may be an internal storage unit of a computer device, such as a hard disk or memory of the computer device. In other embodiments, the computer-readable storage medium may also be an external storage device of a computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the computer device. Of course, the computer-readable storage medium may also include both internal storage units of a computer device and external storage devices. In this embodiment, the computer-readable storage medium is typically used to store the operating system and various application software installed on a computer device, such as the program codes of the multi-process data processing method in the embodiment. Furthermore, the computer-readable storage medium may also be used to temporarily store various types of data that have been output or are to be output.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the present application described above may be implemented on a general-purpose computer device; they may be concentrated on a single computer device or distributed across a network of multiple computer devices. Optionally, they may be implemented as program code executable by a computer device, so that they may be stored in a storage device and executed by the computer device; in some cases, the steps shown or described may be performed in an order different from that shown or described herein. Alternatively, they may be separately fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
It should be noted that the foregoing is only a preferred embodiment of the present application, and is not intended to limit the scope of the patent protection of the present application, and all equivalent structures or equivalent processes using the descriptions and the contents of the present application or direct or indirect application to other related technical fields are included in the scope of the patent protection of the present application.
Claims (12)
1. A multi-process data processing method, for a CDN node, the method comprising:
determining a target process from a plurality of processes in response to a data acquisition request for a target file;
searching for the target file locally at the CDN node through the target process;
creating a temporary file through the target process under the condition that the target file is not found locally or the found target file has expired;
acquiring data of the target file from a source server and writing the data into the temporary file; and
renaming the temporary file to obtain a local target file for distribution under the condition that writing of the temporary file is completed.
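The temporary-file-then-rename flow of claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation; `fetch_from_origin`, the directory layout, and the chunked-write loop are assumptions for the example. The key point is that `os.replace` renames atomically on POSIX filesystems, so a concurrently serving process sees either the old target file or the complete new one, never a partial write.

```python
import os
import tempfile

def fetch_to_cache(cache_dir, file_name, fetch_from_origin):
    """Write origin data to a temporary file, then atomically rename it
    into place so readers never observe a partially written target file."""
    final_path = os.path.join(cache_dir, file_name)
    # Create the temp file in the same directory so os.replace stays on
    # one filesystem and the rename remains atomic on POSIX systems.
    fd, tmp_path = tempfile.mkstemp(dir=cache_dir, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as tmp:
            for chunk in fetch_from_origin():   # data from the source server
                tmp.write(chunk)
        os.replace(tmp_path, final_path)        # rename -> local target file
    except Exception:
        os.unlink(tmp_path)                     # discard the partial write
        raise
    return final_path
```

On failure the partial temporary file is unlinked, so a crashed download never pollutes the cache directory with a truncated target file.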
2. The method of claim 1, wherein each process has associated therewith a file control queue configured to manage and update a plurality of pieces of file information therein, the updating comprising adding and deleting; the method further comprises:
under the condition that a local target file is obtained, acquiring file information of the local target file;
adding the file information of the local target file into a file control queue associated with the target process;
if a piece of file information is deleted from the file control queue, deleting the file corresponding to the deleted file information.
3. The method of claim 2, wherein the file information includes a file name and file description information, and the file control queue includes a linked list and a hash table; the nodes of the linked list are used for storing file description information, and the hash table is used for establishing a mapping relation between file names and the linked list nodes storing the file description information.
4. A method according to claim 3, wherein the file information of the local target file includes a target file name and target file description information; correspondingly, adding the file information of the local target file into the file control queue associated with the target process comprises the following steps:
matching the target file name against the hash table;
in the case where the hash table contains the target file name: determining the corresponding linked list node through the hash table; updating the corresponding linked list node according to the target file description information; and, under the condition that the file control queue is a least recently used queue, moving the updated linked list node to the head of the linked list;
in the case where the hash table does not contain the target file name: inserting the target file description information into the linked list as a new node, and establishing, in the hash table, a mapping relationship between the target file name and the new linked list node.
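The hash-table-plus-linked-list queue of claims 3 and 4 maps naturally onto Python's `collections.OrderedDict`, which internally pairs a hash table with a doubly linked list. The sketch below is illustrative; the class and method names are assumptions, and the "most recently used" end of the `OrderedDict` plays the role of the list head in claim 4.

```python
from collections import OrderedDict

class FileControlQueue:
    """Least-recently-used file control queue, sketched with OrderedDict
    (a hash table paired with a doubly linked list, mirroring the layout
    described in the claims above)."""

    def __init__(self):
        self._entries = OrderedDict()   # file name -> file description info

    def add(self, file_name, description):
        if file_name in self._entries:
            # Name already mapped: update the node and move it to the
            # most-recently-used end of the list.
            self._entries[file_name] = description
            self._entries.move_to_end(file_name)
        else:
            # New name: insert a node and record the name -> node mapping.
            self._entries[file_name] = description
```

A hit on an existing name thus both refreshes the stored description and promotes the entry, while a miss appends it, exactly the two branches of claim 4.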
5. The method of claim 3, wherein the file control queue is a first-in-first-out queue; correspondingly, if a piece of file information is deleted from the file control queue, deleting the file corresponding to the deleted file information comprises:
acquiring the length of the hash table;
deleting the head node of the linked list and the file corresponding to the head node under the condition that the length of the hash table exceeds a preset length; and
deleting, in the hash table, the mapping relationship corresponding to the head node.
6. The method of claim 3, wherein the file control queue is a least recently used queue; correspondingly, if a piece of file information is deleted from the file control queue, deleting the file corresponding to the deleted file information comprises:
acquiring the length of the hash table;
deleting the tail node of the linked list and the file corresponding to the tail node under the condition that the length of the hash table exceeds a preset length; and
deleting, in the hash table, the mapping relationship corresponding to the tail node.
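The eviction step shared by claims 5 and 6 — once the hash table outgrows the preset length, drop the node at the eviction end of the linked list, remove its mapping from the hash table, and delete the file on disk — can be sketched as below. The function name and the `entries` layout (file name mapped to file path) are assumptions; `popitem(last=False)` removes the oldest-ordered entry, which corresponds to the FIFO head node, or to the LRU eviction end when fresh hits are moved to the opposite end.

```python
import os
from collections import OrderedDict

def evict_over_length(entries: OrderedDict, preset_length: int):
    """While the hash table's length exceeds the preset length, pop the
    entry at the eviction end and delete both the name -> node mapping
    and the cached file it describes."""
    deleted = []
    while len(entries) > preset_length:
        name, path = entries.popitem(last=False)   # remove node + mapping
        if os.path.exists(path):
            os.unlink(path)                        # delete the cached file
        deleted.append(name)
    return deleted
```

Deleting the mapping and the on-disk file in one step keeps the queue and the cache directory consistent, which is what lets the queue alone drive cache cleanup.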
7. The method of claim 6, wherein the preset length is determined by:
monitoring available resources of the CDN node;
and dynamically determining the preset length according to the available resources.
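A hypothetical way to derive the preset length dynamically from available resources, as in claim 7: poll the node's free disk space and divide by an assumed average cached-file size, keeping some headroom. All parameter names and the 0.8 headroom factor are illustrative assumptions, not values from the patent.

```python
import shutil

def preset_length_from_disk(cache_dir, avg_file_size, headroom=0.8):
    """Derive the queue's preset length from currently free disk space.
    avg_file_size (bytes) and headroom are illustrative assumptions."""
    free_bytes = shutil.disk_usage(cache_dir).free
    # Reserve (1 - headroom) of free space; never shrink below one slot.
    return max(1, int(free_bytes * headroom) // avg_file_size)
```

Re-evaluating this periodically lets the eviction threshold tighten automatically as the node fills up, rather than relying on a fixed queue length.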
8. The method according to any one of claims 1 to 7, further comprising:
acquiring the target file locally from the CDN node under the condition that the target file is found locally and has not expired;
analyzing the target file according to the data acquisition request to acquire target data; and
distributing the target data to a requesting end.
9. The method according to any one of claims 1 to 7, wherein searching for the target file locally at the CDN node comprises:
acquiring, under the condition that the target file is found, cache control information associated with the found target file;
analyzing the cache control information to obtain an expiration time and content verification data; and
determining, according to the expiration time and the content verification data, whether the found target file has expired.
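The expiry check of claim 9 might look like the sketch below, assuming the cache control information stores an absolute expiration timestamp plus a content validator (an ETag-like tag or checksum). The field names `expires_at` and `validator` are assumptions for illustration.

```python
import time

def is_expired(cache_control, now=None, origin_validator=None):
    """Decide whether a locally found target file has expired, based on
    its stored expiration time and content verification data."""
    now = time.time() if now is None else now
    if now >= cache_control["expires_at"]:
        return True
    # If a fresh validator from the origin is available, a mismatch in
    # the content verification data also marks the local copy as stale.
    if origin_validator is not None:
        return cache_control["validator"] != origin_validator
    return False
```

Only an expired (or invalidated) file triggers the temporary-file fetch of claim 1; otherwise the local copy is served directly, as in claim 8.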
10. A multi-process data processing apparatus for a CDN node, the apparatus comprising:
a determining module, configured to determine a target process from a plurality of processes in response to a data acquisition request for a target file;
a searching module, configured to search for the target file locally at the CDN node through the target process;
a creating module, configured to create a temporary file through the target process under the condition that the target file is not found locally or the found target file has expired;
a writing module, configured to acquire data of the target file from a source server and write the data into the temporary file; and
a renaming module, configured to rename the temporary file under the condition that writing of the temporary file is completed, so as to obtain a local target file for distribution.
11. A computer device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 9.
12. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein computer instructions which, when executed by a processor, implement the method of any of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311812920.9A CN117608860A (en) | 2023-12-26 | 2023-12-26 | Multi-process data processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117608860A true CN117608860A (en) | 2024-02-27 |
Family
ID=89950064
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118550890A (en) * | 2024-07-26 | 2024-08-27 | 天翼云科技有限公司 | Multi-process file distribution method and device |
CN118550890B (en) * | 2024-07-26 | 2024-09-27 | 天翼云科技有限公司 | Multi-process file distribution method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||