CN111198885A - Data processing method and device

Data processing method and device

Info

Publication number
CN111198885A
Authority
CN
China
Prior art keywords
data
cache
byte
rated capacity
fragment data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911400667.XA
Other languages
Chinese (zh)
Inventor
吉毅
叶权
陈永旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201911400667.XA
Publication of CN111198885A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiment of the invention provides a data processing method and a data processing device. The method comprises: acquiring a plurality of pieces of fragment data from a client, and setting the rated capacity of a cache region according to the data volume of the fragment data, wherein the data volume of the fragment data is smaller than the rated capacity of the cache region and an integer multiple of the data volume of the fragment data differs from the rated capacity; storing the pieces of fragment data to their corresponding cache regions; and, when the sum of the data volumes of the stored pieces of fragment data equals the rated capacity of a cache region, transmitting the fragment data stored in that cache region to a server side. The embodiment does not restrict the data volume of the client's fragment data and does not require that an integer multiple of the fragment data volume equal the rated capacity of the cache region, thereby reducing the requirements on the data volume of the client's fragment data.

Description

Data processing method and device
Technical Field
The present invention relates to the field of data storage technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
At present, with the growth of internet business and the development of big data technology, more and more companies need to store large numbers of data files, and Object Storage Service (OSS) has gradually become the backbone of internet storage. With the emergence of various public cloud object storage services, many companies are no longer limited to storing data files on a public cloud, and the upload proxy service has therefore become the operational bridge between the client and the storage server.
A conventional upload proxy service provides a Software Development Kit (SDK) for clients to call. A client can upload a data file to the public cloud simply by calling the SDK interface, without writing its own file read/write code. However, when a large number of clients call the SDK and the SDK needs to be upgraded, the service logic of every client must be modified accordingly, which makes updating the SDK costly. The upload proxy service is therefore separated out: an independent upload proxy service interfaces with the public cloud SDK and provides an upload interface to each client.
When a client uploads a large data file to the storage server through the independent upload proxy service, the client divides the data file into a number of small pieces of fragment data and transmits them to the upload proxy service through the upload interface. According to the requirements of the storage server, the upload proxy service creates a number of cache regions of a fixed size in advance, which are used to temporarily store the fragment data. Because the fragment sizes chosen by different clients differ, a single piece of fragment data can easily end up spanning several cache regions, in which case the upload proxy service cannot directly splice the data in a cache region and transfer it to the storage server. Existing upload proxy services therefore require that the cache region size be an integer multiple of the client-defined fragment size. However, clients often cannot divide their fragment data according to this requirement, so the fragment size and the cache region size do not line up, and the upload proxy service again cannot directly splice the data in the cache region and transfer it to the storage server.
Disclosure of Invention
Embodiments of the invention aim to provide a data processing method, a data processing device, an electronic device and a storage medium, so as to relax the upload proxy service's requirement on the size of client-defined fragment data. The specific technical scheme is as follows:
in a first aspect of the invention, there is provided a data processing method, comprising: acquiring a plurality of pieces of fragment data from a client, and setting the rated capacity of a cache region according to the data volume of the fragment data, wherein the data volume of the fragment data is smaller than the rated capacity of the cache region and an integer multiple of the data volume of the fragment data differs from the rated capacity; storing the pieces of fragment data to their corresponding cache regions; and, when the sum of the data volumes of the stored pieces of fragment data equals the rated capacity of the cache region, transmitting the fragment data stored in the cache region to a server side.
Optionally, the step of obtaining the plurality of pieces of fragment data from the client further includes: obtaining, from the client, directory identification information of the cache directory corresponding to a data file, starting byte index information of the fragment data in the data file, and the data volume of the fragment data, where the starting byte index information indicates the position of the starting byte of the fragment data within the data file, and the fragment data is obtained by dividing the data file.
Optionally, the step of storing the pieces of fragment data to their corresponding cache regions includes: matching the corresponding cache directory according to the directory identification information, where the matched cache directory is the cache directory containing the cache regions in which the fragment data is to be stored; determining the cache region corresponding to each byte within the matched cache directory according to the data volume of the data file, the starting byte index information, the rated capacity, and the length information of each byte of the fragment data; and storing each byte of the fragment data to its corresponding cache region.
Optionally, the step of determining, according to the data volume of the data file, the starting byte index information, the rated capacity and the length information of each byte of the fragment data, the cache region corresponding to each byte within the matched cache directory includes: taking the ratio of the data volume of the data file to the rated capacity, or that ratio plus a preset value, as the number of cache regions in the matched cache directory; for each byte, taking the ratio of (the starting byte index information plus the length information of the current byte) to the rated capacity, plus the preset value, as the sequence number of the cache region corresponding to the current byte within the matched cache directory; and, for each byte, determining the cache region corresponding to that byte within the matched cache directory according to the sequence number and the number of cache regions in the matched cache directory.
Optionally, before the step of obtaining the plurality of sliced data from the client, the method further includes: acquiring the data volume and attribute information of the data file, wherein the attribute information comprises file identification information and/or transmission acceleration information of the data file; creating the cache directory; and generating the directory identification information for the cache directory according to the attribute information, and establishing a corresponding relation between the cache directory and the directory identification information.
Optionally, before the step of transmitting the fragmented data stored in the cache region to a server, the method further includes: starting a detection thread; acquiring the initial position information and the end position information of each byte in each cache region in the cache directory by using the detection thread; and detecting whether the sum of the data amount of the stored plurality of fragment data is the same as the rated capacity of the cache region or not according to the starting position information, the ending position information and the rated capacity.
Optionally, the step of detecting whether a sum of data amounts of the stored plurality of slice data is the same as a rated capacity of the buffer area according to the starting position information, the ending position information, and the rated capacity includes: merging a plurality of bytes meeting preset merging conditions to obtain a merged byte combination; judging whether the occupation amount of the byte combination in the cache region is the same as the rated capacity or not; if the occupied amount is the same as the rated capacity, determining that the sum of the data amounts of the stored plurality of fragment data is the same as the rated capacity of the cache region; and if the occupied amount is different from the rated capacity, determining that the sum of the data amounts of the stored plurality of the fragment data is different from the rated capacity of the cache region.
Optionally, the merging condition is that the start position information of the byte is located between the start position information and the end position information of other bytes, and/or the end position information of the byte is located between the start position information and the end position information of other bytes.
In a second aspect of the present invention, there is also provided a data processing apparatus, including: the acquisition module is used for acquiring a plurality of fragment data from the client; the setting module is used for setting the rated capacity of a cache region according to the data volume of the fragment data, the data volume of the fragment data is smaller than the rated capacity of the cache region, and the integral multiple of the data volume of the fragment data is different from the rated capacity; the storage module is used for respectively storing the fragment data to corresponding cache regions; and the transmission module is used for transmitting the fragment data stored in the cache region to a server side when the sum of the data amount of the plurality of stored fragment data is the same as the rated capacity of the cache region.
Optionally, the obtaining module is further configured to, when obtaining a plurality of fragmented data from a client, obtain directory identification information of a cache directory corresponding to a data file from the client, start byte index information of the fragmented data in the data file, and a data amount of the fragmented data, where the start byte index information indicates position information of a start byte of the fragmented data in the data file, and the fragmented data is obtained by dividing the data file.
Optionally, the storage module includes: the matching module is used for matching a corresponding cache directory according to the directory identification information, and the matched cache directory represents the cache directory where the cache region of the fragment data to be stored is located; a determining module, configured to determine, according to a data size of the data file, the initial byte index information, the rated capacity, and length information of each byte of the sliced data, a cache region corresponding to each byte in the matched cache directory; a byte storage module, configured to store each byte of the sliced data to the corresponding cache region.
Optionally, the determining module includes: a quantity determining module, configured to take the ratio of the data volume of the data file to the rated capacity, or that ratio plus a preset value, as the number of cache regions in the matched cache directory; a sequence number determining module, configured to take, for each byte, the ratio of (the starting byte index information plus the length information of the current byte) to the rated capacity, plus the preset value, as the sequence number of the cache region corresponding to the current byte within the matched cache directory; and a cache region determining module, configured to determine, for each byte, the cache region corresponding to the current byte within the matched cache directory according to the sequence number and the number of cache regions in the matched cache directory.
Optionally, the obtaining module is further configured to obtain a data volume and attribute information of the data file before obtaining the plurality of fragmented data from the client, where the attribute information includes file identification information and/or transmission acceleration information of the data file; the device further comprises: the creating module is used for creating the cache directory; and the establishing module is used for generating the directory identification information for the cache directory according to the attribute information and establishing the corresponding relation between the cache directory and the directory identification information.
Optionally, the apparatus further comprises: the starting module is used for starting a detection thread before the transmission module transmits the fragment data stored in the cache region to a server end; the obtaining module is further configured to obtain, by using the detection thread, start position information and end position information of each byte in each cache area in the cache directory; the device further comprises: and the detection module is used for detecting whether the sum of the data quantity of the stored plurality of fragment data is the same as the rated capacity of the cache region or not according to the starting position information, the ending position information and the rated capacity.
Optionally, the detection module includes: the byte merging module is used for merging a plurality of bytes meeting preset merging conditions to obtain a merged byte combination; the capacity judging module is used for judging whether the occupation amount of the byte combination in the cache region is the same as the rated capacity; a capacity determining module, configured to determine that a sum of data amounts of the stored multiple pieces of fragmented data is the same as a rated capacity of the cache region if the occupied amount is the same as the rated capacity; and if the occupied amount is different from the rated capacity, determining that the sum of the data amounts of the stored plurality of the fragment data is different from the rated capacity of the cache region.
Optionally, the merging condition is that the start position information of the byte is located between the start position information and the end position information of other bytes, and/or the end position information of the byte is located between the start position information and the end position information of other bytes.
In yet another aspect of the present invention, there is also provided a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to execute any of the above-described data processing methods.
In yet another aspect of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of processing data as described in any one of the above.
According to the data processing scheme provided by the embodiments of the invention, a plurality of pieces of fragment data from a client is obtained, the rated capacity of the cache region is set according to the data volume of the fragment data, and the pieces of fragment data are stored to their corresponding cache regions, where the data volume of the fragment data is smaller than the rated capacity of the cache region and an integer multiple of the data volume of the fragment data differs from the rated capacity. When a cache region is fully stored, that is, when its rated capacity is occupied by fragment data, the fragment data stored in that cache region is transmitted to the server side. The embodiments do not restrict the data volume of the client's fragment data and do not require that an integer multiple of the fragment data volume equal the rated capacity of the cache region, which reduces the requirements on the data volume of the client's fragment data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a diagram illustrating an architecture of a system for uploading data files, according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for processing data according to an embodiment of the present invention;
FIG. 3 is a flow chart of steps of another method for processing data according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating detecting whether the cache area is completely stored according to an embodiment of the present invention;
FIG. 5 is a block diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
In the embodiments of the invention, the client divides a data file into multiple pieces of fragment data of equal or different data volumes and then transmits them to the upload proxy service through the unified upload interface provided by that service. The upload proxy service stores each piece of fragment data in its corresponding cache region and uploads the fragment data of each fully stored cache region to the server side, until all fragment data from the client has been uploaded to the server side, thereby uploading the client's data file to the server side.
It should be noted that the upload proxy service in the embodiments of the invention may be deployed on a client or on a server. If deployed on a client, the upload proxy service can be understood as an upload proxy client; if deployed on a server, it can be understood as an upload proxy server. The embodiments do not restrict the type, configuration, location or other conditions of the terminal device on which the upload proxy service is deployed. The server side in the embodiments of the invention may be a distributed object store used to store the client's data files.
An upload proxy service is usually a reverse proxy, that is, it acts on behalf of the server side, and the server side's data traffic is transmitted to the client through it. In the embodiments of the invention, the upload proxy service instead leans toward acting on behalf of the client. It may be deployed as a separate upload proxy client or upload proxy server. If a client needs to upload a data file, the file is transmitted to the various data storage servers through the upload proxy service. At the same time, the upload proxy service can still act as a reverse proxy on behalf of the server side to carry non-upload data traffic and the like.
Referring to fig. 1, a system architecture for uploading data files according to an embodiment of the invention is shown. Client C initializes the large-file upload service by first sending the size and other attributes of the data file to be uploaded to the proxy side. The proxy side prepares a cache directory for the data file and returns to client C the unique file identifier (fileId) corresponding to that cache directory. After receiving the fileId, client C divides the data file into multiple pieces of fragment data according to its actual needs; for example, the data file is divided into nine pieces numbered 1-2-3-4-5-6-7-8-9, each with a data volume of 15 MB. Client C then pushes the fragment data to the proxy side as data streams, in numbered order. When pushing each piece of fragment data, client C must attach the unique file identifier (fileId), the starting byte index (offset) of that piece within the whole data file, and the fragment size (length); for example, the fileId is "a-1", the starting byte index is "60 MB", and the fragment size is "15 MB". Client C may push the fragment data with a single thread or with multiple threads.
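For illustration only, the client-side push just described can be sketched as follows; the function and parameter names are assumptions and not part of the patented scheme.

```python
# Illustrative sketch (assumed names, not the patent's reference implementation):
# how client C might split a file and push fragments with fileId/offset/length.
import os

FRAGMENT_SIZE = 15 * 1024 * 1024  # 15 MB, chosen by the client

def push_file(path, file_id, send):
    """Split the file at `path` into fragments and push each one via `send`.

    `send` is a stand-in for the upload interface exposed by the proxy side;
    it receives the fileId, the fragment's starting byte index (offset),
    the fragment length, and the fragment bytes themselves.
    """
    total = os.path.getsize(path)
    with open(path, "rb") as f:
        offset = 0
        while offset < total:
            chunk = f.read(FRAGMENT_SIZE)          # the last fragment may be shorter
            send(file_id=file_id, offset=offset,
                 length=len(chunk), data=chunk)
            offset += len(chunk)
```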
When the proxy side initializes, device A creates in advance, according to the requirements of the object storage service, a number of cache blocks (i.e., cache regions) of 32 MB each, and marks the start and end positions of each cache block. The cache blocks temporarily store the fragment data from client C. Although client C sends the fragment data to the proxy side in numbered order, network effects between client C and the proxy side mean that the order in which the fragments arrive is not necessarily the order in which they were sent; it may, for example, be 1-3-4-2-5-6-8-7-9. Each time a piece of fragment data is received, device A is responsible for calculating the cache block in which each byte of that piece belongs. For example, the fragment data numbered 1 and 2 is stored in the 1st cache block, the fragment data numbered 3 and 4 in the 2nd cache block, the fragment data numbered 5 and 6 in the 3rd cache block, the fragment data numbered 8 and 7 in the 4th cache block, and the fragment data numbered 9 in the 1st, 2nd, 3rd, 4th and 5th cache blocks: fragment 9 occupies 2 MB of capacity in each of the 1st to 4th cache blocks and only 7 MB in the 5th. Device A must compute the corresponding cache block for every byte of every piece of fragment data, and one piece of fragment data may correspond to two or more cache blocks.
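A minimal sketch of the per-byte block calculation performed by device A is given below, assuming the 1-based sequence-number formula described later in this text (block number = (fragment offset + byte position) divided by the rated capacity, plus 1); the names are illustrative.

```python
# Minimal sketch, assuming the block numbering described later in this text:
# block sequence number = (offset + byte_position) // capacity + 1 (1-based).
CAPACITY = 32 * 1024 * 1024  # 32 MB per cache block

def blocks_for_fragment(offset, length, capacity=CAPACITY):
    """Return {block_number: (start_in_block, fragment_byte_range)} for one fragment.

    `offset` is the fragment's starting byte index in the whole file and
    `length` its size; one fragment may map to two or more cache blocks.
    """
    mapping = {}
    pos = 0
    while pos < length:
        block_no = (offset + pos) // capacity + 1
        block_start = (block_no - 1) * capacity
        # bytes of this fragment that fall inside the current block
        take = min(length - pos, block_start + capacity - (offset + pos))
        mapping[block_no] = ((offset + pos) - block_start, (pos, pos + take))
        pos += take
    return mapping
```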
While device A stores the fragment data into the corresponding cache blocks, device B on the proxy side cyclically checks whether each cache block in the cache directory corresponding to each unique file identifier (fileId) has been fully stored. Once a cache block is fully stored it is immediately uploaded to the distributed object storage side, and successfully uploaded cache blocks are removed to avoid occupying disk space.
In an embodiment of the invention, device A may maintain a starting index for each cache block at initialization. Each time a piece of fragment data is received, device A calculates the cache block in which each byte of the piece belongs, writes the byte into that cache block, and merges the indexes written into the cache block. Device A also records, for each cache block, the start positions of the parts received so far; when the merged start positions of the received bytes match the cache block's starting index, the cache block is considered fully received. Device B may be a daemon thread that cyclically detects whether cache blocks have been fully received, sends fully received cache blocks to the distributed object storage side, and performs the clean-up work. In addition, device B may compute a fingerprint of the cache block, such as an MD5 (Message-Digest Algorithm 5) value or an SHA-1 (Secure Hash Algorithm 1) value, to verify the integrity of the received data file and to support retrying the upload.
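A sketch of such a daemon thread follows, under the assumption that block completeness is tracked by merged byte ranges as described above; the interfaces are illustrative, not the patent's reference implementation.

```python
# Illustrative daemon-thread sketch for "device B" (assumed names and interfaces):
# cyclically check each cache block, upload fully received blocks, then clean up.
import hashlib
import threading
import time

def watch_blocks(blocks, upload, poll_interval=0.5):
    """`blocks` maps block_number -> object with .is_complete(), .data, .clear().

    `upload` is a stand-in for the call that pushes a completed block to the
    distributed object store; an MD5 fingerprint is attached so the store can
    verify integrity and a failed upload can be retried.
    """
    def loop():
        while True:
            for block_no, block in list(blocks.items()):
                if block.is_complete():
                    fingerprint = hashlib.md5(block.data).hexdigest()
                    upload(block_no, block.data, fingerprint)
                    block.clear()          # free the disk/memory space
                    del blocks[block_no]
            time.sleep(poll_interval)

    threading.Thread(target=loop, daemon=True).start()
```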
In the embodiments of the invention, the process in which client C pushes fragment data to the proxy side can run concurrently with the process in which the proxy side stores fragment data into cache blocks and uploads cache blocks to the distributed object store. As a result, after client C has pushed the last piece of fragment data, the whole data file can be read from the distributed object storage side after waiting only the time the proxy side needs to upload one cache block.
Referring to fig. 2, a flow chart of steps of a data processing method according to an embodiment of the present invention is shown. The data processing method may be applied to the agent side, and specifically may include the following steps.
In step S21, a plurality of pieces of fragment data is acquired from the client.
In the embodiments of the invention, the client divides the data file to be uploaded into multiple pieces of fragment data and pushes them to the proxy side in a preset order. The preset order may be the order in which the client divided the data file into the pieces of fragment data.
When the multiple pieces of fragment data from the client are acquired, they may be received in the preset order or in a different order.
In step S22, the pieces of fragment data are stored to their corresponding cache regions.
In the embodiment of the invention, each time one piece of fragment data is received, the cache region corresponding to each byte in the fragment data is calculated, and each byte in the fragment data is stored to the corresponding cache region.
Generally, the data volume of each piece of fragment data may be smaller than the rated capacity of a cache region; for example, the fragment data volume is 15 MB and the rated capacity of one cache region is 32 MB. Furthermore, the embodiments of the invention do not require that an integer multiple of the data volume of one piece of fragment data equal the rated capacity of one cache region. The data volume of the fragment data may take any value and any unit; the embodiments do not restrict the value, unit, or other aspects of the fragment data volume.
In the embodiments of the invention, the rated capacity of each cache region on the proxy side may be the same as the rated capacity of each cache region on the distributed object storage side. When the distributed object storage side receives the fragment data of a cache region sent by the proxy side, it can store the data into its local cache region without splitting, merging or other computation, which improves its storage efficiency. It should be noted that the rated capacity of a cache region on the proxy side may also differ from that on the distributed object storage side; the embodiments do not restrict the size relationship between the two.
In step S23, when a cache region is fully stored, the fragment data stored in that cache region is transmitted to the server side.
In the embodiments of the invention, when one or more cache regions on the proxy side are fully stored, the fragment data in those cache regions may be transmitted to the distributed object storage side, which then stores the data into its local cache regions. A cache region being fully stored means that the sum of the data volumes of the fragment data stored in it equals its rated capacity.
When every piece of fragment data sent by the client has been stored in its corresponding cache region, the proxy side transmits the fragment data in each fully stored cache region to the distributed object storage side.
According to the data processing method provided by the embodiments of the invention, a plurality of pieces of fragment data from a client is obtained and stored to the corresponding cache regions, where the data volume of the fragment data is smaller than the rated capacity of a cache region and an integer multiple of the data volume of the fragment data differs from the rated capacity. When a cache region is fully stored, that is, its rated capacity is occupied by fragment data, the fragment data stored in that cache region is transmitted to the server side. The embodiments do not restrict the data volume of the client's fragment data and do not require that an integer multiple of the fragment data volume equal the rated capacity of the cache region, which reduces the requirements on the data volume of the client's fragment data.
Referring to FIG. 3, a flow chart of steps of another data processing method according to an embodiment of the invention is shown. The data processing method may be applied to the agent side, and specifically may include the following steps.
In step S31, the data amount and attribute information of the data file are acquired from the client.
In the embodiments of the invention, the client transmits the data volume and the attribute information of the data file to be uploaded to the proxy side. The data volume represents the size of the entire data file. The attribute information may include file identification information and/or transmission acceleration information of the data file. For example, the file identification information may be an MD5 value, an SHA-1 value, or the like; the transmission acceleration information may indicate whether to enable QUIC (Quick UDP Internet Connections) acceleration, whether to enable edge acceleration, and so on.
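For illustration only, such an initialization request might carry a payload like the following; the field names are assumptions, not defined in the text.

```python
# Hypothetical example of the initialization payload a client might send
# before uploading (field names are illustrative, not from the patent):
init_request = {
    "file_size": 135 * 1024 * 1024,                      # data volume of the whole file
    "file_md5": "9e107d9d372bb6826bd81d3542a419d6",      # file identification information
    "enable_quic": True,                                 # transmission acceleration information
    "enable_edge_acceleration": False,
}
```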
Step S32, creating a cache directory, generating directory identification information for the cache directory according to the attribute information, and establishing a corresponding relationship between the cache directory and the directory identification information.
In the embodiments of the invention, the proxy side may create a cache directory and record it in a database. In practice, directory identification information may be generated for the cache directory from the current date, the current time, the file name of the data file, the file identification information, a random number, and so on; for example, the MD5 value of the current date, current time, file name, file identification information and random number is computed, and that MD5 value is used as the directory identification information. The created cache directory may include cache regions that were divided in advance. The proxy side returns the directory identification information to the client.
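A minimal sketch of this identifier generation follows, assuming the MD5-over-attributes variant just described; the exact inputs and separator are illustrative.

```python
# Sketch of generating directory identification information (the fileId),
# assuming the MD5-over-attributes variant described above; inputs are illustrative.
import hashlib
import random
from datetime import datetime

def make_file_id(file_name, file_md5):
    now = datetime.now()
    material = "|".join([
        now.strftime("%Y-%m-%d"),      # current date
        now.strftime("%H:%M:%S.%f"),   # current time
        file_name,                     # file name of the data file
        file_md5,                      # file identification information
        str(random.getrandbits(64)),   # random number
    ])
    return hashlib.md5(material.encode("utf-8")).hexdigest()
```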
In a preferred embodiment of the invention, the proxy side may divide the cache regions in advance at initialization according to the requirements of the distributed object storage side, and the rated capacity of these cache regions may equal that of the cache regions on the distributed object storage side. The proxy side may associate all or some of the divided cache regions with the created cache directory. When multiple clients push data files to the proxy side, the proxy side may create a separate cache directory for each data file pushed by each client, so that the fragment data of each data file is stored in the cache regions associated with its own cache directory.
In step S33, a plurality of pieces of fragment data is acquired from the client.
In the embodiments of the invention, because the data file to be uploaded is large, the client needs to divide it into multiple pieces of fragment data. The data volume of the fragment data can be set by the client; the embodiments do not restrict the value, unit, or other aspects of the fragment data volume.
In practice, a client may divide a data file with a data volume of 75 MB into five pieces of fragment data numbered 1, 2, 3, 4 and 5, each with a data volume of 15 MB. The client may push the five pieces to the proxy side one after another in numbered order, or it may push all five pieces, or some of them, at the same time.
In a preferred embodiment of the invention, when the client pushes a piece of fragment data to the proxy side, it may also push the directory identification information of the cache directory corresponding to the data file to which that piece belongs, the starting byte index of the piece within that data file, and the data volume of the piece. The starting byte index indicates the position of the starting byte of the fragment data within the data file.
In step S34, the pieces of fragment data are stored to their corresponding cache regions.
In the embodiments of the invention, each piece of fragment data may consist of multiple bytes; receiving a piece of fragment data therefore means receiving a byte array. Each time the proxy side receives a piece of fragment data, it needs to determine the cache region corresponding to each byte of the byte array and then store each byte of the piece into its corresponding cache region.
In step S35, when a cache region is fully stored, the fragment data stored in that cache region is transmitted to the server side.
In the embodiments of the invention, a cache region being fully stored can be understood as the region being full: the sum of the data volumes of the fragment data stored in it equals its rated capacity, or no further bytes need to be stored in it. As soon as a region is fully stored, the proxy side can immediately transmit the fragment data in it to the distributed object storage side; it does not need to wait until all regions are fully stored.
In a preferred embodiment of the invention, while step S34 is being executed, the corresponding cache directory may be matched according to the directory identification information carried with the fragment data, which determines the cache regions available for that fragment data. The cache region corresponding to each byte is then determined from the data volume of the data file, the starting byte index of the fragment data within the data file, the rated capacity of the cache regions, and the length information of each byte, and finally each byte is stored into its corresponding cache region.
In practice, when determining the cache region corresponding to each byte, the data volume of the data file may be divided by the rated capacity of a cache region. If the division leaves no remainder, the quotient is taken as the number of cache regions used to cache the data file; if it leaves a remainder, the quotient plus a preset value (for example, 1) is taken as that number. Then, for each byte, the ratio of (the starting byte index plus the length information of the current byte) to the rated capacity, plus the preset value, is taken as the sequence number of the cache region corresponding to the current byte within the matched cache directory, and the cache region is determined from that sequence number and the number of cache regions. For example, if the rated capacity of a cache region is m, the data volume of the data file is n, the starting byte index of the fragment data within the data file is x and the byte length information is y, then the sequence number of the cache region corresponding to the current byte is (x + y)/m + 1.
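The formulas above can be checked with a small worked sketch; the numbers below are illustrative, reusing the 75 MB file and 32 MB cache region from the earlier examples.

```python
# Worked sketch of the formulas above (values are illustrative):
# n = 75 MB file, m = 32 MB rated capacity, preset value 1.
MB = 1024 * 1024
n, m, preset = 75 * MB, 32 * MB, 1

# Number of cache regions in the matched cache directory.
region_count = n // m if n % m == 0 else n // m + preset   # -> 3 regions

# Sequence number of the region holding a given byte:
# x = starting byte index of the fragment in the file, y = byte offset within it.
def region_seq(x, y):
    return (x + y) // m + preset

assert region_count == 3
assert region_seq(60 * MB, 0) == 2          # byte at file offset 60 MB -> 2nd region
assert region_seq(60 * MB, 10 * MB) == 3    # byte at file offset 70 MB -> 3rd region
```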
In a preferred embodiment of the invention, before the fragment data in fully stored cache regions is transmitted to the distributed object store, a detection thread may be started to cyclically detect whether each cache region is fully stored.
In practice, taking one cache region as an example, when the detection thread checks whether that cache region is fully stored, it may obtain the start position information and end position information of each byte stored in the cache region and then detect completeness from the start positions, the end positions and the rated capacity of the region. Specifically, bytes that meet a preset merging condition may be merged to obtain a merged byte combination, and it is then judged whether the amount of the cache region occupied by the byte combination equals the region's rated capacity. If the occupied amount equals the rated capacity, the cache region is fully stored; if it is smaller than the rated capacity, the region is not yet fully stored.
Referring to fig. 4, a schematic diagram of detecting whether a cache region is fully stored according to an embodiment of the invention is shown.
Suppose the rated capacity of the cache region is 32 MB, so its start and end positions are expressed as <0, 32MB>. Whenever bytes are stored into the cache region, their start and end positions within the region are recorded. As shown in fig. 4, parts A, B and C have already been stored in the cache region; each part represents bytes already stored there. The start and end positions of part A are <0, 6MB>, of part B <8MB, 16MB>, and of part C <26MB, 32MB>. Now suppose the bytes of part D, with start and end positions <5MB, 8MB>, are to be stored in the cache region. Because the start position 5MB of part D lies within the start and end positions <0, 6MB> of part A, part D overlaps part A, and merging parts A and D gives part AD with start and end positions <0, 8MB>. Because the end position 8MB of part AD lies within the start and end positions <8MB, 16MB> of part B, part AD adjoins part B, and merging them gives part ADB with start and end positions <0, 16MB>. If the bytes of further parts (Xxx) are stored in the cache region, merging continues until the merged part ADBXxxC has start and end positions <0, 32MB>, at which point the cache region is fully stored.
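A minimal sketch of this merging check follows, assuming that byte ranges which overlap or adjoin are merged (as in the A/D and AD/B cases above) and that a cache region counts as fully stored once a single merged range covers <0, capacity>.

```python
# Sketch of the completeness check illustrated in fig. 4: merge recorded
# <start, end> byte ranges and compare the merged span with the rated capacity.
def is_fully_stored(ranges, capacity):
    """`ranges` is a list of (start, end) positions recorded for stored bytes."""
    if not ranges:
        return False
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:      # overlapping or adjoining
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    # Fully stored once one merged part spans the whole region <0, capacity>.
    return len(merged) == 1 and merged[0] == [0, capacity]

MB = 1024 * 1024
parts = [(0, 6 * MB), (8 * MB, 16 * MB), (26 * MB, 32 * MB), (5 * MB, 8 * MB)]
print(is_fully_stored(parts, 32 * MB))   # False: <16MB, 26MB> is still missing
```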
With the data processing method provided by the embodiments of the invention, fragment data received from the client can be stored into cache regions in real time: each piece of fragment data is stored as soon as it is received, and as soon as a cache region is fully stored its fragment data is transmitted to the distributed object storage side. Suppose a client divides a data file into 3 pieces of fragment data and pushes them to the upload proxy service; the service stores each piece into the corresponding cache region as it arrives. Receiving the fragment data and storing it can run in parallel: whenever the upload proxy service receives a piece, it stores that piece into a cache region and, once a region is full, transmits the region's fragment data to the distributed object storage side. By the time the client has pushed the last piece, the distributed object storage side has already received the first two pieces, and the storage of the whole data file completes after only the last piece has been received. Compared with pushing the whole data file to the upload proxy service first and then having the service transmit it to the distributed object storage side, this shortens the time needed to transmit the data file to the distributed object storage side.
In the embodiments of the invention, the client can choose the data volume of the fragment data according to its actual needs and can push the fragment data to the upload proxy service in any order. Regardless of the order in which the client pushes the fragment data, the upload proxy service can store each received piece into the corresponding cache region in real time, so the receiving and storing of fragment data can run in parallel, which improves the efficiency of processing the fragment data.
In the embodiments of the invention, after the upload proxy service transmits the fragment data in a fully stored cache region to the distributed object storage side, it can immediately clear the fragment data in that cache region, which reduces the space the fragment data occupies in the upload proxy service and lowers its storage cost.
Referring to fig. 5, a block diagram of a data processing apparatus according to an embodiment of the present invention is shown. The device can be applied to a proxy end, and particularly comprises the following modules.
An obtaining module 51, configured to obtain multiple fragmented data from a client;
a setting module 52, configured to set a rated capacity of a cache region according to a data amount of the fragment data, where the data amount of the fragment data is smaller than the rated capacity of the cache region, and an integral multiple of the data amount of the fragment data is different from the rated capacity;
the storage module 53 is configured to store the plurality of sliced data into corresponding cache regions respectively;
a transmission module 54, configured to transmit the fragment data stored in the cache region to a server side when a sum of data amounts of the plurality of stored fragment data is the same as a rated capacity of the cache region.
In a preferred embodiment of the present invention, the obtaining module 51 is further configured to, when obtaining a plurality of fragment data from a client, obtain directory identification information of a cache directory corresponding to a data file from the client, start byte index information of the fragment data in the data file, and a data amount of the fragment data, where the start byte index information represents location information of a start byte of the fragment data in the data file, and the fragment data is obtained by dividing the data file.
In a preferred embodiment of the present invention, the storage module 53 includes:
the matching module is used for matching a corresponding cache directory according to the directory identification information, and the matched cache directory represents the cache directory where the cache region of the fragment data to be stored is located;
a determining module, configured to determine, according to a data size of the data file, the initial byte index information, the rated capacity, and length information of each byte of the sliced data, a cache region corresponding to each byte in the matched cache directory;
a byte storage module, configured to store each byte of the sliced data to the corresponding cache region.
In a preferred embodiment of the present invention, the determining module includes:
the quantity determining module is used for taking the ratio of the data quantity of the data file to the rated capacity or the sum of the ratio and a preset numerical value as the quantity of the cache areas in the matched cache directory;
a sequence number determining module, configured to take, for each byte, the ratio of (the starting byte index information plus the length information of the current byte) to the rated capacity, plus the preset value, as the sequence number of the cache region corresponding to the current byte within the matched cache directory;
and a cache region determining module, configured to determine, for each byte, the cache region corresponding to the current byte in the matched cache directory according to the sequence number and the number of the cache regions in the matched cache directory.
In a preferred embodiment of the present invention, the obtaining module 51 is further configured to obtain a data volume and attribute information of the data file before obtaining the plurality of fragmented data from the client, where the attribute information includes file identification information and/or transmission acceleration information of the data file;
the device further comprises:
the creating module is used for creating the cache directory;
and the establishing module is used for generating the directory identification information for the cache directory according to the attribute information and establishing the corresponding relation between the cache directory and the directory identification information.
In a preferred embodiment of the present invention, the apparatus further comprises: a starting module, configured to start a detection thread before the transmission module 54 transmits the fragment data stored in the cache region to a server;
the obtaining module 51 is further configured to obtain, by using the detection thread, start position information and end position information of each byte in each cache area in the cache directory;
the device further comprises: and the detection module is used for detecting whether the sum of the data quantity of the stored plurality of fragment data is the same as the rated capacity of the cache region or not according to the starting position information, the ending position information and the rated capacity.
In a preferred embodiment of the present invention, the detection module includes:
the byte merging module is used for merging a plurality of bytes meeting preset merging conditions to obtain a merged byte combination;
the capacity judging module is used for judging whether the occupation amount of the byte combination in the cache region is the same as the rated capacity;
a capacity determining module, configured to determine that a sum of data amounts of the stored multiple pieces of fragmented data is the same as a rated capacity of the cache region if the occupied amount is the same as the rated capacity; and if the occupied amount is different from the rated capacity, determining that the sum of the data amounts of the stored plurality of the fragment data is different from the rated capacity of the cache region.
In a preferred embodiment of the present invention, the merging condition is that the start position information of the byte is located between the start position information and the end position information of the other bytes, and/or the end position information of the byte is located between the start position information and the end position information of the other bytes.
An embodiment of the invention further provides an electronic device, as shown in fig. 6, comprising a processor 61, a communication interface 62, a memory 63 and a communication bus 64, where the processor 61, the communication interface 62 and the memory 63 communicate with one another through the communication bus 64,
a memory 63 for storing a computer program;
the processor 61 is configured to implement the following steps when executing the program stored in the memory 63:
step S1, acquiring a plurality of fragment data from a client, and setting the rated capacity of a cache region according to the data volume of the fragment data, wherein the data volume of the fragment data is smaller than the rated capacity of the cache region, and the integral multiple of the data volume of the fragment data is different from the rated capacity;
step S2, storing the fragment data to corresponding cache regions respectively;
step S3, when the sum of the data volumes of the stored plurality of fragment data is the same as the rated capacity of the cache region, transmitting the fragment data stored in the cache region to a server.
When step S1 is executed, directory identification information of a cache directory corresponding to a data file from the client, start byte index information of the fragment data in the data file, and a data amount of the fragment data may also be obtained, where the start byte index information represents location information of a start byte of the fragment data in the data file, and the fragment data is obtained by dividing the data file.
In the execution process of step S2, matching a corresponding cache directory according to the directory identification information, where the matched cache directory represents the cache directory where the cache region of the fragmented data is located; determining the cache region corresponding to each byte in the matched cache directory according to the data volume of the data file, the initial byte index information, the rated capacity and the length information of each byte of the fragment data; and storing each byte of the fragment data to the corresponding cache region.
In the course of the step of determining, according to the data volume of the data file, the starting byte index information, the rated capacity and the length information of each byte of the fragment data, the cache region corresponding to each byte within the matched cache directory, the ratio of the data volume of the data file to the rated capacity, or that ratio plus a preset value, is taken as the number of cache regions in the matched cache directory; for each byte, the ratio of (the starting byte index information plus the length information of the current byte) to the rated capacity, plus the preset value, is taken as the sequence number of the cache region corresponding to the current byte within the matched cache directory; and, for each byte, the cache region corresponding to that byte within the matched cache directory is determined according to the sequence number and the number of cache regions in the matched cache directory.
Before step S1 is executed, the data size and attribute information of the data file may also be acquired, where the attribute information includes file identification information and/or transmission acceleration information of the data file; creating the cache directory; and generating the directory identification information for the cache directory according to the attribute information, and establishing a corresponding relation between the cache directory and the directory identification information.
Before step S3 is executed, a detection thread may also be started; acquiring the initial position information and the end position information of each byte in each cache region in the cache directory by using the detection thread; and detecting whether the sum of the data amount of the stored plurality of fragment data is the same as the rated capacity of the cache region or not according to the starting position information, the ending position information and the rated capacity.
When the sum of the data amounts of the plurality of stored fragment data is detected to be the same as the rated capacity of the cache region according to the starting position information, the ending position information and the rated capacity, merging the plurality of bytes meeting a preset merging condition to obtain a merged byte combination; judging whether the occupation amount of the byte combination in the cache region is the same as the rated capacity or not; if the occupied amount is the same as the rated capacity, determining that the sum of the data amounts of the stored plurality of fragment data is the same as the rated capacity of the cache region; and if the occupied amount is different from the rated capacity, determining that the sum of the data amounts of the stored plurality of the fragment data is different from the rated capacity of the cache region. The merging condition is that the start position information of the byte is located between the start position information and the end position information of other bytes, and/or the end position information of the byte is located between the start position information and the end position information of other bytes.
The communication bus mentioned for the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The memory may include a Random Access Memory (RAM) or a non-volatile memory, for example at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In another embodiment of the present invention, a computer-readable storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to execute the method for processing data described in any of the above embodiments.
In a further embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of processing data as described in any of the above embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second are used merely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements, but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in this specification are described in a related manner; the same or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, its description is brief; for relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (11)

1. A method for processing data, comprising:
acquiring a plurality of fragment data from a client, and setting the rated capacity of a cache region according to the data volume of the fragment data, wherein the data volume of the fragment data is smaller than the rated capacity of the cache region, and the integral multiple of the data volume of the fragment data is different from the rated capacity;
respectively storing the fragment data to corresponding cache regions;
and when the sum of the data volumes of the stored plurality of fragment data is the same as the rated capacity of the cache region, transmitting the fragment data stored in the cache region to a server side.
2. The method of claim 1, wherein, in the step of acquiring the plurality of fragment data from the client, the method further comprises:
obtaining, from the client, directory identification information of a cache directory corresponding to a data file, initial byte index information of the fragment data in the data file, and a data volume of the fragment data, wherein the initial byte index information represents position information of an initial byte of the fragment data in the data file, and the fragment data is obtained by dividing the data file.
3. The method according to claim 2, wherein the step of storing the plurality of fragment data into corresponding cache regions respectively comprises:
matching a corresponding cache directory according to the directory identification information, wherein the matched cache directory represents the cache directory where the cache region of the fragment data to be stored is located;
determining the cache region corresponding to each byte in the matched cache directory according to the data volume of the data file, the initial byte index information, the rated capacity and the length information of each byte of the fragment data;
and storing each byte of the fragment data to the corresponding cache region.
4. The method according to claim 3, wherein the step of determining the cache region corresponding to each byte in the matched cache directory according to the data volume of the data file, the initial byte index information, the rated capacity, and the length information of each byte of the fragment data comprises:
taking the ratio of the data volume of the data file to the rated capacity, or the sum of the ratio and a preset numerical value, as the number of cache regions in the matched cache directory;
for each byte, taking the ratio of the sum of the initial byte index information and the length information of the current byte to the rated capacity, plus the preset numerical value, as the sequence number of the cache region corresponding to the current byte in the matched cache directory;
and for each byte, determining the cache region corresponding to the byte in the matched cache directory according to the sequence number and the number of the cache regions in the matched cache directory.
5. The method according to claim 3 or 4, wherein before the step of acquiring the plurality of fragment data from the client, the method further comprises:
acquiring the data volume and attribute information of the data file, wherein the attribute information comprises file identification information and/or transmission acceleration information of the data file;
creating the cache directory;
and generating the directory identification information for the cache directory according to the attribute information, and establishing a corresponding relation between the cache directory and the directory identification information.
6. The method according to claim 5, wherein before the step of transmitting the fragment data stored in the cache region to a server side, the method further comprises:
starting a detection thread;
acquiring the initial position information and the end position information of each byte in each cache region in the cache directory by using the detection thread;
and detecting whether the sum of the data amount of the stored plurality of fragment data is the same as the rated capacity of the cache region or not according to the starting position information, the ending position information and the rated capacity.
7. The method according to claim 6, wherein the step of detecting whether the sum of the data amounts of the stored plurality of fragment data is the same as the rated capacity of the cache region according to the start position information, the end position information and the rated capacity comprises:
merging a plurality of bytes meeting preset merging conditions to obtain a merged byte combination;
judging whether the occupation amount of the byte combination in the cache region is the same as the rated capacity or not;
if the occupied amount is the same as the rated capacity, determining that the sum of the data amounts of the stored plurality of fragment data is the same as the rated capacity of the cache region;
and if the occupied amount is different from the rated capacity, determining that the sum of the data amounts of the stored plurality of fragment data is different from the rated capacity of the cache region.
8. The method according to claim 7, wherein the merging condition is that the start position information of a byte is located between the start position information and the end position information of another byte, and/or the end position information of a byte is located between the start position information and the end position information of another byte.
9. An apparatus for processing data, comprising:
the acquisition module is used for acquiring a plurality of fragment data from the client;
the setting module is used for setting the rated capacity of a cache region according to the data volume of the fragment data, wherein the data volume of the fragment data is smaller than the rated capacity of the cache region, and the integral multiple of the data volume of the fragment data is different from the rated capacity;
the storage module is used for respectively storing the fragment data to corresponding cache regions;
and the transmission module is used for transmitting the fragment data stored in the cache region to a server side when the sum of the data volumes of the stored plurality of fragment data is the same as the rated capacity of the cache region.
10. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 8 when executing a program stored in the memory.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN201911400667.XA 2019-12-30 2019-12-30 Data processing method and device Pending CN111198885A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911400667.XA CN111198885A (en) 2019-12-30 2019-12-30 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911400667.XA CN111198885A (en) 2019-12-30 2019-12-30 Data processing method and device

Publications (1)

Publication Number Publication Date
CN111198885A true CN111198885A (en) 2020-05-26

Family

ID=70746390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911400667.XA Pending CN111198885A (en) 2019-12-30 2019-12-30 Data processing method and device

Country Status (1)

Country Link
CN (1) CN111198885A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100017600A1 (en) * 2008-07-15 2010-01-21 Viasat, Inc. Secure neighbor cache preload
CN104679830A (en) * 2015-01-30 2015-06-03 乐视网信息技术(北京)股份有限公司 File processing method and device
CN105677754A (en) * 2015-12-30 2016-06-15 华为技术有限公司 Method, apparatus and system for acquiring subitem metadata in file system
CN106021381A (en) * 2016-05-11 2016-10-12 北京搜狐新媒体信息技术有限公司 Data access/storage method and device for cloud storage service system
CN108712454A (en) * 2018-02-13 2018-10-26 阿里巴巴集团控股有限公司 A kind of document handling method, device and equipment
CN109831506A (en) * 2019-01-31 2019-05-31 百度在线网络技术(北京)有限公司 File uploading method, device, terminal, server and readable storage medium storing program for executing

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112199342A (en) * 2020-11-04 2021-01-08 江苏特思达电子科技股份有限公司 File uploading method and device and computer equipment
CN112637242A (en) * 2021-01-06 2021-04-09 新华三技术有限公司 Data transmission method and device, electronic equipment and storage medium
CN114301931A (en) * 2022-03-11 2022-04-08 上海凯翔信息科技有限公司 Data synchronization system based on cloud NAS

Similar Documents

Publication Publication Date Title
CN108108127B (en) File reading method and system
CN111198885A (en) Data processing method and device
JP2012089094A5 (en)
US20200301944A1 (en) Method and apparatus for storing off-chain data
CN111291002B (en) File account checking method, device, computer equipment and storage medium
CN111782707A (en) Data query method and system
CN113268439A (en) Memory address searching method and device, electronic equipment and storage medium
CN111443899B (en) Element processing method and device, electronic equipment and storage medium
CN117036115A (en) Contract data verification method, device and server
CN113411364B (en) Resource acquisition method and device and server
CN113485921B (en) File system testing method, device, equipment and storage medium
CN112269665B (en) Memory processing method and device, electronic equipment and storage medium
CN114675776A (en) Resource storage method and device, storage medium and electronic equipment
CN109889608B (en) Dynamic resource loading method and device, electronic equipment and storage medium
CN111061719B (en) Data collection method, device, equipment and storage medium
CN112597119A (en) Method and device for generating processing log and storage medium
CN113419901A (en) Data disaster recovery method and device and server
CN114968963A (en) File overwriting method and device and electronic equipment
CN109491699B (en) Resource checking method, device, equipment and storage medium of application program
CN112910936A (en) Data processing method, device and system, electronic equipment and readable storage medium
CN115037792B (en) Information acquisition method and device, electronic equipment and storage medium
CN112527787B (en) Safe and reliable multiparty data deduplication system, method and device
CN113254483B (en) Request processing method and device, electronic equipment and storage medium
WO2022222665A1 (en) Request recognition method and apparatus, and device and storage medium
CN113157645B (en) Cluster data migration method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination