CN113364848B - File caching method and device, electronic equipment and storage medium - Google Patents

File caching method and device, electronic equipment and storage medium

Info

Publication number
CN113364848B
CN113364848B (Application No. CN202110607882.8A)
Authority
CN
China
Prior art keywords
file
server
client
sub
access request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110607882.8A
Other languages
Chinese (zh)
Other versions
CN113364848A (en)
Inventor
陈欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Bank Co Ltd
Original Assignee
Ping An Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Bank Co Ltd filed Critical Ping An Bank Co Ltd
Priority to CN202110607882.8A priority Critical patent/CN113364848B/en
Publication of CN113364848A publication Critical patent/CN113364848A/en
Application granted granted Critical
Publication of CN113364848B publication Critical patent/CN113364848B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0893Assignment of logical groups to network elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/06Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]

Abstract

The invention relates to the field of data processing and discloses a file caching method, which comprises the following steps: receiving a server access request transmitted by a client, and verifying the server access request; querying a server file corresponding to the server access request when the server access request passes verification; slicing the server file into a plurality of sub-fragment files and storing the sub-fragment files into a cache node space of the server; transmitting the sub-fragment files in the cache node space to the client and combining them to obtain a client file; identifying whether the client file is consistent with the server file; if the two files are inconsistent, re-querying the server file corresponding to the server access request; if the client file is consistent with the server file, storing the client file into a cache of the client. In addition, the invention also relates to blockchain technology, and the server file can be stored in a blockchain. The invention can improve the caching efficiency of large files.

Description

File caching method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing, and in particular, to a file caching method, a device, an electronic apparatus, and a computer readable storage medium.
Background
A file cache generally stores the web page content accessed by a user in a local client cache or a browser cache, so that the next time the user accesses the web page the content can be queried directly from the local client cache or the browser cache, thereby improving the response speed of the web page. At present, large files, ranging from tens of MB to several GB or more, easily run into the following problems when they are cached: 1. there are storage space requirements, so caching schemes such as redis cannot be used; 2. there are traffic requirements: transmission of a large file is easily interrupted during caching, and an interrupted transfer must be restarted from the beginning, which greatly wastes traffic and consumes more time.
Therefore, a file caching scheme is needed to solve the above-mentioned problems of large file caching.
Disclosure of Invention
The invention provides a file caching method, a file caching device, electronic equipment and a computer readable storage medium, and aims to improve caching efficiency of large files.
In order to achieve the above object, the present invention provides a file caching method, including:
receiving a server access request transmitted by a client, and verifying the server access request;
inquiring a server file corresponding to the server access request when the server access request passes verification;
the server side file is subjected to storage fragmentation to generate a plurality of sub-fragmented files, and the sub-fragmented files are stored into a cache node space which is created in the server side in advance;
transmitting the sub-fragment files of the cache node space to the client side, and then combining to obtain a client side file;
identifying whether the client file is consistent with the server file;
if the client file is inconsistent with the server file, re-querying the server file corresponding to the server access request;
and if the client file is consistent with the server file, storing the client file into a cache of the client.
Optionally, the verifying the server access request includes:
acquiring a user identifier of the server access request;
inquiring whether the user identifier exists at the service end corresponding to the client;
if not, the user identification is registered in the corresponding server of the client and then the access request of the server is received again;
and if so, executing the access of the server access request.
Optionally, the querying the server file corresponding to the server access request includes:
acquiring a browsing record of the server access request in a server;
compiling the browsing record into a log file by using a log generating tool;
and screening the user demand file from the log file to obtain a server file.
Optionally, the storing and slicing the server file to generate a plurality of sub-sliced files includes:
performing length slicing on the server-side file based on a preset slicing length to obtain a plurality of length slicing files;
and carrying out fragment number identification on each length fragment file to obtain a plurality of sub-fragment files.
Optionally, the transmitting the sub-fragment files in the cache node space to the client includes:
receiving a file cache demand of a client, and identifying a file fragment number of the file cache demand;
inquiring the corresponding sub-fragment files from the cache node space according to the file fragment numbers;
and transmitting the queried sub-fragmented files to the client by using a pre-created file transmission channel.
Optionally, the identifying whether the client file is consistent with the server file includes:
respectively calculating md5 values of the client file and the server file to obtain a client file md5 value and a server file md5 value;
if the md5 value of the client file is inconsistent with the md5 value of the server file, judging that the client file is inconsistent with the server file;
and if the md5 value of the client file is consistent with the md5 value of the server file, judging that the client file is consistent with the server file.
Optionally, the calculating the md5 value of the client file includes:
the md5 value of the client file is calculated using the following method:
fakeMd5_expect = Σ md5_i
wherein fakeMd5_expect represents the md5 value of the client file, md5_i represents the file signature of the i-th sub-fragment file of the client file, and i represents the fragment number of the sub-fragment file.
In order to solve the above problems, the present invention further provides a file caching apparatus, including:
the verification module is used for receiving a server access request transmitted by a client and verifying the server access request;
the query module is used for querying a server file corresponding to the server access request when the server access request passes the verification;
the slicing module is used for storing and slicing the server side file, generating a plurality of sub-slicing files, and storing the sub-slicing files into a cache node space created in the server side in advance;
the combining module is used for transmitting the sub-fragment files of the cache node space to the client and then combining the sub-fragment files to obtain a client file;
the identification module is used for identifying whether the client file is consistent with the server file or not;
the identification module is further configured to re-query a server file corresponding to the server access request when the client file is inconsistent with the server file;
the identification module is further configured to store the client file into a cache of the client when the client file is consistent with the server file.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to implement the file caching method described above.
In order to solve the above-mentioned problems, the present invention also provides a computer-readable storage medium having stored therein at least one computer program that is executed by a processor in an electronic device to implement the above-mentioned file caching method.
The embodiment of the invention first verifies the server access request transmitted by the client and, when the server access request passes verification, queries the server file corresponding to the server access request, so that whether the server access request is legal can be identified, which ensures the normal operation of subsequent service access and the security of service access; secondly, the embodiment of the invention slices the server file into a plurality of sub-fragment files and stores them into a cache node space created in the server in advance, thereby realizing fragmented storage of the server file and satisfying the storage space requirement of a large file, while the cache node space also ensures that, when transmission of the server file fails, transmission can be resumed from the previously failed file; further, in the embodiment of the invention, the sub-fragment files in the cache node space are transmitted to the client and then combined to obtain the client file, and by storing the client file in the cache of the client only after identifying that the client file is consistent with the server file, the latest state of the client file can be ensured. Therefore, the file caching method, the file caching device, the electronic equipment and the storage medium can improve the caching efficiency of large files.
Drawings
FIG. 1 is a flowchart illustrating a method for file caching according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart illustrating one of the steps of the file caching method provided in FIG. 1 according to a first embodiment of the present invention;
FIG. 3 is a detailed flowchart illustrating another step of the file caching method provided in FIG. 1 according to the first embodiment of the present invention;
FIG. 4 is a schematic diagram of a file caching apparatus according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an internal structure of an electronic device for implementing a file caching method according to an embodiment of the present invention;
the achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the application provides a file caching method. The execution body of the file caching method includes, but is not limited to, at least one of a server, a terminal and the like capable of being configured to execute the method provided by the embodiment of the application. In other words, the file caching method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The service end includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Referring to fig. 1, a flowchart of a file caching method according to an embodiment of the invention is shown. In an embodiment of the present invention, the file caching method includes:
s1, receiving a server access request transmitted by a client, and verifying the server access request.
In a preferred embodiment of the present invention, the client may also be referred to as a mobile terminal, and is configured to perform web access, including: cell phones, tablets, PCs, etc. The server access request refers to a requirement of accessing a certain service in the server, for example, inquiring a logistics condition of a mall order, searching an IP address of a certain server, and viewing a short video of a web page.
Further, the verifying the server access request includes: acquiring a user identifier of the server access request; inquiring whether the user identifier exists at a server corresponding to the client; if not, registering a user identifier in the server corresponding to the client, and then re-receiving a server access request; and if the access request exists, executing the access of the server access request.
The user identifier refers to a unique identifier for characterizing user identity information (its specific form is not limited here). According to the user identifier, whether the server access request is legitimate can be identified, thereby ensuring the normal operation of subsequent service access and the security of service access.
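Purely as an illustration, a minimal sketch of this verification step is given below; the request structure, the "user_id" field and the in-memory set of registered identifiers are assumptions for the example, not details taken from the patent.

```python
# Minimal illustrative sketch of the verification step (hypothetical names).
registered_users = {"user-001", "user-002"}  # identifiers already known to the server

def verify_access_request(request: dict) -> bool:
    """Return True if the access may proceed, False if the client must resend the request."""
    user_id = request["user_id"]            # user identifier carried by the access request
    if user_id not in registered_users:
        registered_users.add(user_id)       # register the identifier at the server
        return False                        # ask the client to send the access request again
    return True                             # identifier exists: execute the access
```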
S2, inquiring a server file corresponding to the server access request when the server access request passes verification.
After the verification of the server access request is passed, the embodiment of the invention queries the server file corresponding to the server access request. The server file includes information generated after the user browses through a browser, such as the videos, text and pictures browsed by the user. It should be noted that, in the embodiment of the present invention, the server file is a large file, that is, its size is greater than 10 MB.
In detail, the querying the server file corresponding to the server access request includes: acquiring a browsing record of the server access request in a server; compiling the browsing record into a log file by using a log generating tool; and screening the user demand file from the log file to obtain a server file.
The browsing records are obtained through a web crawler technology, which may be implemented with Node.js; the log generation tool may be written as a JavaScript script and is used for compiling the browsing records into a file in log form, so that the browsing traces of the server access request at the server can be understood more intuitively.
Further, in the embodiment of the present invention, the user requirement file is screened by a get() method. For example, suppose the browsing record of the server access request obtained at the server consists of a video record, a picture record and a text record, and these records are compiled into a log file. When a user demand to acquire a picture A and a picture B is received, the embodiment of the invention queries the picture A and the picture B from the browsing record, screens the picture A and the picture B from the picture record by using the get() method, and thereby obtains the server file.
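As an illustration only, the sketch below shows one way such screening could look; the log entry format and the get() helper are assumptions for the example and are not the patent's API.

```python
# Illustrative log entries compiled from the browsing record (format is an assumption).
log_file = [
    {"type": "video", "name": "video-1"},
    {"type": "picture", "name": "picture-A"},
    {"type": "picture", "name": "picture-B"},
    {"type": "text", "name": "text-1"},
]

def get(log: list, wanted_names: set) -> list:
    """Screen the user demand files from the log file to obtain the server file list."""
    return [entry for entry in log if entry["name"] in wanted_names]

server_files = get(log_file, {"picture-A", "picture-B"})  # -> the two requested pictures
```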
Further, in another embodiment of the present invention, after the querying of the server file corresponding to the server access request, the method further includes: querying whether duplicate files exist among the server files; if duplicates exist, deleting any one of the duplicated server files, and if no duplicates exist, leaving the server files unprocessed, so as to avoid redundancy among the server files and to free system resources of the server.
Further, to ensure the security and privacy of the server file, the server file may also be stored in a blockchain node.
S3, storing the server side file into fragments, generating a plurality of sub-fragment files, and storing the sub-fragment files into a cache node space created in the server side in advance.
Because the server file occupies a large amount of storage space, directly caching it easily causes insufficient cache space, so that subsequent server files cannot be queried normally; for this reason, the embodiment of the invention stores the server file in fragments.
In detail, referring to fig. 2, the storing and slicing the server file to generate a plurality of sub-sliced files includes:
s20, performing length slicing on the server-side file based on a preset slicing length to obtain a plurality of length slicing files;
s21, carrying out fragment number identification on each length fragment file to obtain a plurality of sub-fragment files.
The preset fragment length is set according to the size of the corresponding server file. For example: if the size of server-side file a is 5T and the preset fragment length is 1T, the embodiment of the invention may divide server-side file a into 5 length fragment files in sequence, namely length fragment file a\0, length fragment file a\1, length fragment file a\2, length fragment file a\3 and length fragment file a\4.
Further, the fragment number is used as a unique identifier of the corresponding length fragment file. Preferably, the embodiment of the invention identifies each length fragment file with an id as its fragment number; for example, the fragment number of length fragment file a\0 may be set to id:0, the fragment number of length fragment file a\1 may be set to id:1, the fragment number of length fragment file a\2 may be set to id:2, and so on.
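For illustration, a minimal sketch of the length slicing and fragment numbering described above is given next; the 1 MB fragment length, the file-naming scheme and the output directory are assumptions for the example, not values prescribed by the patent.

```python
import os

FRAGMENT_LENGTH = 1024 * 1024  # assumed preset fragment length: 1 MB per fragment

def slice_file(path: str, out_dir: str, fragment_length: int = FRAGMENT_LENGTH) -> list:
    """Split the server-side file into length fragments and tag each with a fragment number (id)."""
    os.makedirs(out_dir, exist_ok=True)
    fragments = []
    with open(path, "rb") as src:
        fragment_id = 0
        while True:
            chunk = src.read(fragment_length)
            if not chunk:
                break
            # the fragment number (id) uniquely identifies the sub-fragment file
            fragment_path = os.path.join(out_dir, f"{os.path.basename(path)}.{fragment_id}")
            with open(fragment_path, "wb") as dst:
                dst.write(chunk)
            fragments.append({"id": fragment_id, "path": fragment_path})
            fragment_id += 1
    return fragments
```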
Further, the embodiment of the invention stores the sub-fragment files into a cache node space created in the server in advance, where the cache node space refers to an edge node space of the server and is used for improving the transmission speed of the server file between the server and the client. Preferably, in the embodiment of the present invention, the cache node space is created according to the distributed cache content, where the distributed cache content may be an operator, a region, or the like; for example, one cache node space is created in South China, one in East China and one in North China, so as to improve the access speed of the server file.
It should be noted that, if storing the sub-fragment files into the cache node space fails, the embodiment of the invention supports resuming storage from the sub-fragment file that previously failed to be stored. For example, if ten sub-fragment files need to be stored into the cache node space and storage of the sixth sub-fragment file fails because the server goes down, then when the server resumes normal operation the embodiment of the invention continues storing from the sixth sub-fragment file, thereby improving file storage efficiency. Optionally, the invention supports resuming storage of the previously failed sub-fragment file through a monitoring tool, which may be written in the Java language.
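A minimal sketch of this resumable storage behaviour follows; it assumes the cache node space is modelled as a directory and that a fragment already present there counts as successfully stored. These are illustrative assumptions, not the patent's implementation.

```python
import os
import shutil

def store_fragments_resumable(fragments: list, cache_node_dir: str) -> None:
    """Store sub-fragment files into the cache node space, skipping fragments already stored,
    so that a failed run resumes from the first fragment that was not stored."""
    os.makedirs(cache_node_dir, exist_ok=True)
    for fragment in fragments:                        # fragments as produced by slice_file()
        target = os.path.join(cache_node_dir, os.path.basename(fragment["path"]))
        if os.path.exists(target):                    # already stored before the failure: skip
            continue
        shutil.copyfile(fragment["path"], target)     # store this sub-fragment file
```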
And S4, transmitting the sub-fragment files of the cache node space to the client side, and then combining to obtain the client side file.
In a preferred embodiment of the present invention, referring to fig. 3, the transmitting the sub-fragment files in the cache node space to the client includes:
s30, receiving a file cache requirement of a client, and identifying a file fragment number of the file cache requirement;
s31, inquiring the corresponding sub-fragment files from the cache node space according to the file fragment numbers;
s32, transmitting the queried sub-fragment files to the client by using a pre-created file transmission channel.
Specifically, the file cache requirement is input based on a user requirement, for example, the picture A and the picture B are acquired, and the file fragment number is acquired by querying a file id of the file cache requirement.
The querying of the corresponding sub-fragment files from the cache node space according to the file fragment numbers includes: querying the sub-fragment files with the same file fragment numbers from the cache node space by using a query statement, where the query statement includes a select statement.
In an alternative embodiment, the pre-created file transfer channel may be configured with currently known message middleware, such as: MQ message middleware.
Further, the embodiment of the invention combines the sub-fragment files transmitted to the client to obtain the client file so as to ensure the integrity of the corresponding server file.
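For illustration only, the sketch below shows one way the requested fragments could be fetched by fragment number and combined at the client in ascending fragment-number order; the directory layout and naming convention continue the assumptions of the earlier slicing sketch.

```python
import os

def combine_fragments(cache_node_dir: str, base_name: str,
                      fragment_ids: list, client_path: str) -> None:
    """Fetch the requested sub-fragment files by fragment number and concatenate them
    in ascending fragment-number order to rebuild the client file."""
    with open(client_path, "wb") as client_file:
        for fragment_id in sorted(fragment_ids):
            fragment_path = os.path.join(cache_node_dir, f"{base_name}.{fragment_id}")
            with open(fragment_path, "rb") as fragment:
                client_file.write(fragment.read())    # append the fragment's bytes in order
```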
S5, identifying whether the client file is consistent with the server file.
In the embodiment of the invention, whether the client file is in the latest file state is identified through the md5 message-digest algorithm. The md5 message-digest algorithm is a widely used cryptographic hash function that generates a 128-bit (16-byte) hash value, which is used here to ensure that the file is transmitted completely and consistently.
In detail, the identifying whether the client file is consistent with the server file includes:
calculating md5 values of the client file and the server file to obtain a client file md5 value and a server file md5 value, judging that the client file is inconsistent with the server file if the client file md5 value is inconsistent with the server file md5 value, and judging that the client file is consistent with the server file if the client file md5 value is consistent with the server file md5 value.
In an alternative embodiment, the md5 value of the client file is calculated using the following method:
fakeMd5_expect = Σ md5_i
wherein fakeMd5_expect represents the md5 value of the client file, md5_i represents the file signature of the i-th sub-fragment file of the client file, and i represents the fragment number of the sub-fragment file.
Further, the method for calculating the md5 value of the server file is the same as the method for calculating the md5 value of the client file, which is not further described.
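A minimal sketch of the per-fragment checksum described by the formula above is given below. It treats each fragment's md5 digest as an integer and sums the values; the integer interpretation is an illustrative assumption, since the patent only specifies that the fragment signatures are summed.

```python
import hashlib

def fragment_md5(data: bytes) -> int:
    """File signature of one sub-fragment file: its md5 digest interpreted as an integer."""
    return int.from_bytes(hashlib.md5(data).digest(), "big")

def fake_md5(fragment_payloads: list) -> int:
    """fakeMd5_expect = sum of the md5 signatures of all sub-fragment files."""
    return sum(fragment_md5(payload) for payload in fragment_payloads)

# The same calculation is applied to the server-side fragments; the client file is taken
# as consistent with the server file only when both aggregate values are equal.
```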
And if the client file is inconsistent with the server file, re-executing S2, and inquiring the server file corresponding to the server access request.
In the embodiment of the invention, when the client file is inconsistent with the server file, the client file is identified not to be in the latest file state, so that the latest state of the client file is ensured by re-inquiring the server file corresponding to the server access request.
And if the client file is consistent with the server file, executing S6, and storing the client file into a cache of the client.
In the embodiment of the invention, when the client file is consistent with the server file, the client file can be identified as being in the latest file state, so the client file is stored in the client cache; the next time the user accesses the web page, the file can be queried directly from the client cache, which improves the response speed of the web page. The cache of the client may be a space opened up on a disk of the client and is used for storing the server file and improving its reading speed.
The embodiment of the invention first verifies the server access request transmitted by the client and, when the server access request passes verification, queries the server file corresponding to the server access request, so that whether the server access request is legal can be identified, which ensures the normal operation of subsequent service access and the security of service access; secondly, the embodiment of the invention slices the server file into a plurality of sub-fragment files and stores them into a cache node space created in the server in advance, thereby realizing fragmented storage of the server file and satisfying the storage space requirement of a large file, while the cache node space also ensures that, when transmission of the server file fails, transmission can be resumed from the previously failed file; further, in the embodiment of the invention, the sub-fragment files in the cache node space are transmitted to the client and then combined to obtain the client file, and by storing the client file in the cache of the client only after identifying that the client file is consistent with the server file, the latest state of the client file can be ensured. Therefore, the invention can improve the caching efficiency of large files.
FIG. 4 is a functional block diagram of the file caching apparatus of the present invention.
The file caching apparatus 100 of the present invention may be installed in an electronic device. The file caching apparatus may include a verification module 101, a query module 102, a fragmentation module 103, a combination module 104, and an identification module 105, depending on the functions implemented. The module of the present invention may also be referred to as a unit, meaning a series of computer program segments capable of being executed by the processor of the electronic device and of performing fixed functions, stored in the memory of the electronic device.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the verification module 101 is configured to receive a server access request transmitted by a client, and verify the server access request;
the query module 102 is configured to query a server file corresponding to the server access request when the server access request passes the verification;
the slicing module 103 is configured to store the server side file into slices, generate a plurality of sub-slice files, and store the sub-slice files into a cache node space created in the server side in advance;
the combination module 104 is configured to transmit the sub-fragment files of the cache node space to the client and then combine the sub-fragment files to obtain a client file;
the identifying module 105 is configured to identify whether the client file is consistent with the server file;
the identification module 105 is further configured to re-query a server file corresponding to the server access request when the client file is inconsistent with the server file;
the identification module 105 is further configured to store the client file in a cache of the client when the client file is consistent with the server file.
In detail, the modules in the file caching apparatus 100 in the embodiment of the present invention use the same technical means as the file caching method described in fig. 1 to 3, and can produce the same technical effects, which are not described herein.
Fig. 5 is a schematic structural diagram of an electronic device for implementing the file caching method according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a file cache program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, a mobile hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may, in some embodiments, be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. The memory 11 may, in other embodiments, also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as the code of the file caching program, but also for temporarily storing data that has been output or is to be output.
The processor 10 may, in some embodiments, be composed of integrated circuits, for example a single packaged integrated circuit, or may be composed of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is the control unit of the electronic device; it connects the various components of the entire electronic device using various interfaces and lines, and executes various functions of the electronic device 1 and processes data by running or executing programs or modules stored in the memory 11 (e.g., the file caching program) and calling data stored in the memory 11.
The bus may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable a connection communication between the memory 11 and at least one processor 10 etc.
Fig. 5 shows only an electronic device with components, it being understood by a person skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or may combine certain components, or may be arranged in different components.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
Further, the electronic device 1 may also comprise a network interface, optionally the network interface may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the electronic device 1 and other electronic devices.
The electronic device 1 may optionally further comprise a user interface, which may be a Display, an input unit, such as a Keyboard (Keyboard), or a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only and do not limit the scope of the patent application to this configuration.
The file caching program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed by the processor 10, may implement:
receiving a server access request transmitted by a client, and verifying the server access request;
inquiring a server file corresponding to the server access request when the server access request passes verification;
the server side file is subjected to storage fragmentation to generate a plurality of sub-fragmented files, and the sub-fragmented files are stored into a cache node space which is created in the server side in advance;
transmitting the sub-fragment files of the cache node space to the client side, and then combining to obtain a client side file;
identifying whether the client file is consistent with the server file;
if the client file is inconsistent with the server file, re-querying the server file corresponding to the server access request;
and if the client file is consistent with the server file, storing the client file into a cache of the client.
Specifically, the specific implementation method of the above instructions by the processor 10 may refer to the description of the relevant steps in the corresponding embodiment of fig. 1, which is not repeated herein.
Further, the integrated modules/units of the electronic device 1 may be stored in a non-volatile computer readable storage medium if implemented in the form of software functional units and sold or used as a stand alone product. The computer readable storage medium may be volatile or nonvolatile. For example, the computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM).
The present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor of an electronic device, can implement:
receiving a server access request transmitted by a client, and verifying the server access request;
inquiring a server file corresponding to the server access request when the server access request passes verification;
the server side file is subjected to storage fragmentation to generate a plurality of sub-fragmented files, and the sub-fragmented files are stored into a cache node space which is created in the server side in advance;
transmitting the sub-fragment files of the cache node space to the client side, and then combining to obtain a client side file;
identifying whether the client file is consistent with the server file;
if the client file is inconsistent with the server file, re-querying the server file corresponding to the server access request;
and if the client file is consistent with the server file, storing the client file into a cache of the client.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, encryption algorithm and the like. The Blockchain (Blockchain), which is essentially a decentralised database, is a string of data blocks that are generated by cryptographic means in association, each data block containing a batch of information of network transactions for verifying the validity of the information (anti-counterfeiting) and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the system claims can also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (10)

1. A method for caching a file, the method comprising:
receiving a server access request transmitted by a client, and verifying the server access request;
inquiring a server file corresponding to the server access request when the server access request passes verification;
storing and slicing the server side file to generate a plurality of sub-sliced files, and storing the sub-sliced files into a cache node space which is created in the server side in advance, wherein if the sub-sliced files fail to be stored into the cache node space, storing of the sub-sliced files into the cache node space is resumed, once normal operation is restored, from the last sub-sliced file that was not stored, and wherein the cache node space is an edge node space of the server side;
transmitting the sub-fragment files of the cache node space to the client side, and then combining to obtain a client side file;
identifying whether the client file is consistent with the server file;
if the client file is inconsistent with the server file, re-querying the server file corresponding to the server access request;
and if the client file is consistent with the server file, storing the client file into a cache of the client.
2. The method of claim 1, wherein validating the server access request comprises:
acquiring a user identifier of the server access request;
inquiring whether the user identifier exists at the service end corresponding to the client;
if not, the user identification is registered in the corresponding server of the client and then the access request of the server is received again;
and if so, executing the access of the server access request.
3. The method of claim 1, wherein querying the server file corresponding to the server access request comprises:
acquiring a browsing record of the server access request in a server;
compiling the browsing record into a log file by using a log generating tool;
and screening the user demand file from the log file to obtain a server file.
4. The method of claim 1, wherein the storing and slicing the server file to generate a plurality of sub-sliced files includes:
performing length slicing on the server-side file based on a preset slicing length to obtain a plurality of length slicing files;
and carrying out fragment number identification on each length fragment file to obtain a plurality of sub-fragment files.
5. The method of claim 4, wherein transmitting the sub-fragment files in the cache node space to the client comprises:
receiving a file cache demand of a client, and identifying a file fragment number of the file cache demand;
inquiring the corresponding sub-fragment files from the cache node space according to the file fragment numbers;
and transmitting the queried sub-fragmented files to the client by using a pre-created file transmission channel.
6. The file caching method according to any one of claims 1 to 5, wherein the identifying whether the client file is consistent with the server file includes:
respectively calculating md5 values of the client file and the server file to obtain a client file md5 value and a server file md5 value;
if the md5 value of the client file is inconsistent with the md5 value of the server file, judging that the client file is inconsistent with the server file;
and if the md5 value of the client file is consistent with the md5 value of the server file, judging that the client file is consistent with the server file.
7. The file caching method as claimed in claim 6, wherein said calculating an md5 value of said client file comprises:
the md5 value of the client file is calculated using the following method:
fakeMd5 expect =∑md5 i
wherein fakeMd5 expect Represents the client file md5 value, md5 i The file signature of the ith sub-fragment file of the client file is represented, and i represents the fragment number of the sub-fragment file.
8. A file caching apparatus, the apparatus comprising:
the verification module is used for receiving a server access request transmitted by a client and verifying the server access request;
the query module is used for querying a server file corresponding to the server access request when the server access request passes the verification;
the slicing module is used for storing and slicing the server side file, generating a plurality of sub-sliced files, and storing the sub-sliced files into a cache node space created in the server side in advance, wherein if the sub-sliced files fail to be stored into the cache node space, storing of the sub-sliced files that previously failed to be stored into the cache node space is resumed when normal operation is restored, and the cache node space is an edge node space of the server side;
the combining module is used for transmitting the sub-fragment files of the cache node space to the client and then combining the sub-fragment files to obtain a client file;
the identification module is used for identifying whether the client file is consistent with the server file or not;
the identification module is further configured to re-query a server file corresponding to the server access request when the client file is inconsistent with the server file;
the identification module is further configured to store the client file into a cache of the client when the client file is consistent with the server file.
9. An electronic device, the electronic device comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the file caching method of any one of claims 1 to 7.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the file caching method according to any one of claims 1 to 7.
CN202110607882.8A 2021-06-01 2021-06-01 File caching method and device, electronic equipment and storage medium Active CN113364848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110607882.8A CN113364848B (en) 2021-06-01 2021-06-01 File caching method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110607882.8A CN113364848B (en) 2021-06-01 2021-06-01 File caching method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113364848A CN113364848A (en) 2021-09-07
CN113364848B true CN113364848B (en) 2024-03-19

Family

ID=77530702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110607882.8A Active CN113364848B (en) 2021-06-01 2021-06-01 File caching method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113364848B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114615258A (en) * 2022-03-28 2022-06-10 重庆长安汽车股份有限公司 Method and device for uploading large files to file server in fragmented manner

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017096830A1 (en) * 2015-12-08 2017-06-15 乐视控股(北京)有限公司 Content delivery method and scheduling proxy server for cdn platform
WO2019157929A1 (en) * 2018-02-13 2019-08-22 阿里巴巴集团控股有限公司 File processing method, device, and equipment
CN110290186A (en) * 2016-12-20 2019-09-27 北京并行科技股份有限公司 A kind of system and method suitable for the transmission of more Supercomputer Center's files
CN112257088A (en) * 2020-10-26 2021-01-22 上海睿成软件有限公司 File cache encryption system, equipment and storage medium
CN112492033A (en) * 2020-11-30 2021-03-12 深圳市移卡科技有限公司 File transmission method, system and computer readable storage medium
CN112528307A (en) * 2020-12-18 2021-03-19 平安银行股份有限公司 Service request checking method and device, electronic equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017096830A1 (en) * 2015-12-08 2017-06-15 乐视控股(北京)有限公司 Content delivery method and scheduling proxy server for cdn platform
CN110290186A (en) * 2016-12-20 2019-09-27 北京并行科技股份有限公司 A kind of system and method suitable for the transmission of more Supercomputer Center's files
WO2019157929A1 (en) * 2018-02-13 2019-08-22 阿里巴巴集团控股有限公司 File processing method, device, and equipment
CN112257088A (en) * 2020-10-26 2021-01-22 上海睿成软件有限公司 File cache encryption system, equipment and storage medium
CN112492033A (en) * 2020-11-30 2021-03-12 深圳市移卡科技有限公司 File transmission method, system and computer readable storage medium
CN112528307A (en) * 2020-12-18 2021-03-19 平安银行股份有限公司 Service request checking method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113364848A (en) 2021-09-07

Similar Documents

Publication Publication Date Title
US9569400B2 (en) RDMA-optimized high-performance distributed cache
US10979440B1 (en) Preventing serverless application package tampering
JP5975501B2 (en) Mechanisms that promote storage data encryption-free integrity protection in computing systems
CN110825479A (en) Page processing method and device, terminal equipment, server and storage medium
EP3557437B1 (en) Systems and methods for search template generation
CN112653760B (en) Cross-server file transmission method and device, electronic equipment and storage medium
CN112596932A (en) Service registration and interception method and device, electronic equipment and readable storage medium
CN112860737B (en) Data query method and device, electronic equipment and readable storage medium
CN111209557A (en) Cross-domain single sign-on method and device, electronic equipment and storage medium
CN113204345A (en) Page generation method and device, electronic equipment and storage medium
CN113364848B (en) File caching method and device, electronic equipment and storage medium
WO2022088710A1 (en) Mirror image management method and apparatus
CN113221154A (en) Service password obtaining method and device, electronic equipment and storage medium
CN116842012A (en) Method, device, equipment and storage medium for storing Redis cluster in fragments
CN111245727A (en) Message routing method, electronic device, proxy node and medium based on DHT network
CN116069725A (en) File migration method, device, apparatus, medium and program product
CN110674426A (en) Webpage behavior reporting method and device
CN113590703B (en) ES data importing method and device, electronic equipment and readable storage medium
CN110705935B (en) Logistics document processing method and device
CN112416875A (en) Log management method and device, computer equipment and storage medium
CN112487400A (en) Single sign-on method and device based on multiple pages, electronic equipment and storage medium
CN116418580B (en) Data integrity protection detection method and device for local area network and electronic equipment
CN113438221B (en) Local end file loading method and device, electronic equipment and medium
CN112000945B (en) Authorization method, device, equipment and medium based on artificial intelligence
CN113542387B (en) System release method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant