CN113364848A - File caching method and device, electronic equipment and storage medium - Google Patents

File caching method and device, electronic equipment and storage medium

Info

Publication number
CN113364848A
CN113364848A
Authority
CN
China
Prior art keywords
file
server
client
access request
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110607882.8A
Other languages
Chinese (zh)
Other versions
CN113364848B (en)
Inventor
陈欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Bank Co Ltd
Original Assignee
Ping An Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Bank Co Ltd filed Critical Ping An Bank Co Ltd
Priority to CN202110607882.8A priority Critical patent/CN113364848B/en
Publication of CN113364848A publication Critical patent/CN113364848A/en
Application granted granted Critical
Publication of CN113364848B publication Critical patent/CN113364848B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/56 - Provisioning of proxy services
    • H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 - Configuration management of networks or network elements
    • H04L 41/0893 - Assignment of logical groups to network elements
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 - Network architectures or network communication protocols for network security
    • H04L 63/08 - Network architectures or network communication protocols for network security for authentication of entities
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/06 - Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]

Abstract

The invention relates to the field of data processing and discloses a file caching method, which includes the following steps: receiving a server access request transmitted by a client and verifying the server access request; when the server access request passes the verification, querying a server file corresponding to the server access request; performing storage fragmentation on the server file to generate a plurality of sub-fragment files, and storing the sub-fragment files into a cache node space of the server; transmitting the sub-fragment files in the cache node space to the client and then combining them to obtain a client file; identifying whether the client file is consistent with the server file; if not, re-querying the server file corresponding to the server access request; and if the client file is consistent with the server file, storing the client file into a cache of the client. In addition, the invention also relates to blockchain technology, and the server file may be stored in a blockchain. The invention can improve the caching efficiency of large files.

Description

File caching method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing, and in particular, to a file caching method and apparatus, an electronic device, and a computer-readable storage medium.
Background
A file cache is generally used to store web page content that a user has accessed in a local client cache or a browser cache, so that the next time the user accesses the page it can be queried directly from the local cache, improving the page response speed. At present, when a large file ranging from tens of megabytes to several gigabytes is cached, the following problems easily arise: 1. there is a storage-space requirement, so caching schemes such as redis cannot be used; 2. there is a traffic requirement, because transmission of the large file is easily interrupted during caching, and an interrupted transfer must be restarted from the beginning, which consumes extra time and greatly wastes traffic.
Therefore, a file caching scheme is needed to solve the above-mentioned problems of large file caching.
Disclosure of Invention
The invention provides a file caching method, a file caching device, electronic equipment and a computer readable storage medium, and mainly aims to improve the caching efficiency of large files.
In order to achieve the above object, the present invention provides a file caching method, including:
receiving a server access request transmitted by a client, and verifying the server access request;
when the server side access request passes the verification, inquiring a server side file corresponding to the server side access request;
performing storage fragmentation on the server file to generate a plurality of sub-fragment files, and storing the sub-fragment files into a cache node space created in the server in advance;
transmitting the sub-fragment files of the cache node space to the client and then combining the sub-fragment files to obtain a client file;
identifying whether the client file is consistent with the server file;
if the client file is inconsistent with the server file, re-querying the server file corresponding to the server access request;
and if the client file is consistent with the server file, storing the client file into a cache of the client.
Optionally, the verifying the server access request includes:
acquiring a user identifier of the server access request;
inquiring whether the server corresponding to the client has the user identification;
if not, the user identification is registered in the server corresponding to the client, and then the server access request is received again;
and if so, executing the access of the server access request.
Optionally, the querying a server file corresponding to the server access request includes:
acquiring a browsing record of the server access request in a server;
compiling the browsing record into a log file by using a log generation tool;
and screening out a user requirement file from the log file to obtain a server file.
Optionally, the performing storage fragmentation on the server file to generate a plurality of sub-fragment files includes:
length fragmentation is carried out on the server side file based on a preset fragmentation length, and a plurality of length fragmentation files are obtained;
and carrying out fragment number identification on each fragment file with the length to obtain a plurality of sub-fragment files.
Optionally, the transmitting the sub-fragment files in the cache node space to the client includes:
receiving a file caching requirement of a client, and identifying a file fragment number of the file caching requirement;
according to the file fragment number, inquiring a corresponding sub-fragment file from the cache node space;
and transmitting the queried sub-fragment file to the client by utilizing a pre-established file transmission channel.
Optionally, the identifying whether the client file is consistent with the server file includes:
respectively calculating md5 values of the client file and the server file to obtain an md5 value of the client file and an md5 value of the server file;
if the md5 value of the client file is inconsistent with the md5 value of the server file, judging that the client file is inconsistent with the server file;
and if the md5 value of the client file is consistent with the md5 value of the server file, judging that the client file is consistent with the server file.
Optionally, the calculating the md5 value of the client file includes:
the md5 value for the client file is calculated using the following method:
fakeMd5_expect = Σ md5_i
where fakeMd5_expect represents the md5 value of the client file, md5_i represents the file signature of the i-th sub-fragment file of the client file, and i represents the fragment number of the sub-fragment file.
In order to solve the above problem, the present invention further provides a file caching apparatus, including:
the verification module is used for receiving a server access request transmitted by a client and verifying the server access request;
the query module is used for querying a server file corresponding to the server access request when the server access request passes the verification;
the fragmentation module is used for carrying out storage fragmentation on the server side file to generate a plurality of sub-fragmentation files and storing the sub-fragmentation files into a cache node space which is created in the server side in advance;
the combination module is used for transmitting the sub-fragment files of the cache node space to the client and then combining the sub-fragment files to obtain a client file;
the identification module is used for identifying whether the client file is consistent with the server file;
the identification module is further configured to, when the client file is inconsistent with the server file, re-query the server file corresponding to the server access request;
the identification module is further configured to store the client file in a cache of the client when the client file is consistent with the server file.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executed by the at least one processor to implement the file caching method described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the file caching method described above.
The embodiment of the invention first verifies the server access request transmitted by the client, and queries the server file corresponding to the server access request when the request passes the verification, so that whether the server access request is legal can be identified, thereby ensuring the normal operation and the security of subsequent service access. Secondly, the embodiment of the invention performs storage fragmentation on the server file to generate a plurality of sub-fragment files and stores the sub-fragment files in a cache node space created in the server in advance, thereby realizing fragmented storage of the server file, satisfying the storage-space requirements of large files, and, by means of the cache node space, allowing transmission to resume from the previously failed fragment when transmission of the server file fails. Furthermore, in the embodiment of the present invention, the sub-fragment files in the cache node space are transmitted to the client and then combined to obtain the client file, and the client file is stored in the client according to whether it is consistent with the server file, so that the client file is guaranteed to be in its latest state. Therefore, the file caching method, the file caching apparatus, the electronic device and the storage medium can improve the caching efficiency of large files.
Drawings
Fig. 1 is a schematic flowchart of a file caching method according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart illustrating a step of the file caching method shown in FIG. 1 according to a first embodiment of the present invention;
FIG. 3 is a detailed flowchart illustrating another step of the file caching method provided in FIG. 1 according to a first embodiment of the present invention;
fig. 4 is a schematic block diagram of a file caching apparatus according to an embodiment of the present invention;
fig. 5 is a schematic view of an internal structure of an electronic device implementing a file caching method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides a file caching method. The execution subject of the file caching method includes, but is not limited to, at least one of electronic devices such as a server and a terminal that can be configured to execute the method provided by the embodiments of the present application. In other words, the file caching method may be performed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Fig. 1 is a schematic flow chart of a file caching method according to an embodiment of the present invention. In the embodiment of the present invention, the file caching method includes:
and S1, receiving a server access request transmitted by the client, and verifying the server access request.
In a preferred embodiment of the present invention, the client, which may also be referred to as a mobile terminal, is configured to perform web page access and includes cell phones, tablets, PCs, and the like. The server access request refers to a request to access a certain service in the server, such as querying the logistics status of a mall order, finding the IP address of a certain server, or viewing a short video on a web page.
Further, the verifying the server access request includes: acquiring a user identifier of the server access request; inquiring whether the user identification exists at a server corresponding to the client; if not, the user identification is registered in the server corresponding to the client, and then the server access request is received again; and if so, executing the access of the server access request.
The user identifier refers to a unique identifier representing the user's identity information, and includes, but is not limited to, the user's gesture, fingerprint, password, and the like. Whether the server access request is legal can be identified according to the user identifier, so as to ensure the normal operation and the security of subsequent service access.
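As an illustration of this verification flow, the following Python sketch (not part of the patent; the names registered_users and register_user are assumptions for the example) checks whether the user identifier carried by the access request is already registered at the server, registers it if not, and only then allows the request to proceed:

def verify_access_request(request, registered_users):
    # The user identifier may be a gesture, fingerprint or password token.
    user_id = request.get("user_id")
    if user_id in registered_users:
        return True                      # request is legal, continue the access
    register_user(user_id, registered_users)
    return False                         # client must resend the access request

def register_user(user_id, registered_users):
    # Assumed helper: register the identifier at the server corresponding to the client.
    registered_users.add(user_id)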
And S2, when the server side access request passes the verification, inquiring the server side file corresponding to the server side access request.
After the server access request passes the verification, the embodiment of the invention queries the server file corresponding to the server access request. The server file includes information generated after the user browses in the browser, such as videos, texts and pictures browsed by the user. It should be noted that, in the embodiment of the present invention, the server file is a large file, that is, its file size is greater than 10 MB.
In detail, the querying a server file corresponding to the server access request includes: acquiring a browsing record of the server access request in a server; compiling the browsing record into a log file by using a log generation tool; and screening out a user requirement file from the log file to obtain a server file.
The browsing records are obtained through a web crawler technique, which may be a node.js technique; the log generation tool may be written as a JavaScript script and is used to compile the browsing records into a log-format file, so that the browsing trace of the server access request on the server can be understood more intuitively.
Further, in the embodiment of the present invention, the user requirement file is screened out by a get() method. For example, the browsing records of the server access request in the server are a video record, a picture record and a text record, and these are compiled into a log file. When the user requirement is to acquire picture A and picture B, the embodiment of the present invention uses an import method to query the browsing records and finds that picture A and picture B exist in the picture record, and then screens picture A and picture B out of the picture record with the get() method to obtain the server file.
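A minimal sketch of the screening step, assuming the log file holds one "<record type> <file name>" entry per line (the format and function names are illustrative, not the patent's actual tooling):

def screen_required_files(log_path, required_names):
    # Return the log entries whose file name appears in the user requirement,
    # e.g. required_names = {"pictureA", "pictureB"}.
    matches = []
    with open(log_path, encoding="utf-8") as log_file:
        for line in log_file:
            record_type, _, name = line.strip().partition(" ")
            if name in required_names:
                matches.append(name)
    return matches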
Further, in another embodiment of the present invention, after the server file corresponding to the server access request is queried, the method further includes: querying whether identical files exist among the server files; if so, deleting either one of the identical server files, and if not, performing no processing, so as to avoid redundancy of server files and free the system resources of the server.
Furthermore, in order to ensure the security and privacy of the server file, the server file may also be stored in a blockchain node.
S3, performing storage fragmentation on the server side file to generate a plurality of sub-fragment files, and storing the sub-fragment files into a cache node space which is created in the server side in advance.
Because the server file occupies a large amount of storage space, directly caching it easily causes insufficient cache space, and subsequent server files cannot be queried normally. Therefore, the embodiment of the present invention performs storage fragmentation on the server file to generate a plurality of sub-fragment files, so that the server file is divided into small files for storage and the files in the cache space can be read quickly.
In detail, referring to fig. 2, the performing storage fragmentation on the server file to generate a plurality of sub-fragment files includes:
s20, length slicing is carried out on the server side file based on the preset slicing length, and a plurality of length sliced files are obtained;
and S21, carrying out fragment number identification on each fragment file with the length to obtain a plurality of sub-fragment files.
The preset fragment length is set according to the size of the corresponding server file. For example, if the size of server file A is 5T, the preset fragment length may be 1T, so that the embodiment of the invention sequentially divides server file A into 5 length fragment files: length fragment file A\0, length fragment file A\1, length fragment file A\2, length fragment file A\3 and length fragment file A\4.
Further, the fragment number is used to represent the unique information corresponding to a length fragment file. Preferably, the embodiment of the present invention implements the fragment number identification of the length fragment files by id; for example, the fragment number of length fragment file A\0 may be set to id:0, the fragment number of length fragment file A\1 may be set to id:1, and the fragment number of length fragment file A\2 may be set to id:2.
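The following Python sketch illustrates steps S20 and S21 under the assumption that the preset fragment length is expressed in bytes (the 1 MiB value is only an example; the patent sets the length according to the server file size):

FRAGMENT_LENGTH = 1024 * 1024  # example preset fragment length

def fragment_file(path, fragment_length=FRAGMENT_LENGTH):
    # Yield (fragment number, data) pairs: id 0, 1, 2, ... in file order.
    with open(path, "rb") as server_file:
        fragment_id = 0
        while True:
            data = server_file.read(fragment_length)
            if not data:
                break
            yield fragment_id, data
            fragment_id += 1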
Further, the sub-fragment files are stored in a cache node space created in the server in advance, where the cache node space refers to an edge node space of the server and is used to improve the transmission speed of the server file between the server and the client. Preferably, in the embodiment of the present invention, cache node spaces are created according to distributed cache content, where the distributed cache content may be an operator, a region, and the like; for example, one cache node space is created for the South China region, one for the East China region, one for the North China region, and so on, so as to improve the access speed of the server file.
It should be noted that, if storing a sub-fragment file into the cache node space fails, the embodiment of the present invention supports resuming storage from the sub-fragment file that failed previously. For example, if ten sub-fragment files need to be stored into the cache node space and the sixth sub-fragment file fails to be stored because the server goes down, then when the server resumes normal operation, the embodiment of the present invention supports continuing the storage from the sixth sub-fragment file, so as to improve the file storage efficiency. Optionally, the present invention supports resuming storage from the previously failed sub-fragment file through a monitoring tool, and the monitoring tool is written in the Java language.
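A hedged sketch of this resumable storage behaviour, with the cache node space modelled as a simple dictionary (an assumption for illustration; the patent does not specify the storage interface or the monitoring tool's implementation):

def store_fragments(fragments, cache_node_space):
    # Skip fragments that were stored before the interruption, so a run that
    # failed at the sixth fragment resumes from the sixth fragment.
    for fragment_id, data in fragments:
        if fragment_id in cache_node_space:
            continue
        cache_node_space[fragment_id] = data  # may fail if the server goes down

Calling store_fragments again with the same cache_node_space after the server recovers continues from the first fragment that had not been stored successfully.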
And S4, transmitting the sub-fragment files of the cache node space to the client, and then combining to obtain a client file.
In a preferred embodiment of the present invention, referring to fig. 3, the transmitting the sub-fragment files in the cache node space to the client includes:
s30, receiving a file cache requirement of a client, and identifying a file fragment number of the file cache requirement;
s31, according to the file fragment number, inquiring the corresponding sub-fragment file from the cache node space;
and S32, transmitting the sub-fragment files to be inquired to the client by using a pre-established file transmission channel.
Specifically, the file caching requirement is input based on a user requirement, for example, acquiring picture A and picture B, and the file fragment number is obtained by querying the file id of the file caching requirement.
The querying the corresponding sub-fragment file from the cache node space according to the file fragment number includes: utilizing an inquiry statement to inquire the sub-fragment files with the same fragment numbers as the file fragments from the cache node space, wherein the inquiry statement comprises: a select statement.
In an alternative embodiment, the pre-created file transfer channel may be configured using currently known message middleware, such as: MQ message middleware.
Furthermore, the sub-fragment files transmitted to the client are combined to obtain the client file, so that the integrity of the corresponding server file is guaranteed.
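A minimal sketch of steps S30 to S32 and the subsequent combination, again with the cache node space modelled as a dictionary and the transmission channel left out (illustrative assumptions, not the patent's MQ-based channel):

def fetch_fragments(fragment_ids, cache_node_space):
    # Server side: look up the sub-fragment files by their fragment numbers.
    return {fid: cache_node_space[fid] for fid in fragment_ids}

def combine_fragments(received, output_path):
    # Client side: write the fragments back-to-back in fragment-number order
    # to rebuild the client file.
    with open(output_path, "wb") as client_file:
        for fragment_id in sorted(received):
            client_file.write(received[fragment_id])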
And S5, identifying whether the client file is consistent with the server file.
In the embodiment of the invention, whether the client file is consistent with the server file is identified through the md5 message digest algorithm. The md5 message digest algorithm is a widely used cryptographic hash function that generates a 128-bit (16-byte) hash value and is used to ensure complete and consistent file transmission. In the embodiment of the invention, whether the client file is in its latest file state is identified by comparing the md5 values of the client file and the server file, so as to judge whether the client file can be stored directly in the cache of the client, allowing the user to read the client file directly and improving the file processing efficiency.
In detail, the identifying whether the client file and the server file are consistent includes:
and calculating md5 values of the client file and the server file to obtain a client file md5 value and a server file md5 value, if the client file md5 value is inconsistent with the server file md5 value, judging that the client file is inconsistent with the server file, and if the client file md5 value is consistent with the server file md5 value, judging that the client file is consistent with the server file.
In an alternative embodiment, the md5 value for the client file is calculated using the following method:
fakeMd5_expect = Σ md5_i
where fakeMd5_expect represents the md5 value of the client file, md5_i represents the file signature of the i-th sub-fragment file of the client file, and i represents the fragment number of the sub-fragment file.
Further, the md5 value of the server file is calculated in the same way as the md5 value of the client file, and will not be further described.
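A sketch of the consistency check defined by the formula above: the md5 digest of every sub-fragment file is computed and the digests are summed (here as integers) to form fakeMd5_expect, and the client and server sums are compared. The dictionary-of-fragments interface is an assumption for the example:

import hashlib

def fake_md5(fragments):
    # fakeMd5_expect = sum of the md5 values of all sub-fragment files.
    return sum(int(hashlib.md5(data).hexdigest(), 16) for data in fragments.values())

def files_consistent(client_fragments, server_fragments):
    return fake_md5(client_fragments) == fake_md5(server_fragments)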
And if the client file is not consistent with the server file, re-executing the step S2 and inquiring the server file corresponding to the server access request.
In the embodiment of the invention, when the client file is inconsistent with the server file, the client file can be identified as not being in its latest file state, so the latest state of the client file is ensured by re-querying the server file corresponding to the server access request.
And if the client file is consistent with the server file, executing S6 and storing the client file into the cache of the client.
In the embodiment of the invention, when the client file is consistent with the server file, the client file can be identified as being in its latest file state, so the client file is stored in the cache of the client; this allows the user to query the client cache directly the next time the web page is accessed and improves the web page response speed. The cache of the client may be a space allocated on a disk of the client, and is used to store the server file and improve its reading speed.
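A minimal sketch of the client-side cache, assuming a directory on the client's disk serves as the cache space (the directory name and helper names are examples only):

import os, shutil

CACHE_DIR = "client_cache"  # example space opened on the client's disk

def store_in_client_cache(client_file_path):
    # Store the combined client file in the cache for later direct reads.
    os.makedirs(CACHE_DIR, exist_ok=True)
    cached = os.path.join(CACHE_DIR, os.path.basename(client_file_path))
    shutil.copyfile(client_file_path, cached)
    return cached

def read_from_cache(file_name):
    # On the next access, try the cache first; None means query the server again.
    cached = os.path.join(CACHE_DIR, file_name)
    return cached if os.path.exists(cached) else None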
The embodiment of the invention first verifies the server access request transmitted by the client, and queries the server file corresponding to the server access request when the request passes the verification, so that whether the server access request is legal can be identified, thereby ensuring the normal operation and the security of subsequent service access. Secondly, the embodiment of the invention performs storage fragmentation on the server file to generate a plurality of sub-fragment files and stores the sub-fragment files in a cache node space created in the server in advance, thereby realizing fragmented storage of the server file, satisfying the storage-space requirements of large files, and, by means of the cache node space, allowing transmission to resume from the previously failed fragment when transmission of the server file fails. Furthermore, in the embodiment of the present invention, the sub-fragment files in the cache node space are transmitted to the client and then combined to obtain the client file, and the client file is stored in the client according to whether it is consistent with the server file, so that the client file is guaranteed to be in its latest state. Therefore, the invention can improve the caching efficiency of large files.
Fig. 4 is a functional block diagram of the file caching apparatus according to the present invention.
The file caching apparatus 100 of the present invention may be installed in an electronic device. According to the implemented functions, the file caching device may include a verification module 101, a query module 102, a fragmentation module 103, a combination module 104, and an identification module 105. A module according to the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the verification module 101 is configured to receive a server access request transmitted by a client, and verify the server access request;
the query module 102 is configured to query a server file corresponding to the server access request when the server access request passes verification;
the fragmentation module 103 is configured to perform storage fragmentation on the server file to generate a plurality of sub-fragmentation files, and store the sub-fragmentation files in a cache node space created in the server in advance;
the combining module 104 is configured to transmit the sub-fragment files of the cache node space to the client and then combine the sub-fragment files to obtain a client file;
the identification module 105 is configured to identify whether the client file is consistent with the server file;
the identification module 105 is further configured to, when the client file is inconsistent with the server file, re-query the server file corresponding to the server access request;
the identification module 105 is further configured to store the client file in the cache of the client when the client file is consistent with the server file.
In detail, in the embodiment of the present invention, when the modules in the file caching apparatus 100 are used, the same technical means as the file caching method described in fig. 1 to 3 are adopted, and the same technical effects can be produced, and no further description is given here.
Fig. 5 is a schematic structural diagram of an electronic device implementing the file caching method according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a file caching program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of a file cache, etc., but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., executing a file cache, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 5 only shows an electronic device with components, and it will be understood by a person skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The file caching program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, may implement:
receiving a server access request transmitted by a client, and verifying the server access request;
when the server side access request passes the verification, inquiring a server side file corresponding to the server side access request;
performing storage fragmentation on the server file to generate a plurality of sub-fragment files, and storing the sub-fragment files into a cache node space created in the server in advance;
transmitting the sub-fragment files of the cache node space to the client and then combining the sub-fragment files to obtain a client file;
identifying whether the client file is consistent with the server file;
if the client file is inconsistent with the server file, re-querying the server file corresponding to the server access request;
and if the client file is consistent with the server file, storing the client file into a cache of the client.
Specifically, the specific implementation method of the processor 10 for the instruction may refer to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not described herein again.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a non-volatile computer-readable storage medium. The computer readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
receiving a server access request transmitted by a client, and verifying the server access request;
when the server side access request passes the verification, inquiring a server side file corresponding to the server side access request;
performing storage fragmentation on the server file to generate a plurality of sub-fragment files, and storing the sub-fragment files into a cache node space created in the server in advance;
transmitting the sub-fragment files of the cache node space to the client and then combining the sub-fragment files to obtain a client file;
identifying whether the client file is consistent with the server file;
if the client file is inconsistent with the server file, re-querying the server file corresponding to the server access request;
and if the client file is consistent with the server file, storing the client file into a cache of the client.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A file caching method, characterized in that the method comprises:
receiving a server access request transmitted by a client, and verifying the server access request;
when the server side access request passes the verification, inquiring a server side file corresponding to the server side access request;
performing storage fragmentation on the server file to generate a plurality of sub-fragment files, and storing the sub-fragment files into a cache node space created in the server in advance;
transmitting the sub-fragment files of the cache node space to the client and then combining the sub-fragment files to obtain a client file;
identifying whether the client file is consistent with the server file;
if the client file is inconsistent with the server file, re-querying the server file corresponding to the server access request;
and if the client file is consistent with the server file, storing the client file into a cache of the client.
2. The file caching method of claim 1, wherein the validating the server access request comprises:
acquiring a user identifier of the server access request;
inquiring whether the server corresponding to the client has the user identification;
if not, the user identification is registered in the server corresponding to the client, and then the server access request is received again;
and if so, executing the access of the server access request.
3. The file caching method according to claim 1, wherein the querying the server file corresponding to the server access request comprises:
acquiring a browsing record of the server access request in a server;
compiling the browsing record into a log file by using a log generation tool;
and screening out a user requirement file from the log file to obtain a server file.
4. The file caching method according to claim 1, wherein the performing storage fragmentation on the server-side file to generate a plurality of sub-fragment files comprises:
length fragmentation is carried out on the server side file based on a preset fragmentation length, and a plurality of length fragmentation files are obtained;
and carrying out fragment number identification on each fragment file with the length to obtain a plurality of sub-fragment files.
5. The file caching method of claim 4, wherein said transmitting the sub-fragmented files in the caching node space to the client comprises:
receiving a file caching requirement of a client, and identifying a file fragment number of the file caching requirement;
according to the file fragment number, inquiring a corresponding sub-fragment file from the cache node space;
and transmitting the queried sub-fragment file to the client by utilizing a pre-established file transmission channel.
6. The file caching method according to any one of claims 1 to 5, wherein the identifying whether the client file and the server file are consistent comprises:
respectively calculating md5 values of the client file and the server file to obtain an md5 value of the client file and an md5 value of the server file;
if the md5 value of the client file is inconsistent with the md5 value of the server file, judging that the client file is inconsistent with the server file;
and if the md5 value of the client file is consistent with the md5 value of the server file, judging that the client file is consistent with the server file.
7. The file caching method of claim 6, wherein said calculating the md5 value for the client file comprises:
the md5 value for the client file is calculated using the following method:
fakeMd5_expect = Σ md5_i
where fakeMd5_expect represents the md5 value of the client file, md5_i represents the file signature of the i-th sub-fragment file of the client file, and i represents the fragment number of the sub-fragment file.
8. A file caching apparatus, the apparatus comprising:
the verification module is used for receiving a server access request transmitted by a client and verifying the server access request;
the query module is used for querying a server file corresponding to the server access request when the server access request passes the verification;
the fragmentation module is used for carrying out storage fragmentation on the server side file to generate a plurality of sub-fragmentation files and storing the sub-fragmentation files into a cache node space which is created in the server side in advance;
the combination module is used for transmitting the sub-fragment files of the cache node space to the client and then combining the sub-fragment files to obtain a client file;
the identification module is used for identifying whether the client file is consistent with the server file;
the identification module is further configured to, when the client file is inconsistent with the server file, re-query the server file corresponding to the server access request;
the identification module is further configured to store the client file in a cache of the client when the client file is consistent with the server file.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the file caching method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements a file caching method according to any one of claims 1 to 7.
CN202110607882.8A 2021-06-01 2021-06-01 File caching method and device, electronic equipment and storage medium Active CN113364848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110607882.8A CN113364848B (en) 2021-06-01 2021-06-01 File caching method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110607882.8A CN113364848B (en) 2021-06-01 2021-06-01 File caching method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113364848A (en) 2021-09-07
CN113364848B CN113364848B (en) 2024-03-19

Family

ID=77530702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110607882.8A Active CN113364848B (en) 2021-06-01 2021-06-01 File caching method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113364848B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114615258A (en) * 2022-03-28 2022-06-10 重庆长安汽车股份有限公司 Method and device for uploading large files to file server in fragmented manner

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017096830A1 (en) * 2015-12-08 2017-06-15 乐视控股(北京)有限公司 Content delivery method and scheduling proxy server for cdn platform
WO2019157929A1 (en) * 2018-02-13 2019-08-22 阿里巴巴集团控股有限公司 File processing method, device, and equipment
CN110290186A (en) * 2016-12-20 2019-09-27 北京并行科技股份有限公司 A kind of system and method suitable for the transmission of more Supercomputer Center's files
CN112257088A (en) * 2020-10-26 2021-01-22 上海睿成软件有限公司 File cache encryption system, equipment and storage medium
CN112492033A (en) * 2020-11-30 2021-03-12 深圳市移卡科技有限公司 File transmission method, system and computer readable storage medium
CN112528307A (en) * 2020-12-18 2021-03-19 平安银行股份有限公司 Service request checking method and device, electronic equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017096830A1 (en) * 2015-12-08 2017-06-15 乐视控股(北京)有限公司 Content delivery method and scheduling proxy server for cdn platform
CN110290186A (en) * 2016-12-20 2019-09-27 北京并行科技股份有限公司 A kind of system and method suitable for the transmission of more Supercomputer Center's files
WO2019157929A1 (en) * 2018-02-13 2019-08-22 阿里巴巴集团控股有限公司 File processing method, device, and equipment
CN112257088A (en) * 2020-10-26 2021-01-22 上海睿成软件有限公司 File cache encryption system, equipment and storage medium
CN112492033A (en) * 2020-11-30 2021-03-12 深圳市移卡科技有限公司 File transmission method, system and computer readable storage medium
CN112528307A (en) * 2020-12-18 2021-03-19 平安银行股份有限公司 Service request checking method and device, electronic equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114615258A (en) * 2022-03-28 2022-06-10 重庆长安汽车股份有限公司 Method and device for uploading large files to file server in fragmented manner

Also Published As

Publication number Publication date
CN113364848B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN112653760B (en) Cross-server file transmission method and device, electronic equipment and storage medium
CN112528307A (en) Service request checking method and device, electronic equipment and storage medium
CN112115145A (en) Data acquisition method and device, electronic equipment and storage medium
CN111209557A (en) Cross-domain single sign-on method and device, electronic equipment and storage medium
CN113364848B (en) File caching method and device, electronic equipment and storage medium
CN113438304A (en) Data query method, device, server and medium based on database cluster
CN115086047B (en) Interface authentication method and device, electronic equipment and storage medium
CN111783119A (en) Form data security control method and device, electronic equipment and storage medium
CN116842012A (en) Method, device, equipment and storage medium for storing Redis cluster in fragments
CN114201466B (en) Anti-cache breakdown method, device, equipment and readable storage medium
CN112416875B (en) Log management method, device, computer equipment and storage medium
CN114826725A (en) Data interaction method, device, equipment and storage medium
CN112540839B (en) Information changing method, device, electronic equipment and storage medium
CN111651509B (en) Hbase database-based data importing method and device, electronic equipment and medium
CN113918517A (en) Multi-type file centralized management method, device, equipment and storage medium
CN112487400A (en) Single sign-on method and device based on multiple pages, electronic equipment and storage medium
CN112328656A (en) Service query method, device, equipment and storage medium based on middle platform architecture
CN112988888A (en) Key management method, key management device, electronic equipment and storage medium
CN111934882A (en) Identity authentication method and device based on block chain, electronic equipment and storage medium
CN113438221B (en) Local end file loading method and device, electronic equipment and medium
CN113542387B (en) System release method and device, electronic equipment and storage medium
CN113452785B (en) Service access method and device based on offline resources, electronic equipment and medium
CN114185502B (en) Log printing method, device, equipment and medium based on production line environment
CN112000945B (en) Authorization method, device, equipment and medium based on artificial intelligence
CN114006877A (en) Message transmission method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant