CN110597772A - Multi-instance file processing method and terminal - Google Patents
- Publication number
- CN110597772A (application CN201910774235.9A)
- Authority
- CN
- China
- Prior art keywords
- file
- instance
- transcoded
- cache
- returning
- Prior art date
- Legal status: Pending (the status is an assumption and is not a legal conclusion)
Classifications
- G06F16/116 — Details of conversion of file system types or formats
- G06F16/13 — File access structures, e.g. distributed indices
- G06F16/148 — File search processing
- G06F16/172 — Caching, prefetching or hoarding of files

(All under G—Physics; G06—Computing; G06F—Electric digital data processing; G06F16/00—Information retrieval, file system structures; G06F16/10—File systems, file servers.)
Abstract
The invention discloses a multi-instance file processing method and a terminal. A first file download request is obtained, comprising a transcoded file ID and an instance ID. If the instance ID matches the local instance ID, the transcoded first file is downloaded from the local instance cache according to the transcoded file ID and the instance ID and returned; otherwise, the transcoded first file is read from the instance cache on another host according to the transcoded file ID and the instance ID and returned. Because files are stored directly in the instance caches, no CS file storage server is needed, which reduces the hardware cost of the cache service. Meanwhile, file scheduling takes place between two instances, and inter-instance access is far faster than going through a separate CS file server, so file reading time is reduced. For the user, since the client always accesses a single host, data transmission performance between client and host is also improved.
Description
Technical Field
The invention relates to the technical field of file processing, and in particular to a multi-instance file processing method and terminal.
Background
With the development of the internet, service architectures have matured; however, when computing resources are scarce and performance requirements are extremely high, it is important to build faster and more efficient architectures with fewer resources.
In one existing system framework, a file is first uploaded to a CS file storage server, which returns a file ID; the file ID is then sent to an instance server. Instance server A processes the request: it pulls the file from the file server by file ID, transcodes it, and sends the result back to the CS file storage server, from which the user then downloads the file.
The above conventional framework has the following problems:
1. Introducing a CS file storage server increases hardware cost and overhead.
2. With a CS file storage server, the instance server must pull data before processing it, adding overhead.
3. After the instance server finishes, it must upload the transcoding result file back to the CS file storage server, adding further overhead.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a multi-instance file processing method and terminal that reduce file reading time and caching cost.
In order to solve the technical problems, the invention adopts the technical scheme that:
a multi-instance file processing method comprises the following steps:
S1, acquiring a first file download request, the first file download request comprising a transcoded file ID and an instance ID;
S2, judging whether the instance ID is the local instance ID; if so, downloading the transcoded first file from the local instance cache according to the transcoded file ID and the instance ID and returning it; otherwise, reading the transcoded first file from the instance cache on another host according to the transcoded file ID and the instance ID and returning it.
In order to solve the technical problem, the invention adopts another technical scheme as follows:
A multi-instance file processing terminal, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
S1, acquiring a first file download request, the first file download request comprising a transcoded file ID and an instance ID;
S2, judging whether the instance ID is the local instance ID; if so, downloading the transcoded first file from the local instance cache according to the transcoded file ID and the instance ID and returning it; otherwise, reading the transcoded first file from the instance cache on another host according to the transcoded file ID and the instance ID and returning it.
The invention has the following beneficial effects: when a first file download request is obtained, the requested host judges via the instance ID whether the request hits the local machine; if so, it reads the first file directly from the local instance cache, otherwise it obtains the transcoded first file through file scheduling, so the download task completes either way. Files are stored directly in the instance caches and no CS server is needed, which reduces the hardware cost of the cache service. Meanwhile, file scheduling takes place between two instances, and inter-instance access is far faster than introducing a new CS file storage server, so file reading time is reduced. For the user, since the client always accesses a single host, data transmission performance between client and host is also improved.
Drawings
FIG. 1 is a flowchart of a multi-instance file processing method according to an embodiment of the present invention;
FIG. 2 is a simplified flowchart of a multi-instance file processing method according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of file uploading according to an embodiment of the present invention;
FIG. 4 is a flowchart of a file query according to an embodiment of the present invention;
FIG. 5 is a flowchart of file downloading according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a multi-instance file processing terminal according to an embodiment of the present invention.
Description of reference numerals:
1. a multi-instance file processing terminal; 2. a processor; 3. a memory.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 1 to 5, a multi-instance file processing method includes the steps of:
S1, acquiring a first file download request, the first file download request comprising a transcoded file ID and an instance ID;
S2, judging whether the instance ID is the local instance ID; if so, downloading the transcoded first file from the local instance cache according to the transcoded file ID and the instance ID and returning it; otherwise, reading the transcoded first file from the instance cache on another host according to the transcoded file ID and the instance ID and returning it.
From the above description, the beneficial effects of the invention are: when a first file download request is obtained, the requested host judges via the instance ID whether the request hits the local machine; if so, it reads the first file directly from the local instance cache, otherwise it obtains the transcoded first file through file scheduling, so the download task completes either way. Files are stored directly in the instance caches and no CS server is needed, which reduces the hardware cost of the cache service. Meanwhile, file scheduling takes place between two instances, and inter-instance access is far faster than introducing a new CS file server, so file reading time is reduced. For the user, since the client always accesses a single host, data transmission performance between client and host is also improved.
Further, step S1 is preceded by:
acquiring a first file upload request, storing the un-transcoded first file to an instance cache, performing asynchronous transcoding on the un-transcoded first file, and saving the un-transcoded first file's ID and transcoding state to a cache server;
and if the transcoding state is success, updating the un-transcoded first file to the transcoded first file, and saving the transcoded first file ID and the instance ID of the instance cache that stores it to the cache server.
As can be seen from the above description, for a file upload request, asynchronous transcoding and asynchronous storage are adopted, and the file state information is saved to the cache server, so the user can immediately query the real-time state of the currently uploaded file through the cache server.
Further, step S1 is preceded by:
acquiring a first file query request, querying the transcoding state corresponding to the un-transcoded first file ID from the cache server according to that ID, and if the transcoding state is success, obtaining and returning the transcoded first file ID and the instance ID of the instance cache that stores it from the cache server.
From the above description, when a file is queried, its state can be looked up through the cache server, so the user can obtain file state information at any time.
Further, obtaining and returning the transcoded first file ID and the instance ID of the instance cache that stores it from the cache server further comprises:
returning the transcoded first file.
As can be seen from the above description, the query and download interfaces are combined: if the query finds that the task has been transcoded successfully, the file can be output directly to the client. This avoids the client having to call download again. Compared with the prior art, where the client sends a new download request to the CS file storage server after a success status is returned, or the server performs a 302 redirect so the client jumps to a new download server, this reduces server resource consumption.
Further, if the docker container in which the instance cache is located includes only a first instance and a second instance, step S2 specifically comprises:
judging whether the instance ID is the instance ID of the first instance on the local machine; if so, downloading the transcoded first file from the instance cache of the first instance according to the transcoded file ID and returning it; otherwise, reading the transcoded first file from the instance cache of the second instance and returning it.
From the above description: with dual-instance file storage, the load balancing server necessarily distributes requests across the two instances; when a request does not hit the local instance, the file can therefore be read directly from the other instance's cache, reducing file reading time.
Referring to fig. 6, a multi-instance file processing terminal includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
S1, acquiring a first file download request, the first file download request comprising a transcoded file ID and an instance ID;
S2, judging whether the instance ID is the local instance ID; if so, downloading the transcoded first file from the local instance cache according to the transcoded file ID and the instance ID and returning it; otherwise, reading the transcoded first file from the instance cache on another host according to the transcoded file ID and the instance ID and returning it.
From the above description, the beneficial effects of the invention are: when a first file download request is obtained, the requested host judges via the instance ID whether the request hits the local machine; if so, it reads the first file directly from the local instance cache, otherwise it obtains the transcoded first file through file scheduling, so the download task completes either way. Files are stored directly in the instance caches and no CS server is needed, which reduces the hardware cost of the cache service. Meanwhile, file scheduling takes place between two instances, and inter-instance access is far faster than introducing a new CS file server, so file reading time is reduced. For the user, since the client always accesses a single host, data transmission performance between client and host is also improved.
Further, before the step S1, the processor executes the computer program to further implement the following steps:
acquiring a first file upload request, storing the un-transcoded first file to an instance cache, performing asynchronous transcoding on the un-transcoded first file, and saving the un-transcoded first file's ID and transcoding state to a cache server;
and if the transcoding state is success, updating the un-transcoded first file to the transcoded first file, and saving the transcoded first file ID and the instance ID of the instance cache that stores it to the cache server.
As can be seen from the above description, for a file upload request, asynchronous transcoding and asynchronous storage are adopted, and the file state information is saved to the cache server, so the user can immediately query the real-time state of the currently uploaded file through the cache server.
Further, before the step S1, the processor executes the computer program to further implement the following steps:
acquiring a first file query request, querying the transcoding state corresponding to the un-transcoded first file ID from the cache server according to that ID, and if the transcoding state is success, obtaining and returning the transcoded first file ID and the instance ID of the instance cache that stores it from the cache server.
From the above description, when a file is queried, its state can be looked up through the cache server, so the user can obtain file state information at any time.
Further, obtaining and returning the transcoded first file ID and the instance ID of the instance cache that stores it from the cache server further comprises:
returning the transcoded first file.
As can be seen from the above description, the query and download interfaces are combined: if the query finds that the task has been transcoded successfully, the file can be output directly to the client. This avoids the client having to call download again. Compared with the prior art, where the client sends a new download request to the CS file storage server after a success status is returned, or the server performs a 302 redirect so the client jumps to a new download server, this reduces server resource consumption.
Further, if the docker container in which the instance cache is located includes only a first instance and a second instance, step S2 specifically comprises:
judging whether the instance ID is the instance ID of the first instance on the local machine; if so, downloading the transcoded first file from the instance cache of the first instance according to the transcoded file ID and returning it; otherwise, reading the transcoded first file from the instance cache of the second instance and returning it.
From the above description: with dual-instance file storage, the load balancing server necessarily distributes requests across the two instances; when a request does not hit the local instance, the file can therefore be read directly from the other instance's cache, reducing file reading time.
Referring to fig. 1 to 5, a first embodiment of the present invention is:
A multi-instance file processing method, as shown in FIG. 2, introduces an instance cache scheduling mechanism so that each instance provides three functions to the outside at the same time: upload, download, and query. After a client sends a request, the domain name in the request is resolved to an IP address by a domain name resolution service, and the request is then distributed by a load balancing service. The steps taken by the first host for file uploading, file querying, and file downloading are described in turn below:
as shown in fig. 3, the step of the first host uploading the file is as follows:
acquiring a first file upload request, storing the un-transcoded first file to an instance cache, performing asynchronous transcoding on the un-transcoded first file, and saving the un-transcoded first file's ID and transcoding state to a cache server, where in this embodiment the cache server is the Redis cache in fig. 3;
and if the transcoding state is success, updating the un-transcoded first file to the transcoded first file, and saving the transcoded first file ID and the instance ID of the instance cache that stores it to the cache server.
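The upload path above can be sketched roughly as follows. All names are hypothetical (not from the patent), a plain dict stands in for the Redis cache, the transcoder is a placeholder, and the transcoding runs inline rather than asynchronously so the sketch stays deterministic:

```python
import uuid

cache_server = {}    # stand-in for the Redis cache: file ID -> state record
instance_cache = {}  # stand-in for this instance's file cache
INSTANCE_ID = "instance-1"  # hypothetical ID of the local instance

def transcode(data: bytes) -> bytes:
    # Placeholder for the real transcoding job.
    return data.upper()

def upload(data: bytes) -> str:
    """Store the raw file, record a 'transcoding' state, then transcode
    and record success together with the new file ID and instance ID."""
    file_id = str(uuid.uuid4())
    instance_cache[file_id] = data                  # save un-transcoded file
    cache_server[file_id] = {"state": "transcoding"}

    # In the patent this step runs asynchronously; inline here for clarity.
    new_id = str(uuid.uuid4())
    instance_cache[new_id] = transcode(data)        # save transcoded file
    cache_server[file_id] = {
        "state": "success",          # transcoding state
        "new_file_id": new_id,       # transcoded first file ID
        "instance_id": INSTANCE_ID,  # instance cache that stores it
    }
    return file_id
```

A client would poll the returned `file_id` against the cache server until the state flips to success.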
As shown in fig. 4, the steps when the first host performs file query are as follows:
acquiring a first file query request, querying the transcoding state corresponding to the un-transcoded first file ID from the cache server according to that ID, and if the transcoding state is success, obtaining and returning the transcoded first file ID and the instance ID of the instance cache that stores it from the cache server; this step corresponds to the "query read information" in step 4 of fig. 4, where the transcoded first file ID corresponds to the "new file ID" in step 4.
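A minimal sketch of this query step, assuming the hypothetical state-record layout from the upload sketch (field names are assumptions, not from the patent):

```python
def query(file_id: str, cache_server: dict):
    """Look up the transcoding state for an un-transcoded file ID.
    On success, return the transcoded file ID and the instance ID of
    the cache that stores it; otherwise return the current state."""
    record = cache_server.get(file_id)
    if record is None:
        return "unknown"
    if record["state"] == "success":
        return record["new_file_id"], record["instance_id"]
    return record["state"]
```

The returned pair is exactly what the client then supplies in the download request of step S1.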
As shown in fig. 5, the steps of the first host performing file downloading are as follows:
S1, acquiring a first file download request, the first file download request comprising a transcoded file ID and an instance ID;
S2, judging whether the instance ID is the local instance ID; if so, downloading the transcoded first file from the local instance cache according to the transcoded file ID and the instance ID and returning it; otherwise, reading the transcoded first file from the instance cache on another host according to the transcoded file ID and the instance ID and returning it.
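The S1/S2 routing can be sketched as below. Hypothetical in-memory dicts stand in for the local and remote instance caches; in a real deployment the miss branch would be a network read from the other host:

```python
LOCAL_INSTANCE_ID = "instance-1"

local_cache = {"new-id-1": b"transcoded file bytes"}
remote_caches = {  # instance ID -> that host's instance cache
    "instance-2": {"new-id-2": b"peer transcoded bytes"},
}

def download(transcoded_file_id: str, instance_id: str) -> bytes:
    """S2: serve from the local instance cache on a local-ID hit,
    otherwise read from the instance cache on the other host."""
    if instance_id == LOCAL_INSTANCE_ID:
        return local_cache[transcoded_file_id]          # local hit
    return remote_caches[instance_id][transcoded_file_id]  # inter-instance read
```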
In this embodiment, the docker container in which the instance cache is located includes only a first instance and a second instance, so step S2 specifically comprises:
judging whether the instance ID is the instance ID of the first instance on the local machine; if so, downloading the transcoded first file from the instance cache of the first instance according to the transcoded file ID and returning it; otherwise, reading the transcoded first file from the instance cache of the second instance and returning it.
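In the two-instance case the routing simplifies further: a miss on the first instance can only be a hit on the second, so no lookup of which peer holds the file is needed. A sketch under the same hypothetical assumptions as above:

```python
def download_dual(transcoded_file_id: str, instance_id: str,
                  first_instance_id: str, first_cache: dict,
                  second_cache: dict) -> bytes:
    # With only two instances in the docker container, "not the first
    # instance" implies the file sits in the second instance's cache.
    if instance_id == first_instance_id:
        return first_cache[transcoded_file_id]
    return second_cache[transcoded_file_id]
```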
Referring to fig. 1 to 5, a second embodiment of the present invention is:
On the basis of the first embodiment, in the multi-instance file processing method, obtaining and returning the transcoded first file ID and the instance ID of the instance cache that stores it from the cache server further comprises:
returning the transcoded first file.
That is, when the queried file has been transcoded successfully, the file is returned directly to the client, avoiding the resource cost of the client calling download again.
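The merged query/download interface might look like this sketch (same hypothetical record layout as earlier; returning the bytes directly saves the client a second download call):

```python
def query_and_download(file_id: str, cache_server: dict,
                       instance_cache: dict):
    """If transcoding has succeeded, return the transcoded file bytes
    directly; otherwise return the current state so the client can poll."""
    record = cache_server.get(file_id)
    if record and record["state"] == "success":
        return instance_cache[record["new_file_id"]]  # stream file directly
    return record["state"] if record else "unknown"   # client polls again
```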
Referring to fig. 6, a third embodiment of the present invention is:
A multi-instance file processing terminal 1 comprises a memory 3, a processor 2, and a computer program stored on the memory 3 and executable on the processor 2, wherein the processor 2 implements the steps of the first embodiment when executing the computer program.
In this embodiment, the multi-instance file processing terminal 1 is the terminal on which one instance runs; assume it is the first host. When a file is uploaded or downloaded, requests are distributed by the load balancing server, i.e., a request may be routed to the first host or to the second host. The requested file is stored in the instance cache of only one host, so the invention judges via the instance ID whether the local machine is hit and, on a miss, requests the instance cache of the other host. The file cache can thus be hit from any processing node, saving the hardware and transmission cost of placing files on a common CS server.
Referring to fig. 6, a fourth embodiment of the present invention is:
On the basis of the third embodiment, when the processor 2 of the multi-instance file processing terminal 1 executes the computer program, the steps of the second embodiment are also implemented.
In summary, in the multi-instance file processing method and terminal provided by the invention, when a first file download request is obtained, the requested host judges via the instance ID whether the request hits the local machine; if so, it reads the first file directly from the local instance cache, otherwise it obtains the transcoded first file through file scheduling, so the download task completes either way. Files are stored directly in the instance caches and no CS server is needed, which reduces the hardware cost of the cache service. Meanwhile, file scheduling takes place between two instances, and inter-instance access is far faster than introducing a new CS file server, so file reading time is reduced. For the user, since the client always accesses a single host, data transmission performance between client and host is improved. In addition, the query and download interfaces are combined: if the query finds that transcoding has succeeded, the file is output directly to the client, reducing server resource consumption.
The above description presents only embodiments of the present invention and is not intended to limit its scope; all equivalent changes made using the contents of this specification and drawings, whether applied directly or indirectly in related technical fields, are included in the scope of the present invention.
Claims (10)
1. A multi-instance file processing method is characterized by comprising the following steps:
S1, acquiring a first file download request, the first file download request comprising a transcoded file ID and an instance ID;
S2, judging whether the instance ID is the local instance ID; if so, downloading the transcoded first file from the local instance cache according to the transcoded file ID and the instance ID and returning it; otherwise, reading the transcoded first file from the instance cache on another host according to the transcoded file ID and the instance ID and returning it.
2. The multi-instance file processing method according to claim 1, wherein said step S1 is preceded by:
acquiring a first file upload request, storing the un-transcoded first file to an instance cache, performing asynchronous transcoding on the un-transcoded first file, and saving the un-transcoded first file's ID and transcoding state to a cache server;
and if the transcoding state is success, updating the un-transcoded first file to the transcoded first file, and saving the transcoded first file ID and the instance ID of the instance cache that stores it to the cache server.
3. The multi-instance file processing method according to claim 2, wherein said step S1 is preceded by:
acquiring a first file query request, querying the transcoding state corresponding to the un-transcoded first file ID from the cache server according to that ID, and if the transcoding state is success, obtaining and returning the transcoded first file ID and the instance ID of the instance cache that stores it from the cache server.
4. The multi-instance file processing method according to claim 3, wherein obtaining and returning the transcoded first file ID and the instance ID of the instance cache that stores it from the cache server further comprises:
returning the transcoded first file.
5. The multi-instance file processing method according to any one of claims 1 to 3, wherein if the docker container in which the instance cache is located includes only a first instance and a second instance, step S2 specifically comprises:
judging whether the instance ID is the instance ID of the first instance on the local machine; if so, downloading the transcoded first file from the instance cache of the first instance according to the transcoded file ID and returning it; otherwise, reading the transcoded first file from the instance cache of the second instance and returning it.
6. A multi-instance file processing terminal, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor when executing the computer program implements the steps of:
S1, acquiring a first file download request, the first file download request comprising a transcoded file ID and an instance ID;
S2, judging whether the instance ID is the local instance ID; if so, downloading the transcoded first file from the local instance cache according to the transcoded file ID and the instance ID and returning it; otherwise, reading the transcoded first file from the instance cache on another host according to the transcoded file ID and the instance ID and returning it.
7. The multi-instance file processing terminal according to claim 6, wherein before the step S1, the processor when executing the computer program further implements the steps of:
acquiring a first file upload request, storing the un-transcoded first file to an instance cache, performing asynchronous transcoding on the un-transcoded first file, and saving the un-transcoded first file's ID and transcoding state to a cache server;
and if the transcoding state is success, updating the un-transcoded first file to the transcoded first file, and saving the transcoded first file ID and the instance ID of the instance cache that stores it to the cache server.
8. The multi-instance file processing terminal according to claim 7, wherein before the step S1, the processor when executing the computer program further implements the steps of:
acquiring a first file query request, querying the transcoding state corresponding to the un-transcoded first file ID from the cache server according to that ID, and if the transcoding state is success, obtaining and returning the transcoded first file ID and the instance ID of the instance cache that stores it from the cache server.
9. The multi-instance file processing terminal according to claim 8, wherein obtaining and returning the transcoded first file ID and the instance ID of the instance cache that stores it from the cache server further comprises:
returning the transcoded first file.
10. The multi-instance file processing terminal according to any one of claims 6 to 9, wherein if the docker container in which the instance cache is located includes only a first instance and a second instance, step S2 specifically comprises:
judging whether the instance ID is the instance ID of the first instance on the local machine; if so, downloading the transcoded first file from the instance cache of the first instance according to the transcoded file ID and returning it; otherwise, reading the transcoded first file from the instance cache of the second instance and returning it.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910774235.9A CN110597772A (en) | 2019-08-21 | 2019-08-21 | Multi-instance file processing method and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910774235.9A CN110597772A (en) | 2019-08-21 | 2019-08-21 | Multi-instance file processing method and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110597772A true CN110597772A (en) | 2019-12-20 |
Family
ID=68855009
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910774235.9A Pending CN110597772A (en) | 2019-08-21 | 2019-08-21 | Multi-instance file processing method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110597772A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112035405A (en) * | 2020-08-29 | 2020-12-04 | 平安科技(深圳)有限公司 | Document transcoding method and device, scheduling server and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102882829A (en) * | 2011-07-11 | 2013-01-16 | 腾讯科技(深圳)有限公司 | Transcoding method and system |
CN104346150A (en) * | 2013-07-30 | 2015-02-11 | 华为技术有限公司 | Multiple instance business executable file generating method and device |
CN109753244A (en) * | 2018-12-29 | 2019-05-14 | 北京奥鹏远程教育中心有限公司 | A kind of application method of Redis cluster |
History
- 2019-08-21: Application CN201910774235.9A filed (CN); published as CN110597772A; status: Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112035405A (en) * | 2020-08-29 | 2020-12-04 | 平安科技(深圳)有限公司 | Document transcoding method and device, scheduling server and storage medium |
CN112035405B (en) * | 2020-08-29 | 2023-10-13 | 平安科技(深圳)有限公司 | Document transcoding method and device, scheduling server and storage medium |
Similar Documents
Publication | Title |
---|---|
CN102984286B (en) | Method and device and system of domain name server (DNS) for buffering updating | |
US10764202B2 (en) | Container-based mobile code offloading support system in cloud environment and offloading method thereof | |
US8713186B2 (en) | Server-side connection resource pooling | |
US9117002B1 (en) | Remote browsing session management | |
JP2021511588A (en) | Data query methods, devices and devices | |
CN110730196B (en) | Network resource access method, computer equipment and storage medium | |
CN113010818B (en) | Access current limiting method, device, electronic equipment and storage medium | |
US8296774B2 (en) | Service-based endpoint discovery for client-side load balancing | |
US9088461B2 (en) | Common web accessible data store for client side page processing | |
US20230254312A1 (en) | Service processing method and device | |
WO2018035799A1 (en) | Data query method, application and database servers, middleware, and system | |
EP2011029B1 (en) | Managing network response buffering behavior | |
CN111371585A (en) | Configuration method and device for CDN node | |
CN110597772A (en) | Multi-instance file processing method and terminal | |
US10341454B2 (en) | Video and media content delivery network storage in elastic clouds | |
CN115576654B (en) | Request processing method, device, equipment and storage medium | |
Yuan et al. | Towards efficient deployment of cloud applications through dynamic reverse proxy optimization | |
US7613710B2 (en) | Suspending a result set and continuing from a suspended result set | |
CN110865845B (en) | Method for improving interface access efficiency and storage medium | |
US11755534B2 (en) | Data caching method and node based on hyper-converged infrastructure | |
CN113849255B (en) | Data processing method, device and storage medium | |
CN110247939A (en) | The high-performance combination frame realized using multi-level buffer technology | |
WO2023029610A1 (en) | Data access method and device, and storage medium | |
CN115225712B (en) | Interface arrangement method and terminal | |
CN116795877B (en) | Method and device for pre-reading database, computer equipment and storage medium |
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2019-12-20