CN110753084B - Uplink data reading method, cache server and computer readable storage medium - Google Patents


Info

Publication number
CN110753084B
CN110753084B (application CN201910844248.9A)
Authority
CN
China
Prior art keywords
uplink data
uplink
data
backup
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910844248.9A
Other languages
Chinese (zh)
Other versions
CN110753084A (en)
Inventor
杨小彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Puhui Enterprise Management Co Ltd
Original Assignee
Ping An Puhui Enterprise Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Puhui Enterprise Management Co Ltd filed Critical Ping An Puhui Enterprise Management Co Ltd
Priority to CN201910844248.9A
Publication of CN110753084A
Application granted
Publication of CN110753084B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/56 - Provisioning of proxy services
    • H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application belongs to the technical field of data processing and provides an uplink data reading method, a cache server and a computer readable storage medium. The method comprises the following steps: receiving an uplink task, and backing up uplink data in the uplink task; taking the backup uplink data as first uplink data, and sending the first uplink data to a block chain system; when a data query instruction is received, searching second uplink data corresponding to the data query instruction from the block chain system or the backup uplink data according to the data query instruction; and returning the second uplink data to the issuer of the data query instruction. The method and the device solve the problems of the existing uplink data reading mode, in which a user can read uplink data from the block chain only after waiting for the data to be recorded by the block chain, so that the waiting time is long and the user experience is poor.

Description

Uplink data reading method, cache server and computer readable storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method for reading uplink data, a cache server, and a computer-readable storage medium.
Background
With the development of blockchain technology, blockchains are increasingly used in various fields, such as data storage, intelligent contracts, and the like.
When a block chain is used to store data, the data must be written into a block, but the block chain system needs a certain amount of time to generate a new block; the block generation time of current block chain systems is generally 0.5 to 2 seconds. Moreover, the amount of data that each block can store is limited, so when the amount of uplink data is large, part of the data has to wait a long time before it is recorded on the block chain, and the uplink time is long.
In some application scenarios, a user needs to read uplink data immediately after submitting it to the block chain system. If the data has not yet been recorded on the block chain, the user has to wait until it is, and the user experience is very poor.
In summary, with the existing uplink data reading mode, a user can read uplink data from the block chain only after waiting for it to be recorded by the block chain; the waiting time is long and the user experience is poor.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a method for reading uplink data, a cache server and a computer readable storage medium, to solve the problems of the existing uplink data reading mode in which a user must wait for uplink data to be recorded by the block chain before reading it, resulting in long waiting times and poor user experience.
A first aspect of an embodiment of the present application provides a method for reading uplink data, including:
receiving an uplink task, and backing up uplink data in the uplink task;
taking the backup uplink data as first uplink data, and sending the first uplink data to a block chain system;
when a data query instruction is received, searching second uplink data corresponding to the data query instruction from the block chain system or the backup uplink data according to the data query instruction;
and returning the second uplink data to the issuer of the data query instruction.
A second aspect of an embodiment of the present application provides a cache server, including:
the task backup module is used for receiving an uplink task and backing up uplink data in the uplink task;
the data uplink module is used for taking the backup uplink data as first uplink data and sending the first uplink data to the block chain system;
the data query module is used for searching second uplink data corresponding to a data query instruction from the block chain system or the backup uplink data according to the data query instruction when the data query instruction is received;
and the data feedback module is used for returning the second uplink data to the issuer of the data query instruction.
A third aspect of the embodiments of the present application provides a cache server, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the method when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the steps of the method as described above.
Compared with the prior art, the embodiment of the application has the advantages that:
according to the uplink data reading method, after a user submits an uplink task, the cache server backs up uplink data in the uplink task, when the user wants to inquire the uplink data, corresponding uplink data can be searched from a block chain system or the backed uplink data, even if the uplink data is not recorded by the block chain temporarily, the corresponding uplink data can be searched from the backed uplink data, the uplink data do not need to be waited for successfully linking, and the problems that in an existing uplink data reading mode, the user can read the uplink data from the block chain after waiting for the uplink data to be recorded by the block chain, waiting time is long, and user experience is poor are solved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a system provided by an embodiment of the present application;
fig. 2 is a flowchart illustrating a method for reading uplink data according to an embodiment of the present application;
fig. 3 is a schematic diagram of a cache server according to an embodiment of the present application;
fig. 4 is a schematic diagram of another cache server provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical means described in the present application, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a schematic diagram of a system suitable for use in an embodiment of the present application, where the system includes: a client 101, a cache server 102 and a blockchain system 103; the client 101, cache server 102, and blockchain system 103 communicate over a wired and/or wireless network.
The client 101 may be a mobile phone, a desktop computer, a tablet computer, a notebook computer, a palmtop computer, a Mobile Internet Device (MID), or the like. The number of clients depends on the actual application scenario; there may be one client or a plurality of clients.
The cache server 102 may be a single server or a combination of servers, and is configured to back up the uplink data sent by the client 101 and to send the backed-up uplink data to the block chain system 103.
The block chain system 103 may be a single server or a combination of servers, and is configured to receive uplink data and record the uplink data on a block chain.
The present application provides a method for reading uplink data, a cache server, and a computer-readable storage medium, to solve the problems of the existing uplink data reading mode, in which a user can read uplink data from the block chain only after waiting for the data to be recorded by the block chain, so that the waiting time is long and the user experience is poor.
The first embodiment is as follows:
referring to fig. 2, a method for reading uplink data provided in an embodiment of the present application is described below, where the method for reading uplink data in the embodiment of the present application includes:
step S201, receiving an uplink task, and backing up uplink data in the uplink task;
when the ue needs to record some data in the blockchain, it may submit an uplink task to the cache server, where the uplink task includes uplink data that needs to be uplink.
And after receiving the uplink task, the cache server backs up uplink data in the uplink task.
Step S202, taking the backup uplink data as first uplink data, and sending the first uplink data to a block chain system;
After the cache server backs up the uplink data, the backed-up uplink data is used as first uplink data and sent to the block chain system; uplink data that has not yet been backed up waits to be backed up first.
Step S203, when a data query instruction is received, searching for second uplink data corresponding to the data query instruction from the block chain system or the backup uplink data according to the data query instruction;
When the client needs to query the uplink data, it can send a data query instruction to the cache server.
When the cache server receives the data query instruction, it may search for the second uplink data corresponding to the instruction in the block chain system or in the backup uplink data.
Step S204, returning the second uplink data to the issuer of the data query instruction.
After the second uplink data is found, it is returned to the issuer of the data query instruction.
Because the uplink data is backed up, the second uplink data can be found in the uplink data locally backed up by the cache server even if it has not yet been recorded on the block chain. The user therefore does not have to wait for the uplink to complete during a query, which saves time and improves the user experience.
For example, some software uploads a user's operation data to the block chain: when the user exits the software, the operation data is uploaded to the chain, and when the user re-enters the software, the last operation data is read from the chain and the user's operation progress is restored. In the traditional uplink mode, if the user exits and immediately restarts the software, the last operation data has not yet been recorded on the block chain, so the user must wait a relatively long time until the data is recorded before the operation progress can be restored; the waiting time is long and the user experience is very poor. With the uplink data reading method of this embodiment, when the user exits and immediately restarts the software, the last operation data is still not recorded on the block chain, but the software can obtain it from the uplink data backed up in the cache server, quickly restore the user's operation progress, reduce the waiting time and improve the user experience.
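To make the flow of steps S201 to S204 concrete, the following Python sketch shows how a cache server of this kind might back up uplink data, forward it to the block chain system, and answer queries from the backup first. It is only an illustration: the class name, the `blockchain` client object and its `submit`/`query` methods are assumptions made here, not part of this application.

```python
class CacheServer:
    """Minimal sketch of steps S201-S204; `blockchain` is a hypothetical
    client object with submit()/query() methods."""

    def __init__(self, blockchain):
        self.blockchain = blockchain
        self.backup = {}   # locally backed-up uplink data, keyed by a data id

    def receive_uplink_task(self, task_id, uplink_data):
        # Step S201: back up the uplink data contained in the uplink task.
        self.backup[task_id] = uplink_data
        # Step S202: treat the backed-up data as "first uplink data" and send it on.
        self.blockchain.submit(task_id, uplink_data)

    def handle_query(self, query_id):
        # Step S203: look for the "second uplink data" in the backup or on the chain.
        if query_id in self.backup:
            result = self.backup[query_id]
        else:
            result = self.blockchain.query(query_id)
        # Step S204: return the result to the issuer of the data query instruction.
        return result
```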
It should be understood that the cache server in this embodiment may be one server or multiple servers. The cache server may be deployed separately, or it may be integrated into the client or into a block chain node. For example, in some embodiments a separate computer serves as the cache server, while in other embodiments the client itself takes on the functions of the cache server; the specific implementation may be chosen according to actual needs.
Further, the receiving an uplink task and backing up the uplink data in the uplink task specifically includes:
A1, receiving an uplink task through a task queue;
If the cache server receives a large number of uplink tasks in a short time, the server may become congested and the uplink efficiency may suffer. Therefore, the cache server can receive uplink tasks through a task queue and then take the uplink tasks out of the queue one by one to perform the uplink operation. This guarantees orderly processing of the uplink tasks, avoids server congestion caused by too many uplink tasks, and improves uplink efficiency.
A2, storing the uplink data in the uplink task into a designated storage area for backup.
After the cache server extracts an uplink task from the task queue, the uplink data in the uplink task can be stored in the designated storage area for backup.
The designated storage area can be set according to actual requirements; it may be a cache area or a persistent storage area.
It should be understood that the task queue is part of the cache server, and both storing the uplink data in the storage area corresponding to the task queue and storing it in the designated storage area can be regarded as backing up the uplink data. When searching the backup uplink data for the second uplink data, the search may therefore be performed in the task queue or in the designated storage area.
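A possible realization of steps A1 and A2 is sketched below, using Python's standard `queue.Queue` as the task queue and a plain dictionary standing in for the designated storage area; the worker loop and all names are illustrative assumptions rather than the patented implementation.

```python
import queue

task_queue = queue.Queue()   # A1: uplink tasks are received through a task queue
designated_storage = {}      # A2: stands in for the designated storage area (cache or persistent store)

def receive_uplink_task(task_id, uplink_data):
    # A1: enqueue the task instead of processing it at once,
    # so that a burst of uplink tasks does not congest the server.
    task_queue.put((task_id, uplink_data))

def uplink_worker(send_to_blockchain):
    # Tasks are taken out of the queue one by one and processed in order.
    while True:
        task_id, uplink_data = task_queue.get()
        designated_storage[task_id] = uplink_data   # A2: back up before uplinking
        send_to_blockchain(task_id, uplink_data)    # hand the first uplink data to the chain
        task_queue.task_done()

# The worker would normally run in a background thread, e.g.
# threading.Thread(target=uplink_worker, args=(submit_fn,), daemon=True).start()
```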
Further, after the backup uplink data is used as the first uplink data and the first uplink data is sent to the block chain system, the method further includes:
B1, judging whether the first uplink data is successfully uplinked;
After the cache server sends the first uplink data to the block chain system, it can also monitor the uplink state of the first uplink data to determine whether the first uplink data has been successfully uplinked.
B2, if the first uplink data is successfully uplinked, deleting the backup uplink data that is the same as the first uplink data.
If the first uplink data is successfully uplinked, it has been recorded by the block chain and can be found in the block chain system. To save storage space on the cache server, the backed-up uplink data that is the same as the first uplink data can then be deleted. The deletion mode may be set to immediate deletion or periodic deletion: with immediate deletion, the deletion is executed as soon as a successful uplink is detected; with periodic deletion, the deletion is executed once every preset time interval.
Further, after the judging whether the first uplink data is successfully uplinked, the method further includes:
C1, if the uplink of the first uplink data fails, adding the uplink task corresponding to the first uplink data into the task queue again.
If the uplink of the first uplink data fails, the corresponding uplink task is added to the task queue again to wait its turn, and the uplink is retried automatically. This retry mechanism on the cache server simplifies the client's work: the client does not need to manually restart the uplink process after a failure.
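Steps B1, B2 and C1 can be pictured as a small status-handling routine such as the one below; the `check_uplink_status` callback, the immediate-deletion policy and the re-enqueueing of failed tasks are assumptions made for illustration only.

```python
def handle_uplink_result(task_id, uplink_data, check_uplink_status,
                         designated_storage, task_queue):
    # B1: ask the block chain system whether the first uplink data was recorded.
    success = check_uplink_status(task_id)
    if success:
        # B2: the data is on the chain, so the local backup can be removed
        # (immediate deletion here; a periodic sweep would also work).
        designated_storage.pop(task_id, None)
    else:
        # C1: the uplink failed, so re-add the uplink task to the task queue
        # and let it be retried automatically.
        task_queue.put((task_id, uplink_data))
    return success
```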
Further, when a data query instruction is received, the searching for second uplink data corresponding to the data query instruction from the block chain system or the backup uplink data according to the data query instruction specifically includes:
D1, when a data query instruction is received, judging, according to the data query instruction, whether second uplink data corresponding to the data query instruction exists in the backup uplink data;
When the cache server receives the data query instruction, it can search the locally backed-up uplink data according to the instruction and determine whether the backed-up uplink data contains the second uplink data corresponding to the data query instruction.
D2, if the second uplink data exists in the backup uplink data, acquiring the second uplink data;
When the second uplink data exists in the backup uplink data, the second uplink data can be acquired directly.
D3, if the second uplink data does not exist in the backup uplink data, searching for the second uplink data from the block chain system.
When the second uplink data does not exist in the backup uplink data, this indicates that it has already been successfully uplinked, so the second uplink data can be searched for in the block chain system.
In addition, in actual application the order may also be reversed: the second uplink data may first be searched for in the block chain system, and if it cannot be found there, it is then searched for in the uplink data locally backed up by the cache server, as sketched below.
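The lookup order of steps D1 to D3, together with the reversed chain-first variant described above, could look roughly as follows; `query_blockchain` is a hypothetical callback standing in for a query against the block chain system.

```python
def find_second_uplink_data(query_id, designated_storage, query_blockchain,
                            backup_first=True):
    if backup_first:
        # D1/D2: check the locally backed-up uplink data first.
        if query_id in designated_storage:
            return designated_storage[query_id]
        # D3: not in the backup, so it should already be on the chain.
        return query_blockchain(query_id)
    else:
        # Reversed order: try the block chain system first,
        # then fall back to the local backup.
        result = query_blockchain(query_id)
        if result is None:
            result = designated_storage.get(query_id)
        return result
```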
In the uplink data reading method provided in this embodiment, after a user submits an uplink task, the cache server backs up the uplink data in the task. When the user wants to query the uplink data, the corresponding data can be retrieved from the block chain system or from the backed-up uplink data; even if the data has not yet been recorded on the block chain, it can be found in the backup, so there is no need to wait for the uplink to succeed. This solves the problems of the existing uplink data reading mode, in which a user can read uplink data from the block chain only after it has been recorded on the chain, so that the waiting time is long and the user experience is poor.
The cache server can receive uplink tasks through the task queue and take them out of the queue for processing in order, which prevents server congestion caused by too many uplink tasks and keeps the uplink efficiency from being affected.
After the first uplink data is sent to the block chain system, its uplink state is monitored; once the uplink succeeds, the backed-up uplink data that is the same as the first uplink data is deleted, relieving the storage pressure on the cache server.
When the uplink of the first uplink data fails, the cache server can directly re-add the corresponding uplink task to the task queue, so the user does not need to manually trigger a re-uplink, which reduces the user's operations.
When the second uplink data is queried, the cache server can first search for it locally; if it is found, it is obtained directly, and if not, it is searched for in the block chain system.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by functions and internal logic of the process, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example two:
the second embodiment of the present application provides a cache server, which is only shown in relevant parts for convenience of description, and as shown in fig. 3, the cache server includes,
a task backup module 301, configured to receive an uplink task and backup uplink data in the uplink task;
a data uplink module 302, configured to take the backed-up uplink data as first uplink data and send the first uplink data to a block chain system;
a data query module 303, configured to, when a data query instruction is received, search, according to the data query instruction, second uplink data corresponding to the data query instruction from the block chain system or the backup uplink data;
a data feedback module 304, configured to return the second uplink data to the issuer of the data query instruction.
Further, the task backup module 301 specifically includes:
a receiving submodule, configured to receive an uplink task through a task queue;
and the storage submodule is used for storing uplink data in the uplink task into a specified storage area for backup.
Further, the server further includes:
a state obtaining module, configured to determine whether the first uplink data is successfully uplinked;
and the backup deleting module is used for deleting the backup uplink data which is the same as the first uplink data if the first uplink data is successfully uplinked.
Further, the server further includes:
and the failure restarting module is used for adding the uplink task corresponding to the first uplink data into the task queue again if the uplink of the first uplink data fails.
Further, the data query module 303 specifically includes:
the instruction submodule is used for judging whether second uplink data corresponding to the data query instruction exists in the backup uplink data or not according to the data query instruction when the data query instruction is received;
the backup submodule is used for acquiring the second uplink data if the second uplink data exists in the backup uplink data;
the searching submodule is configured to search the second uplink data from the block chain system if the second uplink data does not exist in the backup uplink data.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Example three:
fig. 4 is a schematic diagram of a cache server according to a third embodiment of the present application. As shown in fig. 4, the cache server 4 of this embodiment includes: a processor 40, a memory 41 and a computer program 42 stored in said memory 41 and executable on said processor 40. The processor 40 executes the computer program 42 to implement the steps in the above-mentioned uplink data reading method embodiment, such as steps S201 to S204 shown in fig. 2. Alternatively, the processor 40, when executing the computer program 42, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 301 to 304 shown in fig. 3.
Illustratively, the computer program 42 may be partitioned into one or more modules/units that are stored in the memory 41 and executed by the processor 40 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 42 in the cache server 4. For example, the computer program 42 may be divided into a task backup module, a data uplink module, a data query module, and a data feedback module, and each module has the following functions:
the task backup module is used for receiving an uplink task and backing up uplink data in the uplink task;
the data uplink module is used for taking the backup uplink data as first uplink data and sending the first uplink data to the block chain system;
the data query module is used for searching second uplink data corresponding to a data query instruction from the block chain system or the backup uplink data according to the data query instruction when the data query instruction is received;
and the data feedback module is used for returning the second uplink data to the issuer of the data query instruction.
The cache server 4 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The cache server may include, but is not limited to, a processor 40, a memory 41. Those skilled in the art will appreciate that fig. 4 is merely an example of a cache server 4, and does not constitute a limitation of the cache server 4, and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the cache server may also include input output devices, network access devices, buses, etc.
The Processor 40 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the cache server 4, such as a hard disk or a memory of the cache server 4. The memory 41 may also be an external storage device of the cache server 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the cache server 4. Further, the memory 41 may also include both an internal storage unit and an external storage device of the cache server 4. The memory 41 is used for storing the computer program and other programs and data required by the cache server. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/cache server and method may be implemented in other ways. For example, the above-described apparatus/cache server embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, read-Only Memory (ROM), random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (9)

1. A method for reading uplink data, comprising:
receiving an uplink task, and backing up uplink data in the uplink task;
taking the uplink data after backup as first uplink data, sending the first uplink data to a block chain system, and waiting for backup of the uplink data which is not backed up;
when a data query instruction is received, searching second uplink data corresponding to the data query instruction from the block chain system or the backup uplink data according to the data query instruction, wherein the searching comprises: when a data query instruction is received, judging whether second uplink data corresponding to the data query instruction exists in the backup uplink data according to the data query instruction; if the second uplink data exists in the backup uplink data, acquiring the second uplink data; if the second uplink data does not exist in the backup uplink data, searching the second uplink data from the block chain system;
and returning the second uplink data to the issuer of the data query instruction.
2. The method of claim 1, wherein the receiving the uplink task and the backup of the uplink data in the uplink task comprises:
receiving an uplink task through a task queue;
and storing uplink data in the uplink task into a designated storage area for backup.
3. The method of claim 2, wherein after the backup uplink data is used as the first uplink data and the first uplink data is sent to the block chain system, the method further comprises:
judging whether the first uplink data is successfully uplinked;
and if the first uplink data is successfully uplinked, deleting the backup uplink data which is the same as the first uplink data.
4. The method of claim 3, wherein after said judging whether said first uplink data is successfully uplinked, the method further comprises:
and if the uplink of the first uplink data fails, adding the uplink task corresponding to the first uplink data into the task queue again.
5. A cache server, comprising:
the task backup module is used for receiving an uplink task and backing up uplink data in the uplink task;
the data uplink module is used for taking the backup uplink data as first uplink data, sending the first uplink data to the block chain system, and waiting for backup of the uplink data which is not backed up;
the data query module is used for searching second uplink data corresponding to a data query instruction from the block chain system or the backup uplink data according to the data query instruction when the data query instruction is received;
a data feedback module, configured to return the second uplink data to the issuer of the data query instruction;
the data query module specifically comprises:
the instruction submodule is used for judging whether second uplink data corresponding to the data query instruction exists in the backup uplink data or not according to the data query instruction when the data query instruction is received;
a backup sub-module, configured to obtain the second uplink data if the second uplink data exists in the backup uplink data;
the searching submodule is configured to search the second uplink data from the block chain system if the second uplink data does not exist in the backup uplink data.
6. The cache server of claim 5, wherein the task backup module specifically comprises:
a receiving submodule, configured to receive an uplink task through a task queue;
and the storage submodule is used for storing uplink data in the uplink task into a specified storage area for backup.
7. The cache server of claim 5, wherein the server further comprises:
a state obtaining module, configured to determine whether the first uplink data is successfully uplinked;
and the backup deleting module is used for deleting the backup uplink data which is the same as the first uplink data if the first uplink data is successfully uplinked.
8. A cache server comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor implements the steps of the method according to any of claims 1 to 4 when executing said computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201910844248.9A 2019-09-06 2019-09-06 Uplink data reading method, cache server and computer readable storage medium Active CN110753084B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910844248.9A CN110753084B (en) 2019-09-06 2019-09-06 Uplink data reading method, cache server and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910844248.9A CN110753084B (en) 2019-09-06 2019-09-06 Uplink data reading method, cache server and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110753084A CN110753084A (en) 2020-02-04
CN110753084B true CN110753084B (en) 2023-04-07

Family

ID=69276214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910844248.9A Active CN110753084B (en) 2019-09-06 2019-09-06 Uplink data reading method, cache server and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110753084B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274258A (en) * 2020-02-10 2020-06-12 刘翱天 Block chain data uplink method
CN111309783A (en) * 2020-02-10 2020-06-19 刘翱天 Cochain data reading system
CN114625767A (en) * 2020-11-10 2022-06-14 支付宝(杭州)信息技术有限公司 Data query method, device, equipment and readable medium
CN112328690A (en) * 2020-11-12 2021-02-05 星矿科技(北京)有限公司 Efficient block chain access method
CN112612816B (en) * 2020-12-01 2023-06-30 网易(杭州)网络有限公司 Service result query method, device, equipment and medium of Ethernet alliance chain
CN114881760B (en) * 2022-04-29 2023-04-07 深圳市智策科技有限公司 Data management method and system based on block chain

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985757B (en) * 2017-11-27 2021-03-30 京东数字科技控股有限公司 Information processing method, device and system, storage medium and electronic equipment
CN109086398A (en) * 2018-07-26 2018-12-25 深圳前海微众银行股份有限公司 Asynchronous cochain method, equipment and computer readable storage medium
CN109542945B (en) * 2018-10-19 2023-09-22 平安科技(深圳)有限公司 Block chain data statistical analysis method, device and storage medium

Also Published As

Publication number Publication date
CN110753084A (en) 2020-02-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant