CN117056971A - Data storage method, device, electronic equipment and readable storage medium

Info

Publication number: CN117056971A
Authority: CN (China)
Prior art keywords: cache, back-end interface, page information, annotation
Legal status: Granted
Application number: CN202311028498.8A
Other languages: Chinese (zh)
Other versions: CN117056971B (en)
Inventors: 陈小龙, 刘飞, 李跃红
Assignee (current and original): Beijing Fangduoduo Information Technology Co., Ltd.
Application filed by Beijing Fangduoduo Information Technology Co., Ltd.; priority claimed from CN202311028498.8A
Events: publication of CN117056971A; application granted; publication of CN117056971B
Current legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G06F21/6263: Protecting personal data, e.g. for financial or medical purposes during internet communication, e.g. revealing personal data from cookies
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/958: Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the invention provides a data storage method, a data storage device, electronic equipment and a readable storage medium, wherein the method comprises the following steps: responding to an audit request of a client for a target page, determining a cache back-end interface corresponding to the target page, acquiring page information corresponding to the cache back-end interface, and returning it to the client; configuring a first cache annotation for the cache back-end interface; caching the page information corresponding to the cache back-end interface configured with the first cache annotation; responding to an audit confirmation request submitted by the client for the page information corresponding to the cache back-end interface, and determining the cache back-end interface as a storage back-end interface; configuring a behavior annotation for the storage back-end interface; and storing the page information corresponding to the cache back-end interface matched with the storage back-end interface configured with the behavior annotation. With the embodiment of the invention, when the audited page information needs to be traced back, the traced-back page information is highly reliable.

Description

Data storage method, device, electronic equipment and readable storage medium
Technical Field
Embodiments of the present invention relate to the field of internet technologies, and in particular, to a data storage method, a data storage device, an electronic apparatus, and a computer readable storage medium.
Background
With the development of internet technology, users can browse page information in pages through terminal devices such as smart phones and computers, and then carry out house transactions, online shopping, job hunting and the like based on the page information, which greatly improves the convenience of people's lives and enriches their diversity.
In practical applications, in order to ensure that the page information in a page is displayed truly and reliably, with accurate text descriptions, clear pictures, no violations and the like, auditors need to audit the page information in the page.
In order to facilitate backtracking of the auditing behavior of the auditor, the page information in the page audited by the auditor at that time needs to be restored. Specifically, the page information corresponding to each back-end interface in the current page first needs to be traversed and audited by the auditor; then, after the auditor confirms that the page information has been audited, the page information corresponding to each back-end interface in the current page needs to be stored, and at this point each back-end interface in the current page needs to be traversed again to acquire the corresponding page information. However, re-traversing to acquire the page information of the back-end interfaces carries a risk: the page information corresponding to a back-end interface may have changed, and the auditor does not know the page information before the change, so the traced-back page information is unreliable.
Disclosure of Invention
The embodiment of the invention provides a data storage method, a data storage device, electronic equipment and a computer readable storage medium, which are used for solving the problem that traced-back page information is unreliable because the stored page information differs from the page information at the time the auditor audited it.
The embodiment of the invention discloses a data storage method which is applied to a server, and comprises the following steps:
responding to an audit request of a client for a target page, determining a cache back-end interface corresponding to the target page, acquiring page information corresponding to the cache back-end interface, and returning to the client;
configuring a first cache annotation for the cache back-end interface; the first cache annotation is used for indicating the server to cache page information corresponding to the cache back end interface;
caching page information corresponding to the cache back end interface configured with the first cache annotation;
responding to an audit confirmation request submitted by the client for page information corresponding to the cache back-end interface, and determining the cache back-end interface as a storage back-end interface;
configuring behavior annotation for the storage back-end interface; the behavior annotation is used for indicating the server to store page information corresponding to the storage back-end interface;
And storing page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation, so as to ensure that the stored page information is the page information after the client submits an audit confirmation request.
Optionally, the server is provided with a cache region, and the caching of page information corresponding to the cache back end interface configured with the first cache annotation includes:
acquiring an entry parameter corresponding to the cache back end interface from the first cache annotation, and generating a unique identifier of the cache back end interface according to the entry parameter corresponding to the cache back end interface;
according to the unique identifier of the cache back end interface, caching page information corresponding to the cache back end interface, configured with the first cache annotation, in the cache region;
the storing the page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation includes:
acquiring an entry parameter corresponding to the storage back-end interface from the behavior annotation, and generating a unique identifier of the storage back-end interface according to the entry parameter corresponding to the storage back-end interface;
And storing page information corresponding to the unique identifier of the cache back end interface matched with the unique identifier of the storage back end interface configured with the behavior annotation in the cache region.
Optionally, after the storing of the page information corresponding to the cache back-end interface matched with the storage back-end interface configured with the behavior annotation, the method further includes:
responding to a backtracking request of the client for a target page, determining a cache back end interface corresponding to the target page, and configuring a second cache annotation for the cache back end interface; the second cache annotation is used for indicating the server to acquire page information corresponding to the cache back end interface;
and acquiring page information corresponding to the storage back end interface matched with the cache back end interface configured with the second cache annotation, serving as page information corresponding to the cache back end interface, and returning to the client for backtracking at the client according to the page information of the target page.
Optionally, the obtaining, as the page information corresponding to the cache back end interface, page information corresponding to the storage back end interface matched with the cache back end interface configured with the second cache annotation, and returning to the client includes:
Obtaining an entry parameter corresponding to the cache back end interface from the second cache annotation, and generating a unique identifier of the cache back end interface according to the entry parameter corresponding to the cache back end interface;
and returning page information corresponding to the unique identifier of the storage back-end interface matched with the unique identifier of the cache back-end interface to the client.
Optionally, the storing the page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation includes:
acquiring a client audit identifier in the first cache annotation; the client auditing identification is correspondingly generated when the client audits the page information each time;
and storing page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation according to the client audit identifier.
Optionally, after the obtaining the page information corresponding to the cache back end interface matched with the cache back end interface configured with the second cache annotation, the method further includes:
And when more than two pieces of page information with different client audit identifications and the same cache back end interface exist, marking the difference content between the page information of the cache back end interface.
Optionally,
the unique identification is calculated by a message digest algorithm;
the storage is a persistent storage which is realized by adopting a storage snapshot technology;
the server is provided with an interceptor through aspect-oriented programming, and the interceptor is used for intercepting the cache back-end interface configured with the first cache annotation and the second cache annotation, and for intercepting the storage back-end interface configured with the behavior annotation.
The embodiment of the invention also discloses a data storage device which is applied to the server, and the device comprises:
the cache back end interface determining module is used for responding to an audit request of a client for a target page, determining a cache back end interface corresponding to the target page, acquiring page information corresponding to the cache back end interface and returning to the client;
the cache annotation configuration module is used for configuring a first cache annotation for the cache back-end interface; the first cache annotation is used for indicating the server to cache page information corresponding to the cache back end interface;
The caching module is used for caching page information corresponding to the cache back end interface configured with the first cache annotation;
the storage back end interface determining module is used for responding to an audit confirmation request submitted by the client for page information corresponding to the cache back end interface, and determining the cache back end interface as a storage back end interface;
the behavior annotation configuration module is used for configuring behavior annotations for the storage back-end interface; the behavior annotation is used for indicating the server to store page information corresponding to the storage back-end interface;
the storage module is used for storing page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation so as to ensure that the stored page information is page information after submitting an audit confirmation request for the client.
Optionally, the server is provided with a cache region, and the caching module is specifically configured to:
acquiring an entry parameter corresponding to the cache back end interface from the first cache annotation, and generating a unique identifier of the cache back end interface according to the entry parameter corresponding to the cache back end interface;
according to the unique identifier of the cache back end interface, caching page information corresponding to the cache back end interface configured with the first cache annotation in the cache region;
the storage module is specifically configured to:
acquiring an entry parameter corresponding to the storage back-end interface from the behavior annotation, and generating a unique identifier of the storage back-end interface according to the entry parameter corresponding to the storage back-end interface;
and storing page information corresponding to the unique identifier of the cache back end interface matched with the unique identifier of the storage back end interface configured with the behavior annotation in the cache region.
Optionally, the apparatus further comprises: backtracking module for:
responding to a backtracking request of the client for a target page, determining a cache back end interface corresponding to the target page, and configuring a second cache annotation for the cache back end interface; the second cache annotation is used for indicating the server to acquire page information corresponding to the cache back end interface;
and acquiring page information corresponding to the storage back end interface matched with the cache back end interface configured with the second cache annotation, serving as page information corresponding to the cache back end interface, and returning to the client for backtracking at the client according to the page information of the target page.
Optionally, the backtracking module is specifically configured to:
obtaining an entry parameter corresponding to the cache back end interface from the second cache annotation, and generating a unique identifier of the cache back end interface according to the entry parameter corresponding to the cache back end interface;
and returning page information corresponding to the unique identifier of the storage back-end interface matched with the unique identifier of the cache back-end interface to the client.
Optionally, the storage module is specifically configured to:
acquiring a client audit identifier in the first cache annotation; the client auditing identification is correspondingly generated when the client audits the page information each time;
and storing page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation according to the client audit identifier.
Optionally, the apparatus further comprises: a differentiation marking module for:
and when more than two pieces of page information with different client audit identifications and the same cache back end interface exist, marking the difference content between the page information of the cache back end interface.
Optionally, the unique identifier is calculated by a message digest algorithm; the storage is a persistent storage which is realized by adopting a storage snapshot technology; the server is provided with an interceptor through aspect-oriented programming, and the interceptor is used for intercepting the cache back-end interface configured with the first cache annotation and the second cache annotation, and for intercepting the storage back-end interface configured with the behavior annotation.
The embodiment of the invention also discloses electronic equipment, which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method according to the embodiment of the present invention when executing the program stored in the memory.
The embodiment of the invention also discloses a computer program product which is stored in a storage medium and is executed by at least one processor to realize the method according to the embodiment of the invention.
Embodiments of the present invention also disclose a computer-readable storage medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the method according to the embodiments of the present invention.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, a server determines a cache back-end interface corresponding to a target page in response to an audit request of a client for the target page, acquires page information corresponding to the cache back-end interface, returns to the client for audit of the page information, configures a first cache annotation for the cache back-end interface, caches page information corresponding to the cache back-end interface configured with the first cache annotation, and then determines the cache back-end interface as a storage back-end interface and configures behavior annotation for the storage back-end interface in response to an audit confirmation request submitted by the client for the page information corresponding to the cache back-end interface, wherein the behavior annotation is used for indicating the server to store the page information corresponding to the storage back-end interface, and stores the page information corresponding to the cache back-end interface matched with the storage back-end interface configured with the behavior annotation so as to ensure that the stored page information is the page information after the client submits the audit confirmation request. When the auditor acquires the page information of the target page, the first cache annotation can be configured on the cache back end interface corresponding to the target page, the server can intercept and cache the page information corresponding to the target back end interface configured with the first cache annotation, when the auditor audits the page information of the target page, the audited cache back end interface can be determined to be a storage back end interface, and the behavior annotation can be configured, the server can intercept the storage back end interface configured with the behavior annotation, so that the page information stored on the server is the same as the page information required to be audited by the auditor, and the retrospective page information is reliable when the auditor needs to retrospectively audit the page information.
Drawings
FIG. 1 is a schematic illustration of an application environment provided in an embodiment of the present invention;
FIG. 2 is a flow chart of steps of a data storage method provided in an embodiment of the present invention;
FIG. 3 is a timing diagram for auditing a target page according to an embodiment of the present invention;
FIG. 4 is a block diagram of a target page auditing flow according to an embodiment of the present invention;
FIG. 5 is a block diagram of a data storage device provided in an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
First, some terms related to the embodiments of the present invention will be described:
Storage snapshot: a snapshot persists the data as it exists at a certain point in time.
md5: MD5 (Message-Digest Algorithm 5) is a widely used cryptographic hash function that generates a 128-bit (16-byte) hash value, used to verify that transferred information is consistent.
Aspect: an aspect (AOP, Aspect-Oriented Programming), also commonly referred to as aspect-oriented programming or aspect programming, is a technique that enables unified maintenance of program functions through precompilation and run-time dynamic proxies.
Annotation: an annotation, also known as a Java Annotation, is an annotation mechanism introduced in JDK 5.0.
Caching: in the embodiment of the present invention, caching refers to temporarily storing data in a designated cache region.
Storing: in the embodiment of the present invention, storing refers to persisting data, for example by means of a storage snapshot.
The data storage method provided by the embodiment of the invention can be applied to an application environment shown in FIG. 1. The server 101 is provided with a server side, the electronic device 102 is provided with a client, and the server side and the client can communicate through a network. Specifically, the server responds to an audit request of the client for a target page, determines a cache back-end interface corresponding to the target page, acquires page information corresponding to the cache back-end interface, and returns the page information to the client; configures a first cache annotation for the cache back-end interface, the first cache annotation being used for indicating the server to cache the page information corresponding to the cache back-end interface; caches the page information corresponding to the cache back-end interface configured with the first cache annotation; responds to an audit confirmation request submitted by the client for the page information corresponding to the cache back-end interface, and determines the cache back-end interface as a storage back-end interface; configures a behavior annotation for the storage back-end interface, the behavior annotation being used for indicating the server to store the page information corresponding to the storage back-end interface; and stores the page information corresponding to the cache back-end interface matched with the storage back-end interface configured with the behavior annotation, so as to ensure that the stored page information is the page information after the client submits the audit confirmation request.
In practical applications, the electronic device 102 may include, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, vehicle-mounted terminals and portable wearable devices; the server 101 may be an independently running server or a server cluster composed of a plurality of servers, and may be a cloud server.
Referring to fig. 2, a flowchart of steps of a data storage method provided in an embodiment of the present invention is shown and applied to a server, where the method specifically may include the following steps:
step 201, responding to an audit request of a client for a target page, determining a cache back-end interface corresponding to the target page, acquiring page information corresponding to the cache back-end interface, and returning to the client.
In a specific implementation, each page has one or more corresponding back-end interfaces on the server (back end); a back-end interface may also be referred to as a class method, and the server may obtain corresponding page information through the back-end interface. For example, assuming that the page is a page for displaying the listing information of a property listing, the page may correspond to three back-end interfaces, and the server may obtain the listing name data, the listing location data and the listing detail data in the listing information through the three back-end interfaces, respectively.
The target page may refer to the page currently being audited by the auditor. Specifically, when the auditor at the client (front end) prepares to audit the target page, an audit request for the target page can be initiated on the client. In response to the audit request of the client for the target page, the server (back end) can determine one or more cache back-end interfaces corresponding to the target page; after corresponding page information is acquired through the cache back-end interfaces, the page information corresponding to the cache back-end interfaces can be returned to the client, so that the auditor can audit the page information corresponding to the cache back-end interfaces and then, based on the page information, confirm that the corresponding target page passes the audit or is rejected.
Step 202, configuring a first cache annotation for the cache back end interface; the first cache annotation is used for indicating the server to cache page information corresponding to the cache back end interface.
Step 203, caching page information corresponding to the cache back end interface configured with the first cache annotation.
In the embodiment of the invention, for page information corresponding to a cache back-end interface that needs to be stored to facilitate subsequent backtracking, a first cache annotation (SnapshotCache) can be configured for each such cache back-end interface. The first cache annotation is used for representing that the page information corresponding to the cache back-end interface needs to be cached. The first cache annotation can comprise params, order and page, where params represents the entry parameters of the queried cache back-end interface, and the params parameter set is unique; order represents a client audit identifier, through which the page information of the target page can be traced back; and page indicates whether the target page is paged.
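For illustration only, such a first cache annotation could be declared as a runtime-retained Java method annotation carrying the three fields described above; the name SnapshotCache, the field types and the defaults in the sketch below are assumptions, since the embodiment only describes the meaning of params, order and page.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical first cache annotation with the three fields described in the embodiment.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface SnapshotCache {
    String[] params();             // entry parameters of the queried cache back-end interface (the set is unique)
    String order() default "";     // client audit identifier used to trace back the page information
    boolean page() default false;  // whether the target page is paged
}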
Specifically, the server is provided with an interceptor through aspect-oriented programming, and the interceptor may be configured to intercept and process back-end interfaces configured with specified annotations. In the embodiment of the invention, the interceptor can intercept the page information corresponding to the cache back-end interface configured with the first cache annotation and cache the page information at the server.
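A minimal sketch of such an interceptor is given below. It assumes Spring AOP / AspectJ annotations; CacheRegion and UniqueIdUtil are hypothetical helpers standing in for the server-side cache region and the unique-identifier calculation described later, and none of these names come from the patent text.

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class SnapshotInterceptor {

    private final CacheRegion cacheRegion; // hypothetical abstraction over the server-side cache region

    public SnapshotInterceptor(CacheRegion cacheRegion) {
        this.cacheRegion = cacheRegion;
    }

    // Intercept any cache back-end interface configured with the first cache annotation.
    @Around("@annotation(snapshotCache)")
    public Object cachePageInfo(ProceedingJoinPoint joinPoint, SnapshotCache snapshotCache) throws Throwable {
        Object pageInfo = joinPoint.proceed();                 // invoke the cache back-end interface
        String key = UniqueIdUtil.md5Of(joinPoint.getArgs());  // unique identifier from the entry parameters (sketched further below)
        cacheRegion.put(snapshotCache.order(), key, pageInfo); // cache the page information at the server
        return pageInfo;                                       // the client still receives the page information unchanged
    }
}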
And 204, responding to an audit confirmation request submitted by the client for page information corresponding to the cache back-end interface, and determining the cache back-end interface as a storage back-end interface.
Step 205, configuring behavior annotation for the storage back-end interface; the behavior annotation is used for indicating the server to store page information corresponding to the storage back-end interface.
And 206, storing page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation, so as to ensure that the stored page information is page information after submitting an audit confirmation request for the client.
In the embodiment of the present invention, after the auditor completes auditing the page information corresponding to the cache back-end interfaces, an audit confirmation request indicating that the page information passes the audit or is rejected may be initiated, and in response to the audit confirmation request the server may determine the cache back-end interfaces confirmed by the request as storage back-end interfaces. For example, if the target page includes the cache back-end interfaces A, B and C, and the auditor completes auditing the page information corresponding to the cache back-end interfaces A, B and C, then the cache back-end interfaces A, B and C may be determined as storage back-end interfaces. Then, a behavior annotation (SnapshotAction) may be configured for each storage back-end interface; the behavior annotation is used to represent that the page information corresponding to the storage back-end interface needs to be stored, and the behavior annotation may include params, in which the cache back-end interfaces that need to be stored, such as A, B and C, are enumerated.
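As a companion sketch, again with an assumed name and shape, the behavior annotation only needs a params field that enumerates the cache back-end interfaces whose cached page information should be persisted:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical behavior annotation; params enumerates the storage back-end interfaces to persist,
// for example {"interfaceA", "interfaceB", "interfaceC"}.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface SnapshotAction {
    String[] params();
}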
In the embodiment of the invention, the interceptor based on the server can intercept the page information corresponding to the storage back end interface configured with the behavior annotation and store the page information in the server. Alternatively, the storage is a persistent storage, which is implemented using techniques that store snapshots.
In the embodiment of the invention, the cached page information and the page information audited by the auditor are the same, for example, the cached page information is the page information with the time of 202207052005, and then, after the auditor completes the audit of the page information, the page information with the time of 202207052005 is stored, so that if the audited page information needs to be traced back, the traced page information is the same as the page information audited by the auditor and is the page information at the same moment.
As an alternative embodiment of the present invention, some target pages may contain multiple paginated pages. In the embodiment of the invention, when the server receives the audit request for the target page sent by the client, the server can judge whether the target page is a paged page; if so, the cache back-end interfaces corresponding to all the pages in the target page can be determined, and the page information corresponding to the cache back-end interfaces of all the pages can be acquired and returned to the client for the auditor to audit, so that the auditor can audit the page information of all the pages in the target page.
Further, in the embodiment of the present invention, corresponding first cache annotations are configured for the cache back-end interfaces of all the pages, where each first cache annotation may include page, which indicates whether the target page is paged, so that when the server intercepts page information corresponding to the cache back-end interfaces configured with the first cache annotation, the server intercepts the page information of the cache back-end interfaces of all the pages in the target page and caches it at the server. After the auditor at the client completes auditing the page information of all the pages in the target page, an audit confirmation request indicating that the page information passes the audit or is rejected can be initiated; in response to the audit confirmation request, the server can determine the cache back-end interfaces of all the pages confirmed by the request as storage back-end interfaces and configure behavior annotations for them respectively, and the server intercepts the page information corresponding to the storage back-end interfaces of all the pages configured with the behavior annotations and stores the page information at the server. For a target page with multiple pages, corresponding page numbers, such as page 1, page 2 and so on, may be configured for each page, so that the page information of the pages may be stored according to the page correspondence during storage.
In the above exemplary embodiment, for a target page including multiple pages, page information corresponding to a cache back end interface of each page in the target page may be obtained, and corresponding first cache annotations may be configured respectively, so that page information corresponding to the cache back end interfaces of all pages may be cached.
It should be noted that, in the embodiment of the present invention, the back end interface of the target page is configured with a simple annotation, so that the server can intercept the back end interface configured with the annotation, so as to cache or store the page information corresponding to the back end interface configured with the annotation, and the original service code is not invaded and is not affected by the way of configuring the annotation.
In the embodiment of the invention, a server determines a cache back-end interface corresponding to a target page in response to an audit request of a client for the target page, acquires page information corresponding to the cache back-end interface, returns to the client for audit of the page information by an auditor, configures a first cache annotation for the cache back-end interface, caches page information corresponding to the cache back-end interface configured with the first cache annotation, and then determines the cache back-end interface as a storage back-end interface and configures behavior annotation for the storage back-end interface in response to an audit confirmation request submitted by the client for the page information corresponding to the cache back-end interface, wherein the behavior annotation is used for indicating the server to store the page information corresponding to the storage back-end interface, and stores the page information corresponding to the cache back-end interface matched with the storage back-end interface configured with the behavior annotation so as to ensure that the stored page information is the page information after the client submits the audit confirmation request. When the auditor acquires the page information of the target page, the first cache annotation can be configured on the cache back end interface corresponding to the target page, the server can intercept and cache the page information corresponding to the target back end interface configured with the first cache annotation, when the auditor audits the page information of the target page, the audited cache back end interface can be determined to be a storage back end interface, and the behavior annotation can be configured, the server can intercept the storage back end interface configured with the behavior annotation, so that the page information stored on the server is the same as the page information required to be audited by the auditor, and the retrospective page information is reliable when the auditor needs to retrospectively audit the page information.
On the basis of the above embodiments, modified embodiments of the above embodiments are proposed, and it is to be noted here that only the differences from the above embodiments are described in the modified embodiments for the sake of brevity of description.
In an exemplary embodiment, the server may be provided with a cache region, and the step 203 of caching page information corresponding to the cache back end interface configured with the first cache annotation includes:
acquiring an entry parameter corresponding to the cache back end interface from the first cache annotation, and generating a unique identifier of the cache back end interface according to the entry parameter corresponding to the cache back end interface;
according to the unique identifier of the cache back end interface, caching page information corresponding to the cache back end interface, configured with the first cache annotation, in the cache region;
the step 206 of storing page information corresponding to the cache back-end interface matched with the storage back-end interface configured with the behavior annotation includes:
acquiring an entry parameter corresponding to the storage back-end interface from the behavior annotation, and generating a unique identifier of the storage back-end interface according to the entry parameter corresponding to the storage back-end interface;
And storing page information corresponding to the unique identifier of the cache back end interface matched with the unique identifier of the storage back end interface configured with the behavior annotation in the cache region.
The unique identifier is a value, obtained based on a message digest algorithm or another encryption algorithm, that can uniquely identify a back-end interface. As an alternative example, the message digest algorithm may be the md5 algorithm, and the md5 value calculated by the md5 algorithm is the unique identifier.
In the embodiment of the invention, when the auditor acquires the page information of the target page, the entry parameters corresponding to the cache back-end interfaces are acquired from each first cache annotation of the target page, and the unique identifier of each cache back-end interface is generated according to its entry parameters; for example, the unique identifiers of the cache back-end interfaces may be md5-1 and md5-2, and the page information corresponding to the cache back-end interfaces configured with the first cache annotation is cached in the cache region. Then, when the auditor finishes auditing the page information of the target page, for the storage back-end interfaces that need to be stored, the entry parameters corresponding to the storage back-end interfaces are acquired from each behavior annotation of the target page, and the unique identifier of each storage back-end interface is generated according to its entry parameters; for example, through the unique identifiers md5-1 and md5-2 of the storage back-end interfaces, the matching page information corresponding to the unique identifiers md5-1 and md5-2 of the cache back-end interfaces can be located in the cache region, and that page information is persistently stored at the server.
Optionally, the entry parameters may be used directly as the primary key under which the corresponding page information is stored. However, if there are many entry parameters, the index value of the primary key becomes excessively large, which makes querying (including querying the page information in the cache region and querying the persistently stored page information at the server) more time-consuming. Therefore, in the embodiment of the present invention, the md5 value calculated from the entry parameters may preferably be selected as the unique identifier of the page information and used as the primary key, which reduces the time consumed in querying the page information.
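A minimal sketch of deriving the unique identifier from the entry parameters with the md5 message digest algorithm is shown below; the class and method names are illustrative and not taken from the patent.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public final class UniqueIdUtil {

    private UniqueIdUtil() {
    }

    // Concatenates the entry parameters and hashes them into a 128-bit md5 value in hex form,
    // which can then serve as the primary key for caching and storing the page information.
    public static String md5Of(Object... entryParams) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(Arrays.deepToString(entryParams).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder(32);
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 is not available", e);
        }
    }
}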
In the above exemplary embodiment, for the same cache back-end interface in the same target page, the entry parameters of the cache back-end interface (storage back-end interface) are the same before and during the audit, so the unique identifier calculated from the entry parameters can accurately and uniquely locate the corresponding page information in the cache region for storage; in addition, if the unique identifier is calculated with the md5 algorithm, the time consumed in querying the page information can be reduced.
In an exemplary embodiment, after storing page information corresponding to the cache back-end interface that matches the storage back-end interface configured with the behavior annotation in step 206, the method further includes:
Responding to a backtracking request of the client for a target page, determining a cache back end interface corresponding to the target page, and configuring a second cache annotation for the cache back end interface; the second cache annotation is used for indicating the server to acquire page information corresponding to the cache back end interface;
and acquiring page information corresponding to the storage back end interface matched with the cache back end interface configured with the second cache annotation, serving as page information corresponding to the cache back end interface, and returning to the client for backtracking at the client according to the page information of the target page.
In a specific implementation, for a target page which has completed auditing, an auditing person can trace back the target page to acquire page information corresponding to the target page when the auditing person audits.
In the embodiment of the invention, when the auditor at the client prepares to trace back a target page, a backtracking request for the target page can be initiated through the client. In response to the backtracking request of the client for the target page, the server can determine one or more cache back-end interfaces corresponding to the target page and configure a second cache annotation (SnapshotCache) for each of the cache back-end interfaces; the server then intercepts the cache back-end interfaces configured with the second cache annotation, acquires, from the stored page information, the page information corresponding to the storage back-end interfaces matched with the cache back-end interfaces, and returns it to the client for the auditor to view, thereby realizing backtracking of the target page.
The obtaining of page information corresponding to the storage back-end interface matched with the cache back-end interface configured with the second cache annotation as the page information corresponding to the cache back-end interface, and returning it to the client, includes:
obtaining an entry parameter corresponding to the cache back end interface from the second cache annotation, and generating a unique identifier of the cache back end interface according to the entry parameter corresponding to the cache back end interface;
and returning page information corresponding to the unique identifier of the storage back-end interface matched with the unique identifier of the cache back-end interface to the client.
In the embodiment of the invention, the server stores the corresponding page information according to the unique identifier calculated from the entry parameters of the storage back-end interface. Therefore, the entry parameters corresponding to the cache back-end interface configured with the second cache annotation can be acquired from the second cache annotation, the unique identifier of the cache back-end interface is generated according to those entry parameters, and then the page information corresponding to the unique identifier of the storage back-end interface matched with the unique identifier of the cache back-end interface is used as the page information corresponding to the cache back-end interface and returned to the client, so that the auditor can view the page information of the target page on the client.
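A sketch of this backtracking path is given below under the same assumptions as the earlier sketches; the map is an illustrative stand-in for the snapshot-based persistent storage, and UniqueIdUtil is the hypothetical md5 helper shown earlier.

import java.util.Map;

// Illustrative only: on a backtracking request, the unique identifier is recomputed from the
// entry parameters carried by the second cache annotation, and the persisted page information
// is returned instead of querying the back-end interface again.
public class SnapshotBacktracker {

    private final Map<String, Object> persistedPageInfo; // keyed by the unique identifier

    public SnapshotBacktracker(Map<String, Object> persistedPageInfo) {
        this.persistedPageInfo = persistedPageInfo;
    }

    public Object backtrack(Object... entryParams) {
        String key = UniqueIdUtil.md5Of(entryParams);   // same identifier as when the page information was stored
        return persistedPageInfo.get(key);              // stored page information, returned to the client
    }
}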
In the related art, when the page information of the target page stored at the server needs to be traced back, the page information of each back-end interface of the target page needs to be obtained again, and each back-end interface of the target page needs to determine and implement its own backtracking logic. In the above exemplary embodiment, by configuring the second cache annotation for the cache back-end interfaces, the stored page information matched with the cache back-end interfaces can be returned directly, without re-querying each back-end interface.
In an exemplary embodiment, the step 206 of storing page information corresponding to the cache back-end interface matched with the storage back-end interface configured with the behavior annotation includes:
acquiring a client audit identifier in the first cache annotation; the client auditing identification is correspondingly generated when the client audits the page information each time;
And storing page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation according to the client audit identifier.
The client audit identifier (order) may be a name or number, among others, for example order1, order2 and so on. The client audit identifier in the embodiment of the invention is correspondingly generated when the client audits the page information each time.
In the embodiment of the invention, the first cache annotation configured on the back end interface of the cache comprises a client audit identifier, after page information of the back end interface is stored in the server, the client audit identifier can be obtained from the first cache annotation on the back end interface of the storage, and then page information corresponding to the back end interface of the cache matched with the back end interface of the storage is stored in the server according to the client audit identifier.
Optionally, a particular audit of the target page can be traced back through the client audit identifier, that is, the corresponding page information can be obtained from the server based on the client audit identifier. If a target page is rejected, the page information pageData1 of the target page corresponding to the client audit identifier order1 is stored at the server; if the page is audited again and passes, the page information pageData2 of the target page corresponding to the client audit identifier order2 is stored at the server. In this way, when the same target page is audited multiple times, the situation where page information stored only under the unique identifier calculated from the entry parameters of the back-end interface is overwritten, so that the page information the auditor actually needs to trace back cannot be accurately obtained, can be avoided.
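The sketch below illustrates one way this could look: persisted page information is stored under a composite key of client audit identifier and unique identifier, so repeated audits of the same target page (order1, order2 and so on) do not overwrite one another. The class name and the in-memory map are illustrative stand-ins for the snapshot-based persistent storage.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SnapshotStore {

    // composite key = client audit identifier + ":" + unique identifier (md5 of the entry parameters)
    private final Map<String, Object> persisted = new ConcurrentHashMap<>();

    public void save(String order, String md5Key, Object pageInfo) {
        persisted.put(order + ":" + md5Key, pageInfo);
    }

    public Object find(String order, String md5Key) {
        return persisted.get(order + ":" + md5Key);
    }
}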
In the above exemplary embodiment, for the page information of the target page, the page information may be correspondingly stored in the server according to the client audit identifier, so that when the same target page is audited for multiple times, the page information actually desired to be traced back by the auditor may be accurately located based on the client audit identifier.
In an exemplary embodiment, after the obtaining the page information corresponding to the cache back end interface matched with the cache back end interface configured with the second cache annotation, the method may further include:
and when more than two pieces of page information with different client audit identifications and the same cache back end interface exist, marking the difference content between the page information of the cache back end interface.
In some auditing scenarios, when the auditor audits the page information of the target page, the auditor may reject the page information of the target page submitted by the user, and the user then needs to resubmit the page information to the auditor for re-audit. As a result, more than two pieces of page information of the target page may be stored at the server, and because the page information may be stored correspondingly according to the client audit identifier, in the embodiment of the invention, in order to facilitate the audit, when there are more than two pieces of page information with different client audit identifiers but the same cache back-end interface, the difference content between the pieces of page information of the cache back-end interface can be marked, so that the auditor can quickly see which page information has changed, which improves the auditing efficiency of the auditor. For example, if a picture in the listing detail data of the property listing in the target page is replaced, the picture may be marked. The manner of marking may include, but is not limited to, highlighting, framing, and the like.
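As a rough sketch of the marking step, assuming the page information of one cache back-end interface can be represented as a map of field names to values, the differing fields between two audit versions could be collected as follows and then highlighted or framed on the client; the names are illustrative only.

import java.util.HashSet;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

public final class PageInfoDiff {

    private PageInfoDiff() {
    }

    // Returns the field names whose values differ between the previously audited page information
    // and the re-submitted page information, so the difference content can be marked for the auditor.
    public static Set<String> changedFields(Map<String, Object> previous, Map<String, Object> current) {
        Set<String> allKeys = new HashSet<>(previous.keySet());
        allKeys.addAll(current.keySet());
        Set<String> changed = new HashSet<>();
        for (String key : allKeys) {
            if (!Objects.equals(previous.get(key), current.get(key))) {
                changed.add(key);
            }
        }
        return changed;
    }
}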
In the above exemplary embodiment, the client audit identifier may query a plurality of page information corresponding to the target page, for example, page information stored in the last two histories, so that the difference comparison between the re-checked page information and the last checked page information may be realized, and the audit efficiency of the auditor is improved.
In order to enable those skilled in the art to better understand the embodiments of the present invention, a specific example is described below.
Referring to FIG. 3, a timing diagram for auditing a target page according to an embodiment of the present invention is shown. Assuming that the target page to be audited corresponds to three back-end interfaces A, B and C, the server is provided with a Snapshot interceptor. The audit process mainly comprises three parts, namely an audit information checking stage, an audit confirmation stage and an audit backtracking stage. Wherein:
in the stage of checking the audit information: the front-end initiator (auditor) initiates an audit request (request) of the target page through the audit detail page, and can respectively acquire audit information (namely page information) a, audit information b and audit information C through three back-end interfaces A, B and C of the target page, and return the audit information to the audit detail page for display through response messages (response) respectively, so that the front-end initiator can audit the audit data. Meanwhile, since the server has configured annotations to the backend interfaces A, B and C, the Snapshot interceptor intercepts the audit information a, audit information b, and audit information C, and caches these audit information in the cache.
In the audit confirmation stage, the front-end initiator initiates an audit passing request (request) through the audit detail page, and the audit passing interface returns a response to the audit detail page for display. Meanwhile, since the server has configured annotations on the back-end interfaces A, B and C involved in responding to the audit passing request, the Snapshot interceptor intercepts the back-end interfaces A, B and C, acquires the audit information a, audit information b and audit information C respectively corresponding to the back-end interfaces A, B and C from the cache region, and stores the audit information persistently.
In the backtracking audit information stage, the front end initiator initiates a backtracking audit information request (request) through the audit detail page, and because the server has configured annotations for the back end interfaces A, B and C in the backtracking audit information request response, the Snapshot interceptor intercepts the back end interfaces A, B and C in the backtracking audit information request response, acquires audit information a, audit information b and audit information C respectively corresponding to the back end interfaces A, B and C from the persistently stored page information, and returns the audit information to the audit detail page for display through response messages (response) respectively.
Referring to fig. 4, a block diagram of an audit process of a target page according to an embodiment of the present invention is shown, specifically:
In the stage of checking the audit information: when the auditor checks the audit information (interface data) of the target page, the interfaces (back-end interfaces) A, B and C corresponding to the audit information of the target page are called, and a cache annotation SnapshotCache is configured for these interfaces. The SnapshotCache can comprise params, order and page, where params represents the entry parameters of the queried cache back-end interface and the params parameter set is unique; order represents a client audit identifier, through which the page information of the target page can be traced back; and page indicates whether the target page is paged. The Snapshot interceptor intercepts an interface with the SnapshotCache annotation and generates an index value (namely a unique identifier) from the entry parameters in the SnapshotCache by using the md5 algorithm; it then judges whether the target page has paginated pages, stores the page data of each page of the target page into the corresponding page-cache (paging cache region) if the target page is paged, and stores the page data of the target page into the cache (page cache region) if it is not, so that the page data of the target page is finally cached in the cache region.
In the audit confirmation stage, when the auditor confirms the audit information of the target page, the Snapshot interceptor intercepts, for the confirmed interfaces, the interfaces configured with the SnapshotAction annotation, obtains the interfaces enumerated in the SnapshotAction, acquires the interface data corresponding to these interfaces from the cache based on the enumerated interfaces, and then stores the interface data persistently.
In the stage of backtracking the audit information, when the auditor backtracks the audit information of the target page, the interfaces A, B and C corresponding to the audit information of the target page are called, and a cache annotation SnapshotCache is configured for these interfaces. The SnapshotCache can comprise params, order and page, where params represents the entry parameters of the queried cache back-end interface and the params parameter set is unique; order represents a client audit identifier, through which the page information of the target page can be traced back; and page indicates whether the target page is paged. The Snapshot interceptor intercepts the interfaces with the SnapshotCache annotation, generates an index value (namely a unique identifier) from the entry parameters in the SnapshotCache by using the md5 algorithm, and can then acquire the interface data corresponding to the interfaces A, B and C from the persistently stored interface data based on the index value.
In summary, by introducing annotations, the embodiment of the invention greatly reduces the amount of code to develop, can realize the audit, caching and storage functions through simple configuration, and optimizes the efficiency of the back-end interfaces while ensuring real-time data. In addition, the embodiment of the invention can also trace back the most recent page information through the client audit identifier, thereby realizing difference comparison between the re-audited page information and the previously audited page information and improving the auditing efficiency of auditors.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 5, a block diagram of a data storage device provided in an embodiment of the present invention is shown, where the block diagram is applied to a server, and the device may specifically include the following modules:
the cache back end interface determining module 501 is configured to determine a cache back end interface corresponding to a target page in response to an audit request of a client for the target page, obtain page information corresponding to the cache back end interface, and return to the client;
a cache annotation configuration module 502, configured to configure a first cache annotation for the cache back-end interface; the first cache annotation is used for indicating the server to cache page information corresponding to the cache back end interface;
A caching module 503, configured to cache page information corresponding to the cache back end interface configured with the first cache annotation;
the storage back-end interface determining module 504 is configured to determine, as a storage back-end interface, the cache back-end interface in response to an audit confirmation request submitted by the client for page information corresponding to the cache back-end interface;
a behavior annotation configuration module 505, configured to configure behavior annotations for the storage backend interface; the behavior annotation is used for indicating the server to store page information corresponding to the storage back-end interface;
and the storage module 506 is configured to store page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation, so as to ensure that the stored page information is the page information after the client submits the audit confirmation request.
In an exemplary embodiment, the server is provided with a cache region, and the caching module 503 is specifically configured to:
acquiring an entry parameter corresponding to the cache back end interface from the first cache annotation, and generating a unique identifier of the cache back end interface according to the entry parameter corresponding to the cache back end interface;
according to the unique identifier of the cache back end interface, caching page information corresponding to the cache back end interface configured with the first cache annotation in the cache region;
the storage module 506 is specifically configured to:
acquiring an entry parameter corresponding to the storage back-end interface from the behavior annotation, and generating a unique identifier of the storage back-end interface according to the entry parameter corresponding to the storage back-end interface;
and storing page information corresponding to the unique identifier of the cache back end interface matched with the unique identifier of the storage back end interface configured with the behavior annotation in the cache region.
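As a minimal sketch of how such a unique identifier could be derived from the entry parameters with a message digest, the helper below joins the parameter values and hashes them with md5 through the JDK's MessageDigest; the joining convention and the helper name Md5Index are assumptions made for illustration rather than details of the embodiment.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

final class Md5Index {
    private Md5Index() {
    }

    // Builds the unique identifier of a back-end interface call by hashing the
    // concatenated entry parameter values with the md5 message digest algorithm.
    static String of(Object... entryParams) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            StringBuilder joined = new StringBuilder();
            for (Object param : entryParams) {
                joined.append(param).append('|'); // separator avoids ambiguous concatenations
            }
            byte[] digest = md5.digest(joined.toString().getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 digest not available", e);
        }
    }
}

Calling Md5Index.of with the same entry parameter values always yields the same index value, which is what allows a later look-up to match the earlier cache write.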
In an exemplary embodiment, the apparatus further comprises: backtracking module for:
responding to a backtracking request of the client for a target page, determining a cache back end interface corresponding to the target page, and configuring a second cache annotation for the cache back end interface; the second cache annotation is used for indicating the server to acquire page information corresponding to the cache back end interface;
and acquiring page information corresponding to the storage back end interface matched with the cache back end interface configured with the second cache annotation, using it as the page information corresponding to the cache back end interface, and returning it to the client, so that the client can trace back the target page according to this page information.
In an exemplary embodiment, the backtracking module is specifically configured to:
obtaining an entry parameter corresponding to the cache back end interface from the second cache annotation, and generating a unique identifier of the cache back end interface according to the entry parameter corresponding to the cache back end interface;
and returning page information corresponding to the unique identifier of the storage back-end interface matched with the unique identifier of the cache back-end interface to the client.
In an exemplary embodiment, the storage module 506 is specifically configured to:
acquiring the client audit identifier from the first cache annotation; the client audit identifier is generated correspondingly each time the client audits the page information;
and storing page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation according to the client audit identifier.
In an exemplary embodiment, the apparatus further comprises: a differentiation marking module for:
and when two or more pieces of page information exist that have different client audit identifiers but the same cache back end interface, marking the differing content between those pieces of page information of the cache back end interface.
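One plausible realization of this difference marking, assuming the page information of each audit round is available as plain text lines, is the naive line-by-line comparison sketched below, which flags the lines that changed between the two client audit identifiers; the embodiment does not prescribe a particular diff algorithm, so this is illustrative only.

import java.util.ArrayList;
import java.util.List;

final class PageInfoDiff {
    private PageInfoDiff() {
    }

    // Compares the page information of two audit rounds of the same cache back-end interface
    // and returns the lines marked as unchanged ("  "), removed ("- ") or added ("+ ").
    static List<String> markDifferences(List<String> previousAudit, List<String> currentAudit) {
        List<String> marked = new ArrayList<>();
        int max = Math.max(previousAudit.size(), currentAudit.size());
        for (int i = 0; i < max; i++) {
            String before = i < previousAudit.size() ? previousAudit.get(i) : null;
            String after = i < currentAudit.size() ? currentAudit.get(i) : null;
            if (before != null && before.equals(after)) {
                marked.add("  " + before);
            } else {
                if (before != null) {
                    marked.add("- " + before);
                }
                if (after != null) {
                    marked.add("+ " + after);
                }
            }
        }
        return marked;
    }
}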
In an exemplary embodiment, the unique identification is calculated using a message digest algorithm; the storage is a persistent storage realized by adopting a storage snapshot technology; the server is provided with an interceptor through aspect-oriented programming, and the interceptor is used for intercepting the cache back-end interface configured with the first cache annotation and the second cache annotation and intercepting the storage back-end interface configured with the behavior annotation.
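The interceptor itself could be wired up with any aspect-oriented programming facility; the sketch below uses Spring AOP purely as one familiar example, since the embodiment does not name a framework, and it reuses the hypothetical SnapshotCache annotation and Md5Index helper sketched earlier in this section.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
class SnapshotInterceptorAspect {
    // Stands in for the cache region maintained on the server side.
    private final Map<String, Object> cacheArea = new ConcurrentHashMap<>();

    // Intercepts every back-end interface carrying the (hypothetical) SnapshotCache annotation,
    // derives the index value from the entry parameters, invokes the real interface, and caches
    // the returned page information under that index.
    @Around("@annotation(snapshotCache)")
    public Object cachePageInfo(ProceedingJoinPoint joinPoint, SnapshotCache snapshotCache) throws Throwable {
        String indexValue = Md5Index.of(joinPoint.getArgs()); // md5 over the entry parameters
        Object pageInfo = joinPoint.proceed();                // call the intercepted cache back-end interface
        cacheArea.put(indexValue, pageInfo);
        return pageInfo;
    }
}

An analogous @Around advice bound to the behavior annotation would intercept the storage back-end interface and trigger persistence instead of caching.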
When the auditor acquires the page information of the target page, the first cache annotation can be configured on the cache back end interface corresponding to the target page, and the server intercepts and caches the page information corresponding to the cache back end interface configured with the first cache annotation. When the auditor confirms the audit of the page information of the target page, the audited cache back end interface can be determined to be a storage back end interface and configured with the behavior annotation, and the server intercepts the storage back end interface configured with the behavior annotation and stores the corresponding page information. In this way, the page information stored on the server is the same as the page information the auditor needed to audit, so that when the auditor later needs to trace back the audited page information, the traced-back page information is reliable.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In addition, the embodiment of the invention also provides an electronic device, which comprises a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, it realizes each process of the above data storage method embodiment and can achieve the same technical effects, which are not repeated here to avoid repetition.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, realizes the processes of the above data storage method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here. Wherein the computer readable storage medium is selected from Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
Embodiments of the present invention also provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the above-described data storage method embodiments, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 600 includes, but is not limited to: radio frequency unit 601, network module 602, audio output unit 603, input unit 604, sensor 605, display unit 606, user input unit 607, interface unit 608, memory 609, processor 610, and power supply 611. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 6 is not limiting of the electronic device and that the electronic device may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. In the embodiment of the invention, the electronic equipment comprises, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used to receive and send information or signals during a call, specifically, receive downlink data from a base station, and then process the downlink data with the processor 610; and, the uplink data is transmitted to the base station. Typically, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 601 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 602, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 600. The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used for receiving audio or video signals. The input unit 604 may include a graphics processor (Graphics Processing Unit, GPU) 6041 and a microphone 6042, the graphics processor 6041 processing image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. Microphone 6042 may receive sound and can process such sound into audio data. The processed audio data may be converted into a format output that can be transmitted to the mobile communication base station via the radio frequency unit 601 in the case of a telephone call mode.
The electronic device 600 also includes at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 6061 and/or the backlight when the electronic device 600 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for recognizing the gesture of the electronic equipment (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; the sensor 605 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 606 is used to display information input by a user or information provided to the user. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. Touch panel 6071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on touch panel 6071 or thereabout using any suitable object or accessory such as a finger, stylus, or the like). The touch panel 6071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 610, and receives and executes commands sent from the processor 610. In addition, the touch panel 6071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein.
Further, the touch panel 6071 may be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 610 to determine a type of a touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although in fig. 6, the touch panel 6071 and the display panel 6061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 608 is an interface to which an external device is connected to the electronic apparatus 600. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 600 or may be used to transmit data between the electronic apparatus 600 and an external device.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a storage program area that may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 609, and calling data stored in the memory 609, thereby performing overall monitoring of the electronic device. The processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The electronic device 600 may also include a power supply 611 (e.g., a battery) for powering the various components, and preferably the power supply 611 may be logically coupled to the processor 610 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the electronic device 600 includes some functional modules, which are not shown, and will not be described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware, though in many cases the former is the preferred implementation. Based on such understanding, the part of the technical solution of the present invention that in essence contributes to the prior art may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. A data storage method, applied to a server, the method comprising:
responding to an audit request of a client for a target page, determining a cache back-end interface corresponding to the target page, acquiring page information corresponding to the cache back-end interface, and returning to the client;
configuring a first cache annotation for the cache back-end interface; the first cache annotation is used for indicating the server to cache page information corresponding to the cache back end interface;
caching page information corresponding to the cache back end interface configured with the first cache annotation;
responding to an audit confirmation request submitted by the client for page information corresponding to the cache back-end interface, and determining the cache back-end interface as a storage back-end interface;
configuring behavior annotation for the storage back-end interface; the behavior annotation is used for indicating the server to store page information corresponding to the storage back-end interface;
and storing page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation, so as to ensure that the stored page information is the page information after the client submits an audit confirmation request.
2. The method according to claim 1, wherein the server is provided with a cache region, and the caching of page information corresponding to the cache back end interface configured with the first cache annotation comprises:
acquiring an entry parameter corresponding to the cache back end interface from the first cache annotation, and generating a unique identifier of the cache back end interface according to the entry parameter corresponding to the cache back end interface;
according to the unique identifier of the cache back end interface, caching page information corresponding to the cache back end interface, configured with the first cache annotation, in the cache region;
the storing the page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation includes:
acquiring an entry parameter corresponding to the storage back-end interface from the behavior annotation, and generating a unique identifier of the storage back-end interface according to the entry parameter corresponding to the storage back-end interface;
and storing page information corresponding to the unique identifier of the cache back end interface matched with the unique identifier of the storage back end interface configured with the behavior annotation in the cache region.
3. The method of claim 1, wherein after storing page information corresponding to the cache back-end interface that matches the storage back-end interface configured with the behavior annotation, the method further comprises:
responding to a backtracking request of the client for a target page, determining a cache back end interface corresponding to the target page, and configuring a second cache annotation for the cache back end interface; the second cache annotation is used for indicating the server to acquire page information corresponding to the cache back end interface;
and acquiring page information corresponding to the storage back end interface matched with the cache back end interface configured with the second cache annotation, serving as page information corresponding to the cache back end interface, and returning to the client for backtracking at the client according to the page information of the target page.
4. The method according to claim 3, wherein the obtaining page information corresponding to the storage backend interface that matches the cache backend interface configured with the second cache annotation as page information corresponding to the cache backend interface and returning to the client includes:
Obtaining an entry parameter corresponding to the cache back end interface from the second cache annotation, and generating a unique identifier of the cache back end interface according to the entry parameter corresponding to the cache back end interface;
and returning page information corresponding to the unique identifier of the storage back-end interface matched with the unique identifier of the cache back-end interface to the client.
5. The method of claim 3, wherein the storing page information corresponding to the cache back-end interface that matches the storage back-end interface configured with the behavior annotation comprises:
acquiring a client audit identifier in the first cache annotation; the client auditing identification is correspondingly generated when the client audits the page information each time;
and storing page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation according to the client audit identifier.
6. The method of claim 5, wherein after the obtaining the page information corresponding to the cache back-end interface that matches the cache back-end interface configured with the second cache annotation as the page information corresponding to the cache back-end interface, the method further comprises:
And when more than two pieces of page information with different client audit identifications and the same cache back end interface exist, marking the difference content between the page information of the cache back end interface.
7. The method according to any one of claims 1 to 6, wherein,
the unique identification is calculated by a message digest algorithm;
the storage is a persistent storage realized by adopting a storage snapshot technology; the server is provided with an interceptor through aspect-oriented programming, and the interceptor is used for intercepting the cache back-end interface configured with the first cache annotation and the second cache annotation and intercepting the storage back-end interface configured with the behavior annotation.
8. A data storage device for application to a server, the device comprising:
the cache back end interface determining module is used for responding to an audit request of a client for a target page, determining a cache back end interface corresponding to the target page, acquiring page information corresponding to the cache back end interface and returning to the client;
the cache annotation configuration module is used for configuring a first cache annotation for the cache back-end interface; the first cache annotation is used for indicating the server to cache page information corresponding to the cache back end interface;
The caching module is used for caching page information corresponding to the cache back end interface configured with the first cache annotation;
the storage back end interface determining module is used for responding to an audit confirmation request submitted by the client for page information corresponding to the cache back end interface, and determining the cache back end interface as a storage back end interface;
the behavior annotation configuration module is used for configuring behavior annotations for the storage back-end interface; the behavior annotation is used for indicating the server to store page information corresponding to the storage back-end interface;
the storage module is used for storing page information corresponding to the cache back end interface matched with the storage back end interface configured with the behavior annotation, so as to ensure that the stored page information is the page information after the client submits the audit confirmation request.
9. An electronic device comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method according to any one of claims 1-7 when executing a program stored on a memory.
10. A computer-readable storage medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the method of any of claims 1-7.
CN202311028498.8A 2023-08-15 2023-08-15 Data storage method, device, electronic equipment and readable storage medium Active CN117056971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311028498.8A CN117056971B (en) 2023-08-15 2023-08-15 Data storage method, device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311028498.8A CN117056971B (en) 2023-08-15 2023-08-15 Data storage method, device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN117056971A true CN117056971A (en) 2023-11-14
CN117056971B CN117056971B (en) 2024-04-30

Family

ID=88662135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311028498.8A Active CN117056971B (en) 2023-08-15 2023-08-15 Data storage method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN117056971B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100030891A1 (en) * 2008-07-30 2010-02-04 Electronics And Telecommunications Research Institute Web-based traceback system and method using reverse caching proxy
US9716861B1 (en) * 2014-03-07 2017-07-25 Steelcase Inc. Method and system for facilitating collaboration sessions
CN106776909A (en) * 2016-11-28 2017-05-31 努比亚技术有限公司 The creation method and device of the page
CN108985092A (en) * 2017-06-05 2018-12-11 北京京东尚科信息技术有限公司 Submit filter method, device, electronic equipment and the storage medium of request
CN108280111A (en) * 2017-06-13 2018-07-13 广州市动景计算机科技有限公司 page processing method, device, user terminal and storage medium
CN109614559A (en) * 2018-11-16 2019-04-12 泰康保险集团股份有限公司 Data processing method and device
CN111949572A (en) * 2020-08-24 2020-11-17 海光信息技术有限公司 Page table entry merging method and device and electronic equipment
CN112035118A (en) * 2020-08-28 2020-12-04 江苏徐工信息技术股份有限公司 Method for automatically realizing interface idempotency based on annotation
CN114679763A (en) * 2020-12-24 2022-06-28 中国电信股份有限公司 NB-IoT-based base station energy consumption monitoring system, method and computer-readable storage medium
CN114035841A (en) * 2021-11-16 2022-02-11 平安健康保险股份有限公司 Interface configuration information updating method, system, computer device and storage medium
CN116451191A (en) * 2022-01-07 2023-07-18 腾讯科技(深圳)有限公司 Information auditing method, device, electronic equipment and computer readable storage medium
CN116028047A (en) * 2023-02-16 2023-04-28 浪潮软件科技有限公司 Page rapid generation method based on custom annotation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sun Chenxi: "Design and Implementation of a Catering Management System Based on the WeChat Official Account Platform", Wanfang Data Dissertation Database, 14 October 2020 (2020-10-14), pages 1-88 *
Jiang Jing; Lyu Jiangfeng; Zhang Li: "Topic Analysis of Chinese Software Question-and-Answer Communities", Journal of Software, no. 04, 15 April 2020 (2020-04-15)

Also Published As

Publication number Publication date
CN117056971B (en) 2024-04-30

Similar Documents

Publication Publication Date Title
US11290447B2 (en) Face verification method and device
CN111143005B (en) Application sharing method, electronic equipment and computer readable storage medium
WO2018161540A1 (en) Fingerprint registration method and related product
CN109271779A (en) A kind of installation packet inspection method, terminal device and server
CN116070114A (en) Data set construction method and device, electronic equipment and storage medium
CN111131607A (en) Information sharing method, electronic equipment and computer readable storage medium
CN110796552A (en) Risk prompting method and device
CN107577933B (en) Application login method and device, computer equipment and computer readable storage medium
CN111209031B (en) Data acquisition method, device, terminal equipment and storage medium
CN110225040B (en) Information processing method and terminal equipment
CN109451011B (en) Information storage method based on block chain and mobile terminal
CN117056971B (en) Data storage method, device, electronic equipment and readable storage medium
CN110599158A (en) Virtual card combination method, virtual card combination device and terminal equipment
CN115167764A (en) Data read-write processing method and device, electronic equipment and storage medium
CN110442361B (en) Gray release method and device and electronic equipment
CN109257441B (en) Wireless local area network position acquisition method and device
CN106610971A (en) Identifier determination method and apparatus for ZIP files
CN111581223A (en) Data updating method and device, terminal equipment and storage medium
CN110795701A (en) Re-signature detection method and device, terminal equipment and storage medium
CN109168154B (en) User behavior information collection method and device and mobile terminal
CN115905160B (en) Verification method and device for data storage, electronic equipment and storage medium
CN112307480B (en) Risk analysis method and device for equipment where application software is located
CN111596820B (en) Head portrait setting method and device
CN116362533A (en) Service request verification method and device, electronic equipment and storage medium
CN115509603A (en) Server-side interface processing method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant