CN116860862B - Front-end caching method of low-code platform and related equipment - Google Patents



Publication number
CN116860862B
Authority
CN
China
Prior art keywords
data
cache
editing
viewing
request
Prior art date
Legal status
Active
Application number
CN202311136627.5A
Other languages
Chinese (zh)
Other versions
CN116860862B8 (en)
CN116860862A (en)
Inventor
白杨
常嘉琪
戚雨
姜楠
Current Assignee
Beijing Better Cloud Technology Co ltd
Original Assignee
Beijing Better Cloud Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Better Cloud Technology Co ltd
Priority to CN202311136627.5A
Publication of CN116860862A
Publication of CN116860862B
Application granted
Publication of CN116860862B8
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/25 Integrating or interfacing systems involving database management systems
    • G06F16/252 Integrating or interfacing systems between a Database Management System and a front-end application
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24552 Database cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/248 Presentation of query results
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a front-end caching method for a low-code platform, together with an electronic device, a computer-readable storage medium, and a computer program product. The method comprises the following steps: acquiring a viewing request for a form page, where the viewing request includes a viewing identifier used as the basis for viewing; matching the viewing identifier against the cached data in a cache region to obtain a matching result; and, based on the matching result, obtaining a display value corresponding to the viewing request and displaying it on the form page. The cached data is obtained as follows: monitoring editing operations on the form page to obtain edit data, and updating the stored values in the cache region with the edit data according to a preset caching rule. By relying on the model capability and form-composition capability of the low-code platform, the application achieves an efficient data query process and thereby improves data query performance on a low-code platform.

Description

Front-end caching method of low-code platform and related equipment
Technical Field
The present application relates to the field of low-code platforms, and in particular, to a front-end caching method for a low-code platform, an electronic device, a computer readable storage medium, and a computer program product.
Background
Traditional development produces complex systems and requires professional IT staff to develop applications according to a defined process and division of labor. A low-code application development system lets users build business applications on their own, without relying on professional IT staff.
Current low-code development focuses on letting users develop services easily and publish them for immediate use. In the resulting applications, however, interfaces are called repeatedly while pages load, render, and respond to interaction, which makes the data query process inefficient.
Accordingly, there is a need for a front-end caching method for a low-code platform, an electronic device, a computer-readable storage medium, and a computer program product that address the problems in the prior art.
Disclosure of Invention
The application aims to provide a front-end caching method for a low-code platform, an electronic device, a computer-readable storage medium, and a computer program product, thereby solving the technical problem of inefficient data queries on a low-code platform.
The application adopts the following technical solution:
in a first aspect, the present application provides a front-end caching method for a low-code platform, where the method includes:
acquiring a viewing request for a form page, where the viewing request includes a viewing identifier used as the basis for viewing;
matching the viewing identifier against the cached data in the cache region to obtain a matching result;
based on the matching result, obtaining a display value corresponding to the viewing request, and displaying the display value on the form page;
wherein the cached data is obtained as follows:
monitoring editing operations on the form page to obtain edit data;
and updating the stored values in the cache region with the edit data according to a preset caching rule to obtain the cached data.
The beneficial effects of this technical solution are as follows: when a user initiates a viewing request for a form page, the viewing identifier in the request is obtained and used as the basis for viewing. The viewing identifier is matched against the cached data in the cache region to determine whether cached data associated with the viewing request exists. The match either succeeds or fails, and the result determines whether matching cached data is available. If matching cached data exists, processing continues to the next step; if the match fails, the relevant data can be obtained by other means. Based on the matching result, a display value corresponding to the viewing request is obtained from the cached data and shown to the user on the form page, satisfying the user's viewing needs. Meanwhile, the user's editing operations on the form page are monitored to capture the data changes produced during editing. For example, when the user modifies the value of a form field, the edit data is captured and recorded. The edit data is then compared against the stored values in the cache region according to the preset caching rule. If a stored value is inconsistent with the edit data, or has become stale, the corresponding stored value in the cache region is updated from the edit data, ensuring the accuracy and timeliness of the cached data.
First, caching reduces how often viewing requests for form pages reach the back-end server, which speeds up page loading and improves the user experience. Second, caching avoids repeated requests for the same data, reducing the number of interface requests and the load on the back-end server. Third, the cached data in the cache region can be shared: different components or pages that need the same data can read it from the shared cache, improving data reuse and efficiency. Fourth, because the data is already cached at the front end, a degree of offline browsing is possible. Caching and displaying data at the front end also reduces requests to the back-end server, lowering back-end load and improving the overall performance and stability of page viewing. Finally, the way the cached data is obtained keeps cache updates consistent, ensuring that users see the latest data when viewing a form page.
In summary, by managing and using cached data effectively, the front-end caching method speeds up page loading, reduces the number of interface requests, enables data sharing and a degree of offline browsing, relieves pressure on the back end, and improves the viewing experience. At the same time, the method monitors edit data and uses it to update the cache promptly, maintaining data accuracy and timeliness and providing a better user experience and display quality.
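As an illustration of the claimed flow, the sketch below shows edit monitoring feeding a cache region that viewing identifiers are matched against. All names (`cacheRegion`, `onFormEdit`, `matchViewId`, the field-path keys) are hypothetical, not from the patent, and the caching rule is simplified to "overwrite on every edit":

```typescript
// Illustrative types and names (not from the patent).
type EditEvent = { field: string; value: string };

// In-memory cache region, keyed by viewing identifier (a field path here).
const cacheRegion = new Map<string, string>();

// Simplified preset caching rule: whenever an edit is observed on the form
// page, overwrite the stored value so the cache reflects the latest edit.
function onFormEdit(edit: EditEvent): void {
  cacheRegion.set(edit.field, edit.value);
}

// Matching step: look a viewing identifier up in the cache region.
// Returns undefined on a miss.
function matchViewId(viewId: string): string | undefined {
  return cacheRegion.get(viewId);
}
```

A `Map` keyed by the viewing identifier keeps the hit/miss distinction explicit (`undefined` on a miss), which is what the later fallback-to-backend step branches on.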
In some possible implementations, obtaining the display value corresponding to the viewing request based on the matching result includes:
when the viewing identifier successfully matches a stored value in the cached data, using the matched stored value as the display value;
and when the viewing identifier does not match any cached data in the cache region, searching the data source of the form page's back-end server with the viewing identifier to obtain a stored value matching the viewing identifier, and using it as the display value.
The beneficial effects of this technical solution are as follows: the viewing identifier is first compared with the stored values of the cached data in the cache region; if the match succeeds, the matched stored value is used as the display value. If the viewing identifier matches no stored value in the cache region, that is, the cached data does not exist or has expired, the front end interacts with the data source of the form page's back-end server: it sends a request carrying the viewing identifier to obtain the matching stored value. On receiving the request, the back-end server retrieves data from the database according to the viewing identifier, finds the matching stored value, returns it in the response, and that value is used as the display value.
First, using a front-end cache reduces the frequency of requests to the back-end server, lowering server load and improving the response speed and overall performance of the low-code platform. Second, by caching stored values and reading the display value directly during page loading, repeated interface requests are avoided, pages load faster, and the user experience is smoother. Third, because stored values are cached at the front end, communication with the back-end server is less frequent, reducing network overhead and data transfer cost; this is especially noticeable on low-bandwidth or unstable networks.
In summary, the latest display value is obtained by matching against the cached data and, when necessary, the back-end data, which improves the response speed and overall performance of the low-code platform.
In some possible implementations, when the viewing identifier fails to match the cached data in the cache region, obtaining the cached data further includes:
updating the cached data in the cache region with the stored value obtained from the back-end server's data source.
The beneficial effects of this technical solution are as follows: when the viewing identifier matches no cached data in the cache region, a request is sent to the back-end server to obtain the stored value matching the viewing identifier. The server retrieves data from a database or other data source according to the viewing identifier and finds the matching stored value, which it returns to the front-end system as the response. On receiving the response, the front-end system updates the cached data in the cache region with the obtained stored value.
First, fetching the latest stored value from the back-end server and updating the cache keeps the cached data consistent with the back-end data, avoiding inconsistencies caused by expired or inaccurate cache entries. Second, writing the stored value back into the cache does not block the user's viewing request: the user receives the latest display value immediately, without waiting for the cache update to finish, which improves interaction and page-loading speed.
In summary, obtaining the stored value from the back-end server and updating the cache ensures data consistency, improves update efficiency, and reduces both user waiting time and network transfer cost.
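The hit/miss resolution with backend fallback and cache refresh described above can be sketched as follows. This is a simplified, synchronous sketch: `resolveDisplayValue`, `displayCache`, and `queryBackend` are illustrative names, and a real front end would make the backend query an asynchronous interface call:

```typescript
// Illustrative sketch; the cache and the backend query are simplified to
// synchronous, string-valued functions.
const displayCache = new Map<string, string>();

// Resolve a display value: on a hit, return the cached stored value; on a
// miss, query the back-end data source and refresh the cache region with
// the result so the next viewing request hits.
function resolveDisplayValue(
  viewId: string,
  queryBackend: (id: string) => string,
): string {
  const hit = displayCache.get(viewId);
  if (hit !== undefined) return hit;
  const value = queryBackend(viewId);
  displayCache.set(viewId, value); // update cached data after a miss
  return value;
}
```

Note that the value is returned to the caller before anything else depends on the cache write, matching the point above that updating the cache does not block the viewing request.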
In some possible implementations, the viewing request further includes key information used to authenticate access to the encrypted edit data. The cache region comprises several cache containers, each storing and managing different cached data, and the preset caching rule includes:
determining the data type of the edit data, the data types including business-level data and session-level data;
when the edit data is business-level data, obtaining the correspondence between the edit data and a cache container;
storing the edit data in its corresponding cache container according to that correspondence;
when the edit data is session-level data, encrypting the edit data with an encryption algorithm;
and storing the encrypted edit data in its corresponding cache container according to the correspondence between the edit data and the cache containers.
The beneficial effects of this technical solution are as follows: the viewing request carries both a viewing identifier and key information; the viewing identifier determines which data is to be viewed, while the key information provides authentication and access control for encrypted data. When the edit data is business-level data, its correspondence to a cache container is obtained from the preset caching rule and the data is stored in that container, updating the cache so that subsequent viewing requests read the latest data. When the edit data is session-level data, it is encrypted first; encryption protects the privacy and security of session-level data. Storing the encrypted edit data in its corresponding cache container keeps the cache up to date while preserving the confidentiality of session-level data.
First, encrypting session-level edit data protects its privacy and security: only users with legitimate access rights can decrypt and view it, strengthening confidentiality. Second, updating the cached data keeps what the user views and edits consistent across pages; when data changes, the cache is updated promptly, avoiding inconsistency.
In summary, the combined caching and encryption mechanism improves data access efficiency, reduces network overhead, improves the user experience, and guarantees data confidentiality and consistency. This improves the performance and security of the low-code platform and provides a better user experience and data protection.
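A minimal sketch of the preset caching rule above, with one cache container per data type. The patent does not specify the encryption algorithm, so a toy XOR cipher stands in for it here purely to make the branching testable; all names (`containers`, `applyCacheRule`, `xorCipher`) are invented for this sketch:

```typescript
type DataType = "business" | "session";
type EditData = { key: string; value: string; type: DataType };

// One cache container per data category (an assumed, minimal mapping).
const containers: Record<DataType, Map<string, string>> = {
  business: new Map(),
  session: new Map(),
};

// Toy XOR cipher standing in for the unspecified encryption algorithm;
// do not use this for real confidentiality.
function xorCipher(text: string, secret: string): string {
  return text
    .split("")
    .map((ch, i) =>
      String.fromCharCode(ch.charCodeAt(0) ^ secret.charCodeAt(i % secret.length)),
    )
    .join("");
}

// Preset caching rule: business-level edit data is stored as-is, while
// session-level edit data is encrypted before entering its container.
function applyCacheRule(edit: EditData, secret: string): void {
  const stored = edit.type === "session" ? xorCipher(edit.value, secret) : edit.value;
  containers[edit.type].set(edit.key, stored);
}
```

In a real deployment the secret would be derived from the key information in the viewing request, and a proper authenticated cipher (e.g. AES-GCM via the Web Crypto API) would replace the XOR placeholder.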
In some possible implementations, before determining the data type of the edit data, the preset caching rule further includes: verifying the user with the key information to obtain the user's role authority;
and determining the data type of the edit data comprises:
taking the data corresponding to the role authority as the edit data, and evaluating it to determine its data type.
The beneficial effects of this technical solution are as follows: when the user operates, the key information is used to verify the user and obtain the user's role authority, and different cache acquisition strategies (i.e., caching rules) are set per role. For an ordinary user, all data changes may be monitored and stored in the corresponding cache containers; for an administrator (higher role authority than an ordinary user and stricter data security requirements), only part of the data changes are monitored and stored in suitable cache containers. When determining the data type, the data corresponding to the user's role authority is taken as the edit data to be classified. In this way, the system can decide how to process and cache data for users with different role authorities.
As a result, users in different roles get personalized cache acquisition strategies, with caching and updating tailored to each role's needs, improving the flexibility and efficiency of the caching method. A role-based cache acquisition strategy provides a safer, more flexible, and more efficient data management scheme that meets the needs of different user roles while keeping data secure and confidential.
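The role-based selection of edit data can be sketched as below. The key-to-role table and the admin field whitelist are both invented stand-ins: the patent only says that key information yields a role authority and that administrators have a stricter monitoring policy:

```typescript
type Role = "user" | "admin";

// Hypothetical key-to-role table standing in for real user verification.
const keyToRole: Record<string, Role> = { "key-u1": "user", "key-a1": "admin" };

// Fields whose edits an administrator may cache (stricter policy);
// the field names are invented for illustration.
const adminCacheableFields = new Set(["title", "status"]);

// Per-role strategy: ordinary users have all monitored changes cached,
// administrators only a permitted subset.
function selectEditData(
  key: string,
  edited: Record<string, string>,
): Record<string, string> {
  const role = keyToRole[key];
  if (role === undefined) throw new Error("user verification failed");
  if (role === "user") return edited;
  const subset: Record<string, string> = {};
  for (const field of Object.keys(edited)) {
    if (adminCacheableFields.has(field)) subset[field] = edited[field];
  }
  return subset;
}
```

The result of `selectEditData` would then flow into the data-type determination and container storage of the preset caching rule.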
In some possible implementations, the method further includes:
obtaining a data expiration policy, and deleting expired data in the cache region according to the policy;
wherein the data expiration policy comprises:
obtaining the preset clearing period corresponding to each piece of cached data;
and, for each piece of cached data, deleting it from the cache region according to its preset clearing period.
The beneficial effects of this technical solution are as follows: a data expiration policy is introduced into the front-end caching method to delete expired data from the cache region, ensuring that cached data stays up to date and valid. The preset clearing period for each piece of cached data can be understood as how long it may remain in the cache region; beyond that duration the data is considered stale. For each piece of cached data, the policy checks whether its clearing period has elapsed; if so, the data is deleted from the cache region. Deleting stale data frees cache space and keeps the cache fresh: the next time the data is needed, the latest version is fetched from the back-end server and re-cached, ensuring timeliness and accuracy.
First, deleting expired data keeps the cache region as current as possible; clearing stale entries preserves the accuracy of cached data and prevents stale data from being used. Second, periodic cleanup frees cache space and stops the cache from monopolizing storage resources, which matters most when cache capacity is limited and data volume is large, and improves cache utilization. Third, removing expired data promptly reduces invalid entries in the cache region and improves cache access efficiency: keeping only the latest, valid data speeds up reading and display. Finally, the preset clearing period gives flexible control over how long each piece of data is kept; different clearing periods can be set for different cached data according to business needs and update frequency, so that retention matches each datum's validity period.
In summary, the data expiration policy and its cleanup maintain the accuracy and freshness of cached data, save cache space, and improve cache access efficiency while reducing pressure on the back-end server, which improves the performance and resource utilization of the low-code platform and provides better user experience and system reliability.
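A minimal sketch of the per-entry clearing period, assuming each cache entry carries its own time-to-live; the names (`ttlCache`, `putWithPeriod`, `purgeExpired`) and the explicit `now` clock parameter are illustrative choices, not from the patent:

```typescript
// Each entry carries its own preset clearing period (a TTL), as described.
type Entry = { value: string; storedAt: number; clearPeriodMs: number };
const ttlCache = new Map<string, Entry>();

function putWithPeriod(key: string, value: string, clearPeriodMs: number, now: number): void {
  ttlCache.set(key, { value, storedAt: now, clearPeriodMs });
}

// Data expiration policy: delete every entry older than its clearing period.
function purgeExpired(now: number): void {
  const expired: string[] = [];
  ttlCache.forEach((e, key) => {
    if (now - e.storedAt >= e.clearPeriodMs) expired.push(key);
  });
  expired.forEach((key) => ttlCache.delete(key));
}
```

Passing `now` explicitly (rather than calling `Date.now()` inside) keeps the policy deterministic and testable; in production `purgeExpired(Date.now())` would run on a timer.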
In some possible implementations, deleting cached data from the cache region according to its preset clearing period includes:
deleting from the cache region any cached data that has not been viewed within its preset clearing period.
The beneficial effects of this technical solution are as follows: the data expiration policy deletes cached data that has not been viewed within its preset clearing period, keeping the cache region current and valid. Each piece of cached data is checked against its clearing period: if it was not viewed during that period, it is deleted from the cache. Removing unviewed, expired data frees cache space and ensures that only data actually in use remains in the cache region, improving cache efficiency and performance. Clearing unviewed stale data thus releases storage in the cache region, reduces resource usage, saves cost, and increases the storage capacity available to the whole system.
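The "delete if not viewed within the period" variant differs from a plain TTL in that reading an entry resets its clock. A sketch under the same illustrative conventions (explicit `now` parameter; `viewedCache`, `viewEntry`, and `purgeUnviewed` are invented names):

```typescript
// Entries record when they were last viewed; viewing refreshes the clock.
type Tracked = { value: string; lastViewedAt: number; clearPeriodMs: number };
const viewedCache = new Map<string, Tracked>();

// A read counts as a "check" and refreshes the entry's last-view time.
function viewEntry(key: string, now: number): string | undefined {
  const e = viewedCache.get(key);
  if (e === undefined) return undefined;
  e.lastViewedAt = now;
  return e.value;
}

// Evict entries that were not viewed at all during their clearing period.
function purgeUnviewed(now: number): void {
  const stale: string[] = [];
  viewedCache.forEach((e, key) => {
    if (now - e.lastViewedAt >= e.clearPeriodMs) stale.push(key);
  });
  stale.forEach((key) => viewedCache.delete(key));
}
```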
In a second aspect, the present application also provides an electronic device comprising a memory storing a computer program and at least one processor configured to implement the following steps when executing the computer program:
acquiring a viewing request for a form page, where the viewing request includes a viewing identifier used as the basis for viewing;
matching the viewing identifier against the cached data in the cache region to obtain a matching result;
based on the matching result, obtaining a display value corresponding to the viewing request, and displaying the display value on the form page;
wherein the cached data is obtained as follows:
monitoring editing operations on the form page to obtain edit data;
and updating the stored values in the cache region with the edit data according to a preset caching rule to obtain the cached data.
In a third aspect, the application also provides a computer-readable storage medium storing a computer program which, when executed by at least one processor, performs the steps of any of the methods of the first aspect, or performs the functions of the electronic device of the second aspect.
In a fourth aspect, the application also provides a computer program product comprising a computer program which, when executed by at least one processor, performs the steps of any of the methods of the first aspect, or performs the functions of the electronic device of the second aspect.
Drawings
The application is further described below with reference to the drawings and the detailed description.
Fig. 1 is a flowchart of a front-end caching method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of obtaining cache data according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of a preset caching rule according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of a data expiration policy according to an embodiment of the present application.
Fig. 5 is a block diagram of an electronic device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a computer program product according to an embodiment of the present application.
Detailed Description
The technical solution of the present application is described below with reference to the drawings and specific embodiments; note that, where no conflict arises, the embodiments or technical features described below may be combined in any way to form new embodiments.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any implementation or design described as "exemplary" or "e.g." in the examples of this application should not be construed as preferred or advantageous over other implementations or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Terms such as "first" and "second" in the embodiments of the present application are used only to distinguish the objects being described; they imply no ordering, place no particular limit on quantity, and should not be read as limiting the embodiments in any way.
The technical field and related terms of the embodiments of the present application are briefly described below.
Low-code platform: a SaaS cloud platform on which a designer rapidly builds business logic in the cloud by drag-and-drop and then publishes the result to end users; the platform may be shared by multiple tenants.
SaaS cloud platform: a software delivery model built on cloud computing in which software is provided to customers as a service accessed over the internet. This model, SaaS (Software as a Service), helps enterprises cut costs, improve efficiency, and better meet customer needs. In the SaaS model, the software provider develops, maintains, and hosts the application, while users access it over the internet without installing or managing infrastructure locally. SaaS applications typically use a multi-tenant architecture, i.e., multiple users share the same application instance, and require no local installation or configuration; a user only needs a supported device and a network connection. Many kinds of SaaS applications are available, covering business needs such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Resource Management (HRM), project management, and collaborative office work. By offering convenience and flexibility and relieving the burden of software deployment and management, the SaaS model is widely used across industries and organizations.
Back-end server (Backend Server): a software system or service running on the server side, responsible for handling and managing data, business logic, persistent storage, and other back-end tasks. Back-end servers are typically built on specific languages and frameworks such as Java, Python, or Node.js. They expose interfaces and services for the front-end system to call and interact with databases, file systems, and the like, focusing on data processing, business logic, data storage, and processing performance.
Front-end system (Frontend System): the user interface and interaction layer running on the client (typically a browser), responsible for presenting the interface, responding to user input, and communicating with the back-end server. Front-end systems are typically built with HTML, CSS, and JavaScript, fetching data through the back-end server's interfaces and presenting it. The front end focuses on the user interface, the interaction experience, and direct interaction with the user.
An interface: a back-end developer builds a service and provides an access address (URL); the front end sends requests to that address and obtains the returned data. Interfaces typically communicate over standard HTTP. The front end can send HTTP requests to the interface address using a network request library or the browser's built-in Fetch API, organizing request parameters and setting request headers according to the interface's definition and specification. On receiving a request, the back end processes it according to its type and parameters and generates the data to return, which may be packaged in formats such as JSON, XML, or HTML. Through the interface, the front end obtains back-end data for display, processing, or further operations. The interface's design and its returned data structure are usually defined and implemented by back-end developers according to business requirements, and front-end developers call the interface and parse its results according to the interface documentation and specification. In short, the interface is the bridge between front end and back end: it provides a standardized way for the two to interact and together accomplish data acquisition, transmission, and processing.
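The request-building side of such an interface call can be sketched as below. The endpoint path `/api/form-data` and the `viewId` query parameter are invented for this sketch; only the URL/headers assembly is shown so the actual network call stays out of the way:

```typescript
// Assemble the URL and init object that would be handed to fetch(); the
// endpoint path and the viewId query parameter are invented for this sketch.
function buildInterfaceRequest(baseUrl: string, viewId: string) {
  const url = `${baseUrl}/api/form-data?viewId=${encodeURIComponent(viewId)}`;
  return {
    url,
    init: {
      method: "GET",
      headers: { Accept: "application/json" }, // expect a JSON payload back
    },
  };
}

// Usage (browser): const { url, init } = buildInterfaceRequest(base, id);
// fetch(url, init).then((res) => res.json()).then(render);
```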
In the technical scheme of the application, the back-end server is responsible for providing a data source, receiving a request from the front-end system, retrieving data from the data source according to request parameters, and returning the result to the front-end system. The front-end system is responsible for initiating a viewing request, communicating with the back-end server, and displaying pages according to the returned data. The two are communicated through the interface to jointly complete the data acquisition and display work.
Software applications generated by related low-code platforms often encounter problems caused by repeated calls to interfaces when pages are loaded, displayed, and interacted with, for example: the number of interface requests increases, so the pressure on the server increases; page performance is reduced and page data takes a long time to load; data is not shared between pages and components, which is inconvenient for the user; and data back-display during secondary development on the low-code platform increases the difficulty of obtaining data.
Based on this, in order to solve the problem of low efficiency in the data query process, the present application proposes a front-end caching method for a low-code platform, an electronic device, a computer-readable storage medium, and a computer program product. The technical solution of the embodiments of the present application and how it solves the foregoing technical problem are described in detail below in conjunction with the accompanying drawings. It should be noted that any combination of the embodiments or technical features described below may be used to form a new embodiment, and the same or similar concepts or processes may not be repeated in some embodiments. It will be apparent that the described embodiments are some, but not all, of the embodiments of the application.
Method embodiment
Referring to fig. 1 and fig. 2, fig. 1 is a schematic flow chart of a front-end caching method provided by an embodiment of the present application, and fig. 2 is a schematic flow chart of obtaining cached data provided by an embodiment of the present application.
The embodiment of the application provides a front-end caching method for a low-code platform, which comprises the following steps:
step S101: acquiring a viewing request of a form page, wherein the viewing request comprises a viewing identifier used as a viewing basis;
step S102: matching the viewing identifier with the cache data in the cache region to obtain a matching result;
step S103: and based on the matching result, acquiring a display value corresponding to the viewing request, and displaying the display value by using the form page.
The method for obtaining the cache data comprises the following steps:
step S201: monitoring the editing operation of the form page to obtain editing data;
step S202: and updating the storage value in the cache region by utilizing the editing data according to a preset cache rule to obtain the cache data.
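The steps above can be sketched in simplified form, assuming (purely for illustration) that the cache region is an in-memory Map keyed by the viewing identifier and that `fetchFromBackend` stands in for the back-end interface call:

```javascript
// Minimal sketch of steps S101–S103 and S201–S202 (an assumption-laden
// illustration, not the claimed implementation).
const cacheRegion = new Map();

// S101–S103: resolve a viewing request to a display value.
function handleViewRequest(viewId, fetchFromBackend) {
  if (cacheRegion.has(viewId)) {          // S102: match against cached data
    return cacheRegion.get(viewId);       // S103: cache hit → display value
  }
  const value = fetchFromBackend(viewId); // cache miss → obtain by other means
  cacheRegion.set(viewId, value);
  return value;
}

// S201–S202: an edit listener updates the stored value in the cache region.
function onFormEdit(viewId, editedValue) {
  cacheRegion.set(viewId, editedValue);
}
```

A first view request falls through to the back end; after an edit is monitored, subsequent view requests receive the updated cached value.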
Thus, when a user initiates a viewing request for a form page, a viewing identifier in the request is acquired and used as the viewing basis. The viewing identifier is matched against the cached data in the cache region to determine whether cached data associated with the viewing request exists. The matching result is either a successful match or an unsuccessful match. If successfully matched cache data exists, the next step continues; if the match fails, the relevant data may be obtained by other means. Based on the matching result, a display value corresponding to the viewing request is obtained from the cache data and displayed to the user on the form page, so as to meet the user's need to view the form page. Meanwhile, the user's editing operations on the form page are monitored so as to capture data changes produced during editing. For example, when the user modifies the value of a form field, this edit data is captured and recorded. The edit data is compared and matched against the stored values in the cache region according to a preset cache rule. If a stored value is inconsistent with the edit data or is outdated, the corresponding stored value in the cache region can be updated from the edit data, thereby ensuring the accuracy and real-time validity of the cached data.
On the one hand, caching reduces the frequency with which viewing requests for form pages reach the back-end server, thereby accelerating page loading and improving the user experience. On another hand, data caching avoids repeatedly requesting the same data, reduces the number of interface requests, and relieves the load pressure on the back-end server. On yet another hand, by arranging a cache region, its cached data can be shared: different components or pages can obtain the same data from the shared cache region, which improves data reusability and efficiency. In a further aspect, a degree of offline browsing capability is provided because the data is already cached at the front end. Meanwhile, by caching and displaying data at the front end, requests to the back-end server are reduced, the load pressure on the back end is lowered, and the overall performance and stability of the page during viewing are improved. In the method for acquiring the cache data, the consistency of cache data updates is maintained, ensuring that the user obtains the latest data when viewing the form page.
In summary, by effectively managing and utilizing the cached data, the front-end caching method improves the page loading speed, reduces the interface request times, realizes high efficiency of the processes of sharing data and offline browsing capability, simultaneously reduces the pressure of the rear end and improves the user experience during viewing. Meanwhile, the method monitors the editing data, updates the cache data by using the editing data, and updates the cache data in time, so that the accuracy and instantaneity of the data are maintained, and better user experience and data display effect are provided.
In the embodiment of the application, the user can be a technician such as a software developer, a system architect, a data engineer, a business analyst and the like, and can be a user who finally performs data query.
To facilitate understanding, an example is provided: a report generation form for a low code platform, comprising the following fields:
report name: for entering the name of the report.
Report type: for selecting the type of report, such as sales report, financial report, etc.
Reporting date: for selecting the date of the report.
Report content: for entering details of the report.
The user initiates a viewing request for the report form page; the request comprises a viewing identifier, and when this viewing identifier is successfully matched with the viewing identifier of certain report data in the cache region, a corresponding display value is obtained from that report data.
For example, the user requests to view the report with ID "123", and matching report data with report ID "123" exists in the cache region; the display value is then obtained from that report data, for example: the report name "sales report 2023", the report type "sales report", the report date "2000-01-01", and the report content "the present month sales data statistics are as follows".
And displaying the acquired display value on a report form page, wherein the user can directly see the name, type, date and content of the report.
Meanwhile, in order to update the cache data, the editing operation of the user on the report form page can be monitored to acquire the editing data.
Assuming that the user modifies the report name to "sales report 2023 half-year statistics" on the report form page, this edit data can be monitored and obtained. According to the preset caching rule, the stored report-name value of the report data with report ID "123" is updated from the edit data to "sales report 2023 half-year statistics".
Through the above examples, it can be understood that the present embodiment organically combines the technical schemes of acquiring the viewing request, matching the cache data, acquiring the display value based on the matching result, displaying the form page by using the display value, and updating the cache data by monitoring the editing operation and the preset cache rule, so as to improve the loading speed and response performance of the form page, reduce unnecessary interface requests, and improve the user experience.
In Web applications, a user may send an HTTP request through a browser to view a form page. In an API-based application, a user may send a view request by calling a particular API interface, which may receive a view identifier as a parameter, and return corresponding data for presentation.
The present embodiment does not limit the form of the viewing identifier, which is, for example, a character string, a numerical value, a symbol, a code, or a combination of two or more of them. The character string may take the form of a unique identifier, ID, code, etc.; for example, the viewing identifier of a form may be a specific UUID (Universally Unique Identifier) string or the primary key of a database table. The viewing identifier may also be of a numeric type, such as a self-incrementing ID.
The preset caching rules refer to a set of predefined rules or strategies, and are used for guiding the management and updating of cache data in the front-end caching method. It can be determined under what conditions, and how to store, update and purge the cached data. The purpose of the preset caching rules is to provide a flexible and configurable way to manage the cached data according to specific needs and business scenarios. Through presetting a caching rule, caching strategies of different types of data can be defined, wherein the caching strategies comprise one or more of a storage position of a cache, an effective period of the cache and an updating mode of the cache.
The specific content of the preset caching rules can be defined according to actual requirements, for example, the storage location rules are used for defining in which caching container different types of data should be stored, so that data grouping management is achieved. For example, an expiration rule is used to specify an expiration of cached data, and data exceeding the expiration will be considered as expired data and purged or updated. For example, update style rules are used to specify when and how to update cached data, such as data updates based on editing operations or periodic refresh mechanisms. Also for example, the purging policy rules are used to define policies for purging cached data according to different conditions, which may be based on data usage frequency, space constraints, or other business rules.
It can be considered that the use and management of the cache can be optimized by using the preset cache rule provided in the embodiment for the front-end cache of the low-code platform. The request for the back-end server can be effectively reduced by reasonably defining and applying the preset cache rule, and faster and more efficient data display and interaction experience can be provided.
In some embodiments, the preset cache rule may further set a capacity limit for the cache region; when the cached data exceeds the set capacity, data eviction is performed according to a certain policy to free storage space. An eviction policy such as first-in first-out (FIFO) or least recently used (LRU) may be employed to manage the cache capacity, ensuring that the cache region does not over-expand or occupy excessive system resources.
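An LRU eviction policy of the kind mentioned here can be sketched using the insertion order of a JavaScript Map (re-inserting a key on access moves it to the most-recently-used end); this is an illustrative sketch, not the claimed implementation:

```javascript
// Sketch of a capacity-limited cache region with LRU eviction.
class LruCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);     // refresh recency by re-inserting
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // evict the least recently used entry (first key in insertion order)
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

A FIFO variant would simply omit the recency refresh in `get`, evicting in pure insertion order.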
In some embodiments, the obtaining, based on the matching result, a display value corresponding to the view request (i.e. step S103) may include:
when the viewing identifier is successfully matched with a stored value of the cache data, taking the successfully matched stored value as the display value;
and when the viewing identifier is not successfully matched with the cache data in the cache region, searching the data source of the back-end server of the form page according to the viewing identifier to acquire a stored value matched with the viewing identifier and serve as the display value.
Thus, the viewing identifier is first compared with the stored values of the cache data in the cache region; if the matching is successful, the successfully matched stored value is taken as the display value. If the viewing identifier does not match the cache data in the cache region, that is, the cache data does not exist or is outdated, the data source of the back-end server of the form page can be queried according to the viewing identifier: a request carrying the viewing identifier is sent to the back-end server to acquire the stored value matched with it. After receiving the request, the back-end server retrieves data from the database according to the viewing identifier, finds the stored value matched with the viewing identifier, returns it in the response, and the obtained stored value is taken as the display value.
On the one hand, by using the front-end cache, frequent requests to the back-end server can be reduced, so that the load of the server is reduced, and the response speed and the overall performance of the low-code platform are improved. On the other hand, by caching the stored value and directly acquiring the display value during page loading, repeated interface requests can be avoided, the page loading speed is increased, and faster and smoother user experience is provided. On the other hand, as the stored value is cached at the front end, the communication frequency with the back end server is reduced, the network overhead and the data transmission cost are reduced, and the effect is more remarkable especially in a low-bandwidth or unstable network environment.
In summary, the latest display value is obtained by matching the cache data and the back-end data, so that the response speed and the overall performance of the low-code platform are improved.
As one example, the item detail page on one low code platform contains the following fields:
commodity ID: a unique ID for identifying the merchandise. Trade name: for displaying the names of the goods. Commodity price: for displaying the price of the commodity. Description of goods: for displaying descriptive information of the goods.
The user initiates a viewing request for the item detail page, the viewing request including a unique viewing identifier, such as the ID of the item. The cache region contains previously stored commodity data, and each commodity data item contains a unique viewing identifier and a corresponding stored value. The user's viewing identifier is matched against the commodity data in the cache region to obtain a matching result. When the viewing identifier is successfully matched with the viewing identifier of certain commodity data in the cache region, the successfully matched stored value is taken as the display value. The process is as follows:
the user requests to view the commodity with commodity ID "123", and matching commodity data exists in the cache, with commodity ID "123" and stored value (name: "mobile phone generation 3", price: 8999, description: "latest mobile phone"). The stored value in the commodity data, namely (name: "mobile phone generation 3", price: 8999, description: "latest mobile phone"), is obtained and used as the display value. The display value is applied to the corresponding fields of the commodity detail page, and the user can directly see the name, price, and description of the commodity.
As another example, unlike the previous example, the view identification does not successfully match the merchandise data in the buffer. In this case, it is necessary to acquire a stored value matching the viewing identifier from the data source of the backend server as a display value. The method comprises the following steps:
the user requests to view the merchandise with the merchandise ID "456", but there is no merchandise data in the buffer that matches it. And using the viewing identification '456' to initiate a request to a back-end server, and acquiring a stored value with the commodity ID of '456'. The backend server returns a stored value (name: "tablet 1 generation", price: 1299, description: "latest tablet"). The stored value is used as a display value and applied to the corresponding field of the commodity detail page, and the user can see the name, price and description information of the commodity.
By the above two examples, it can be understood that when matching is successful, the stored value is directly obtained from the cache as the display value; when the matching is unsuccessful, the corresponding stored value is obtained as a display value by requesting the back-end server. Therefore, the request to the back-end server can be reduced to a certain extent, and the page loading speed and the user experience are improved.
Where the cached data is typically stored in the form of key-value pairs, where a key represents a unique identifier of the data and a value represents the specific content of the data. The view identifier is compared to the key of the cached data to determine if there is matching cached data. If matched cache data exist, namely the key of the check mark and the cache data is successfully matched, the corresponding storage value can be obtained through the key and used as a display value for displaying. The embodiment does not limit the specific implementation of the matching process. For example, a hash table, associative array, or other data structure may be used to store the cached data, which is then looked up and matched according to the key value.
In some embodiments, when the matching between the viewing identifier and the cached data in the cache area is unsuccessful, the method for obtaining the cached data further includes:
and updating the cache data in the cache area by using the storage value acquired from the data source of the back-end server.
Thus, when the viewing identifier does not match the cached data in the cache region, a request is sent to the back-end server to obtain a stored value that matches the viewing identifier. The back-end server retrieves data from a database or other data source based on the viewing identifier, finds a stored value that matches it, and returns the acquired stored value as a response to the front-end system. After receiving the response of the back-end server, the front-end system updates the cache data in the cache region with the acquired stored value.
On the one hand, the latest storage value is obtained from the back-end server, and the cache data is updated, so that the consistency of the cache data and the back-end data is ensured, and the problem of inconsistent data caused by expiration or inaccuracy of the cache data can be avoided. On the other hand, the update of the stored value to the buffer data does not affect the viewing request progress of the user, and the user can immediately obtain the latest display value without waiting for the completion of data update, thereby being beneficial to improving the interaction experience and page loading speed of the user.
In summary, by acquiring the storage value from the back-end server and updating the cache data, the consistency of the data is ensured, the data updating efficiency is improved, and the waiting time of the user and the network transmission cost are reduced.
When the viewing identifier is not successfully matched with the cache data in the cache region, the cache data in the cache region is updated with a stored value acquired from the data source of the back-end server. The request for data retrieval from the back-end server may take the form of an HTTP request, for example using the GET or POST method, passing the viewing identifier to the back-end server as a request parameter. After receiving the request, the back-end server queries a database or other data source according to the viewing identifier and acquires the corresponding stored value.
In this embodiment, updating the cache data refers to updating the latest storage value obtained from the back-end server to the corresponding data item or key value pair in the cache region by obtaining the latest storage value from the data source of the back-end server when the data in the cache region is added or changed, so that the cache data and the data in the back-end server can be ensured to keep synchronous and consistent.
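The miss-then-update path can be sketched as follows, reusing the commodity example above; `backend` is a stand-in object for the real back-end data source (in practice this would be an HTTP GET carrying the viewing identifier):

```javascript
// Sketch of the cache-miss path: when the viewing identifier is not in
// the cache region, query the back end and write the result back so
// subsequent requests hit the cache. `backend` stands in for the data
// source of the back-end server.
const cache = new Map();
const backend = { '456': { name: 'tablet 1 generation', price: 1299 } };

function getDisplayValue(viewId) {
  if (cache.has(viewId)) return cache.get(viewId);     // cache hit
  const stored = backend[viewId];                      // e.g. HTTP GET in practice
  if (stored !== undefined) cache.set(viewId, stored); // update the cache region
  return stored;
}
```

After the first lookup of "456", the stored value lives in the cache region, keeping it synchronous with the back end until edited or expired.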
Referring to fig. 3, fig. 3 is a flowchart illustrating a preset caching rule according to an embodiment of the present application.
In some embodiments, the viewing request further includes key information for authenticating the encrypted edit data; the cache region comprises a plurality of cache containers, each cache container being used for storing and managing different cache data, and the preset cache rule comprises:
step S301: judging the data type of the editing data, wherein the data type comprises service level data and session level data;
step S302: when the data type of the editing data is business grade data, obtaining the corresponding relation between the editing data and a cache container;
step S303: storing the editing data into a cache container corresponding to the editing data according to the corresponding relation;
step S304: encrypting the editing data by using an encryption algorithm when the data type of the editing data is session-level data;
step S305: and storing the encrypted editing data into a cache container corresponding to the editing data according to the corresponding relation between the editing data and the cache container.
Thus, when the viewing request includes the viewing identifier and the key information, the data to be viewed can be determined by the viewing identifier, and the encrypted data is authenticated and access controlled by the key information. When the data type of the editing data is business-level data, the corresponding relation between the editing data and the cache container is obtained according to a preset cache rule, and the editing data is stored in the corresponding cache container to update the cache data, so that the latest data can be obtained from the cache by a subsequent viewing request. When the data type of the editing data is session-level data, the editing data is encrypted by utilizing an encryption algorithm, and the encrypted data can protect the privacy and safety of the session-level data. According to the corresponding relation between the editing data and the cache container, the encrypted editing data is stored in the corresponding cache container, so that the confidentiality of the session-level data can be ensured while the cache data can be updated.
On the one hand, for session-level data, the editing data is encrypted through an encryption algorithm, so that the privacy and the security of the data can be protected. Only users with legal access rights can decrypt and view the data, thereby enhancing the confidentiality of the data. On the other hand, by updating the cached data, it can be ensured that the data viewed and edited by the user on different pages remains consistent. When the data changes, the cache can be updated in time, so that the problem of inconsistency of the data can be avoided.
In summary, by using the caching and encryption mechanism, the data access efficiency can be improved, the network overhead can be reduced, the user experience can be improved, and the confidentiality and consistency of the data can be ensured. This helps to improve the performance and security of the low code platform, providing better user experience and data protection mechanisms.
The manner in which the editing data is encrypted in this embodiment is not limited, and is, for example, a symmetric encryption algorithm or an asymmetric encryption algorithm. The viewing request includes key information for viewing the edit data within its scope of authority.
The present embodiment does not limit the implementation of the correspondence between the edit data and the cache container. Editing the correspondence between data and cache containers can be achieved by:
and setting a mapping table, recording the relation between each editing data and the corresponding cache container, wherein the mapping relation can be represented by using a key value pair mode, wherein a key represents a unique identification of the editing data, and a value represents the corresponding cache container. When the editing data is required to be stored, searching a corresponding cache container according to the type and the specific attribute of the editing data, and storing the data in the corresponding container.
It can also be realized by the following ways: the edit data is associated with the cache container using an indexing mechanism. Each edit data may be assigned a unique index value, which may be a number, string, or other form of identifier. Meanwhile, a corresponding index is created for each cache container, and the index value is associated with the corresponding cache container. When the editing data is required to be stored, searching a corresponding cache container according to the index value of the editing data, and storing the data in the corresponding container.
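The mapping-table approach can be sketched as a plain key-value map from data type to cache container name; the type lists follow the example given below in this embodiment (simple types in $dataFieldCache, complex types in dataShareCache), and the default fallback is an assumption for illustration:

```javascript
// Sketch of a mapping table: edit-data field type → cache container name.
const containerByType = {
  shortText: '$dataFieldCache', longText: '$dataFieldCache',
  number: '$dataFieldCache', date: '$dataFieldCache',
  picture: 'dataShareCache', attachment: 'dataShareCache',
  person: 'dataShareCache', department: 'dataShareCache',
};

function containerFor(fieldType) {
  // default to the simple-type container for unknown types (an assumption)
  return containerByType[fieldType] ?? '$dataFieldCache';
}
```

When edit data is to be stored, `containerFor` selects the container, after which the data is written under the corresponding index.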
As one example, simple data types are stored into $dataFieldCache on the browser window object; simple data types are, for example, short text, long text, numerical values, dates, and auto numbers. Complex data types are stored in the dataShareCache object of sessionStorage; complex data types are, for example, pictures, attachments, people, departments, and arrays. It will be appreciated that two different cache containers are provided, respectively $dataFieldCache and dataShareCache, for achieving relative cache independence between multiple pages (i.e., tab pages). The code corresponding to this storage scheme may be as follows:
window {
    $dataFieldCache {
        pageUrl: {
            dataCode: {
                fieldCode: {
                    value: displayValue
                }
            }
        }
    }
}
sessionStorage {
    dataShareCache {
        dataCode: {
            fieldType: {
                value: displayValue
            }
        }
    }
}
In the $dataFieldCache cache container, pageUrl is used to represent the URL address of a page and serves as an index of the cache data; dataCode is used to represent the identifier of the data, which can be the unique identifier of a field or another data identifier; fieldCode is used to represent the identifier of the field; value is used to represent the stored value of the field; displayValue is used to represent the display value of the field. It can be considered that this cache container is used to cache the stored values and display values of fields during page loading or editing operations, so that the data can be quickly acquired and displayed in subsequent operations. By using the page URL, data identifier, and field identifier as indexes, fast access to and updating of specific field data can be achieved.
In the dataShareCache cache container, dataCode is used to represent the identifier of the data, which can be the unique identifier of a field or another data identifier; fieldType is used to represent the type of the field; value is used to represent the stored value of the field; displayValue is used to represent the display value of the field. The purpose of this cache container is to store shared data that can be shared and accessed by multiple pages or components. By using the data identifier and field type as an index, fast access to and updating of shared data can be achieved.
The above code defines the data structures of the two types of cache containers, used to store and manage field data and shared data in a page. In this way, the response speed of the page can be improved, the number of requests to the back-end server reduced, and sharing and cross-page access of data achieved. The number of interfaces requested by a page can therefore be greatly reduced, and components of data types such as pictures, attachments, people, and departments can realize cross-page data sharing between different pages. For example, when a code editing page is secondarily developed on the low-code platform, no interface or service needs to be called, and the display value is acquired by querying the cache.
In some embodiments, before the determining the data type of the edit data, the preset caching rule further includes: user authentication is carried out through the key information so as to obtain role authority of the user;
The judging of the data type of the edit data (i.e., S301) includes:
and taking the data corresponding to the role authority as editing data and judging the editing data to determine the data type of the editing data.
When the user operates, user verification is performed through the key information to obtain the role authority of the user, and different cache data acquisition strategies are set according to that role authority. For example, for an ordinary user, all data changes can be monitored and stored in the corresponding cache containers, whereas for an administrator user (whose role authority is higher than that of an ordinary user and whose data security requirements are higher), only part of the data changes are monitored and stored in an appropriate cache container. When judging the data type of the edit data, the data corresponding to the user's role authority is taken as the edit data and used to determine the data type. In this way, it can be determined how to process and cache data for users with different role authorities.
Therefore, users with different roles can enjoy personalized cache data acquisition strategies, data caching and updating are carried out according to the role demands, and the flexibility and the efficiency of the caching method are improved.
The cache data acquisition strategy based on the user role authority can provide a safer, more flexible and more efficient data management scheme, meet the requirements of different user roles and ensure the safety and confidentiality of data.
Wherein the role rights are used to indicate a specific rights level or scope of functionality possessed by the user. Each user may be assigned one or more roles, each defining operations that the user may perform, resources that are accessed, and functions that may be used. The purpose of the role rights is to limit its access and manipulation to sensitive or confidential information according to the user's duties and needs. Different roles may have different levels of authority, such as normal users, administrators, superadministrators, etc. Security and confidentiality of data can be ensured through allocation and management of role rights. The user can only perform operations that match their role rights and cannot override access to sensitive data. This helps prevent unauthorized users from maliciously manipulating or accessing sensitive information.
Therefore, according to the role authority of the user, the access authority range of the user can be determined, so that when the front-end caching method of the low-code platform is implemented, different caching strategies are adopted for users with different roles, and the security and the data protection are ensured.
As one example, there are two roles in an application built on a low-code platform: ordinary user and administrator. The role authority of the ordinary user is lower than that of the administrator.
The user can fill in personal information on a form page and save it. The form contains the following fields: name (text field), age (number field) and email (text field).
For the above case, the preset caching rules are as follows:
Administrator: only the name (text field) and age (number field) of the current user are cached.
Ordinary user: the name (text field), age (number field) and email (text field) are cached.
When the user logs in or accesses the form page, the identity of the user can be verified through the key information, and the role authority of the user can be obtained. According to the role authority of the user, the caching rules and the logic for judging the data types are determined, so that the safety and the data privacy protection can be improved, and the access authority and the confidentiality of the data are ensured.
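The role-dependent caching rules above might be sketched as follows. This is only an illustrative sketch: the `Role` union, the field names, and the `selectCacheableData` helper are assumptions for the example and are not taken from the patent's actual implementation.

```typescript
// Hypothetical sketch of role-based cache rules for the form example above.
type Role = "admin" | "user";

// Fields each role is allowed to cache (per the example: the administrator
// caches only name and age; the ordinary user also caches email).
const CACHEABLE_FIELDS: Record<Role, string[]> = {
  admin: ["name", "age"],
  user: ["name", "age", "email"],
};

// Filter an edited form record down to the fields this role may cache.
function selectCacheableData(
  role: Role,
  formData: Record<string, unknown>
): Record<string, unknown> {
  const allowed = new Set(CACHEABLE_FIELDS[role]);
  return Object.fromEntries(
    Object.entries(formData).filter(([field]) => allowed.has(field))
  );
}
```

After verifying the user's identity and obtaining the role, the front end would call `selectCacheableData(role, editedFields)` before writing into the cache container, so sensitive fields never reach the administrator's cache.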
Referring to fig. 4, fig. 4 is a schematic flow chart of a data expiration policy according to an embodiment of the present application.
In some embodiments, the method further comprises:
acquiring a data expiration policy, and deleting the expiration data in the cache region through the data expiration policy.
The data expiration policy may include:
Step 401: respectively acquiring the preset clearing period corresponding to each piece of cache data; the preset clearing period is, for example, 10 minutes, 1 hour, or 20 hours;
step 402: and deleting the cache data from the cache region according to the corresponding preset clearing period for each cache data.
Therefore, a data expiration policy is introduced into the front-end caching method to delete expired data from the cache region, so as to ensure timely updating and validity of the cache data. The preset clearing period corresponding to each piece of cache data is obtained; this period can be regarded as the length of time the cache data is kept in the cache region, after which the data is considered expired. For each piece of cache data, whether the expiration time has been reached is judged according to its preset clearing period; if the data has expired, it is deleted from the cache region. Deleting stale data frees cache space and keeps the cached data fresh. The next time the data needs to be accessed, the latest data is fetched from the back-end server again and re-cached, ensuring its timeliness and accuracy.
First, by deleting expired data, the data in the cache region is kept as up to date as possible; cleaning out expired data ensures the accuracy of the cached data and avoids using stale values. Second, regular cleaning of expired data frees cache space and prevents the cache region from occupying excessive storage resources, which is particularly important when cache capacity is limited and data volume is large, and improves cache utilization. Third, deleting expired data promptly reduces invalid data in the cache region and improves cache access efficiency: with only the latest, valid data retained, data can be read and displayed faster. Fourth, the preset clearing period flexibly controls how long different data is stored; according to business requirements and data update frequency, different clearing periods can be set for different cache data so that storage time matches each datum's validity period.
In summary, through the data expiration policy and the cleaning of the expiration data, the accuracy and the freshness of the cached data can be maintained, the cache space is saved, and the cache access efficiency is improved on the premise of reducing the pressure of the back-end server, which is helpful to improve the performance and the resource utilization rate of the low-code platform and provide better user experience and reliability.
As an example, the cache region contains the following data items:
Cached data item 1, with a preset clearing period of 10 minutes.
Cached data item 2, with a preset clearing period of 1 hour.
Cached data item 3, with a preset clearing period of 20 hours.
For cached data item 1, the difference between its creation time or last update time and the current time is checked. If it exceeds 10 minutes, indicating that the data item has expired, the item is deleted from the cache region.
For cached data item 2, the same check is performed; if the difference exceeds 1 hour, the data item has expired and is deleted from the cache region.
For cached data item 3, the same check is performed; if the difference exceeds 20 hours, the data item has expired and is deleted from the cache region.
By checking and deleting each cached data item according to its preset clearing period, expired data in the cache region can be cleared regularly, ensuring the validity and timeliness of the cache data. This helps reduce the storage space occupied by the cache region and improves performance and data consistency.
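The per-item expiry check above can be sketched as follows. The entry shape (`CacheEntry`) and function name (`purgeExpired`) are illustrative assumptions; the patent does not prescribe concrete data structures.

```typescript
// Minimal sketch of the data expiration policy: each cache entry carries its
// own preset clearing period, and entries older than their period are removed.
interface CacheEntry {
  value: unknown;
  updatedAt: number;     // creation or last-update time, in ms since epoch
  clearPeriodMs: number; // preset clearing period (e.g. 10 min, 1 h, 20 h)
}

// Delete every entry whose age exceeds its own preset clearing period,
// and return the keys that were removed.
function purgeExpired(cache: Map<string, CacheEntry>, now: number): string[] {
  const removed: string[] = [];
  for (const [key, entry] of cache) {
    if (now - entry.updatedAt > entry.clearPeriodMs) {
      cache.delete(key); // safe: Map iteration tolerates deleting the current entry
      removed.push(key);
    }
  }
  return removed;
}
```

A periodic timer (or a check on each cache access) would call `purgeExpired` with the current timestamp, giving item 1 a 10-minute lifetime, item 2 an hour, and item 3 twenty hours.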
In some embodiments, the deleting of the cache data from the cache region according to the corresponding preset clearing period (i.e. step 402) may include:
and deleting the cache data which is not checked in the preset clearing period from the cache region.
Therefore, the data expiration policy is also used to delete cache data that has not been viewed within its preset clearing period, which maintains the freshness and validity of the cache region. That is, for each piece of cache data, whether it has been viewed during its preset clearing period is determined; if not, it is deleted from the cache region. Deleting expired data that is never viewed frees cache space, ensures that only data actually in use is retained in the cache region, and improves cache efficiency and performance. Cleaning out unviewed expired data thus releases storage space, reduces the occupation of storage resources, saves cost, and increases available storage capacity.
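The "not viewed within the clearing period" variant can be sketched as below. The `periodStart` and `lastViewedAt` fields are illustrative assumptions about how view tracking might be recorded.

```typescript
// Sketch: an entry is evicted when its clearing period has elapsed without
// the entry having been viewed during that period.
interface TrackedEntry {
  value: unknown;
  periodStart: number;         // when the current clearing period began
  lastViewedAt: number | null; // last view time, or null if never viewed
  clearPeriodMs: number;       // preset clearing period
}

// Remove entries whose clearing period has elapsed with no view inside it.
function purgeUnviewed(cache: Map<string, TrackedEntry>, now: number): void {
  for (const [key, e] of cache) {
    const periodElapsed = now - e.periodStart >= e.clearPeriodMs;
    const viewedInPeriod =
      e.lastViewedAt !== null && e.lastViewedAt >= e.periodStart;
    if (periodElapsed && !viewedInPeriod) cache.delete(key);
  }
}
```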
In a specific application scenario, the embodiment of the application further provides a front-end caching method of the low-code platform, which comprises the following steps:
obtaining a viewing request of a form page, wherein the viewing request comprises a viewing identifier and key information used as a viewing basis, and the key information is used for carrying out identity verification on encrypted editing data;
matching the viewing identifier with the cache data in the cache region to obtain a matching result;
when the viewing identifier is successfully matched with a stored value of the cache data, the successfully matched stored value is used as the display value;
when the viewing identifier is not successfully matched with the cache data in the cache region, searching the data source of the back-end server of the form page according to the viewing identifier to obtain a stored value matched with the viewing identifier as the display value, and displaying the display value using the form page;
acquiring a data expiration policy, and deleting the expiration data in the cache region through the data expiration policy.
The method for obtaining the cache data comprises the following steps:
monitoring the editing operation of the form page to obtain editing data;
updating the stored value in the cache region by using the editing data according to a preset caching rule to obtain the cache data;
and when the viewing identifier is not successfully matched with the cache data in the cache region, updating the cache data in the cache region by using the stored value acquired from the data source of the back-end server.
The cache region comprises a plurality of cache containers, each cache container being used for storing and managing different cache data, and the preset caching rule comprises:
judging the data type of the editing data, wherein the data type comprises service-level data and session-level data;
when the data type of the editing data is service-level data, obtaining the corresponding relation between the editing data and a cache container;
storing the editing data into a cache container corresponding to the editing data according to the corresponding relation;
encrypting the editing data by using an encryption algorithm when the data type of the editing data is session-level data;
and storing the encrypted editing data into a cache container corresponding to the editing data according to the corresponding relation between the editing data and the cache container.
The data expiration policy includes:
respectively acquiring a preset clearing period corresponding to each cache data;
and, for each piece of cache data, deleting from the cache region the cache data that has not been viewed within its preset clearing period.
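The preset caching rule above (route service-level data and session-level data into separate cache containers, encrypting the session-level data first) might look like the following sketch. The container names are illustrative, and the toy XOR-with-hex transform merely stands in for the encryption algorithm, which the scheme does not specify; a real system would use a proper cipher such as AES.

```typescript
// Sketch of the preset caching rule: one container per data category,
// with session-level data encrypted before storage.
type EditDataType = "service" | "session";

const containers = {
  serviceData: new Map<string, string>(), // service-level field data
  sessionData: new Map<string, string>(), // session-level (sensitive) data
};

// Toy placeholder for the unspecified encryption algorithm (NOT real crypto).
function encrypt(plain: string): string {
  return Array.from(plain, (ch) =>
    (ch.charCodeAt(0) ^ 0x2a).toString(16).padStart(2, "0")
  ).join("");
}

// Route edited form data into the container matching its data type.
function storeEditData(key: string, value: string, type: EditDataType): void {
  if (type === "service") {
    containers.serviceData.set(key, value);          // stored as-is
  } else {
    containers.sessionData.set(key, encrypt(value)); // encrypted first
  }
}
```

Keeping separate containers makes it possible to apply different expiry, sharing, and security policies per category without the containers interfering with each other.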
The technical scheme is based on a low-code platform and, through browser caching, solves the problems of components repeatedly calling the same interface and of data sharing among different pages and components. Meanwhile, during secondary development, a display value can be obtained from the cache according to the stored value. On the one hand, caching data avoids repeated network requests and reduces the number of communications with the back-end server, thereby significantly improving page loading speed and response performance.
On the other hand, caching data reduces the request volume to the back-end server and lowers its load pressure. Caching also enables quick display and echo of data, providing a smoother user experience: the user does not need to wait for data to load or refresh and can immediately view and edit it, improving working efficiency and satisfaction. In addition, caching avoids frequent network requests and reduces the network traffic consumed by data transmission, which is particularly important for mobile devices and for users in low-bandwidth environments, and saves users' traffic costs. Finally, a caching mechanism allows the cache data to be flexibly managed and controlled, updated and cleaned according to business requirements.
In general, the beneficial effects of the technical scheme are that the method is beneficial to constructing a high-efficiency, stable and reliable low-code platform in terms of improving performance, improving user experience, reducing resource consumption, improving system expandability and the like.
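The overall view-request handling of this scheme (match the viewing identifier against the cache; on a miss, fall back to the back-end data source and refill the cache) can be sketched as follows. `fetchFromBackend` is a stand-in for the real back-end query, and all names here are illustrative assumptions.

```typescript
// Sketch of the view-request flow: cache hit returns the stored value
// directly; cache miss queries the back end and refills the cache.
const displayCache = new Map<string, string>();

// Stand-in for looking up the stored value in the back-end data source.
function fetchFromBackend(viewId: string): string {
  return `backend-value-for-${viewId}`;
}

function getDisplayValue(viewId: string): string {
  const hit = displayCache.get(viewId);
  if (hit !== undefined) return hit;      // cache hit: use the stored value
  const value = fetchFromBackend(viewId); // cache miss: query the back end
  displayCache.set(viewId, value);        // refill cache for later views
  return value;
}
```

On the second view of the same identifier, `getDisplayValue` returns from the cache without any network round trip, which is the source of the loading-speed and server-load benefits described above.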
Device embodiment
The embodiment of the application provides an electronic device, and the specific embodiment of the electronic device is consistent with the embodiment described in the method embodiment and the achieved technical effect, and part of the content is not repeated.
The electronic device comprises a memory storing a computer program and at least one processor configured to implement the following steps when executing the computer program:
acquiring a viewing request of a form page, wherein the viewing request comprises a viewing identifier used as a viewing basis;
matching the viewing identifier with the cache data in the cache region to obtain a matching result;
based on the matching result, obtaining a display value corresponding to the viewing request, and displaying the display value by using the form page;
the method for obtaining the cache data comprises the following steps:
monitoring the editing operation of the form page to obtain editing data;
And updating the storage value in the cache region by utilizing the editing data according to a preset cache rule to obtain the cache data.
In some embodiments, the at least one processor, when executing the computer program, obtains a display value corresponding to the view request based on the matching result in the following manner:
when the viewing identifier is successfully matched with a stored value of the cache data, the successfully matched stored value is used as the display value;
and when the viewing identifier is not successfully matched with the cache data in the cache region, searching the data source of the back-end server of the form page according to the viewing identifier to acquire a stored value matched with the viewing identifier as the display value.
In some embodiments, when the viewing identifier is not successfully matched with the cache data in the cache region, the at least one processor, when executing the computer program, further obtains the cache data by:
and updating the cache data in the cache area by using the storage value acquired from the data source of the back-end server.
In some embodiments, the viewing request further includes key information for authenticating the encrypted editing data; the cache region comprises a plurality of cache containers, each cache container being used for storing and managing different cache data, and the preset caching rule comprises:
judging the data type of the editing data, wherein the data type comprises service-level data and session-level data;
when the data type of the editing data is service-level data, obtaining the corresponding relation between the editing data and a cache container;
storing the editing data into a cache container corresponding to the editing data according to the corresponding relation;
encrypting the editing data by using an encryption algorithm when the data type of the editing data is session-level data;
and storing the encrypted editing data into a cache container corresponding to the editing data according to the corresponding relation between the editing data and the cache container.
In some embodiments, before the determining the data type of the edit data, the preset caching rule further includes: user authentication is carried out through the key information so as to obtain role authority of the user;
the judging the data type of the editing data comprises the following steps:
and taking the data corresponding to the role authority as editing data and judging the editing data to determine the data type of the editing data.
In some embodiments, the at least one processor, when executing the computer program, further performs the steps of:
Acquiring a data expiration policy, and deleting the expiration data in the cache region through the data expiration policy;
wherein the data expiration policy comprises:
respectively acquiring a preset clearing period corresponding to each cache data;
and deleting the cache data from the cache region according to the corresponding preset clearing period for each cache data.
In some embodiments, when executing the computer program, the at least one processor further deletes the cache data from the cache region according to its corresponding preset clearing period in the following manner:
and deleting the cache data which is not checked in the preset clearing period from the cache region.
Referring to fig. 5, fig. 5 is a block diagram of an electronic device according to an embodiment of the present application.
The electronic device 10 may for example comprise at least one memory 11, at least one processor 12 and a bus 13 connecting the different platform systems.
Memory 11 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 111 and/or cache memory 112, and may further include Read Only Memory (ROM) 113.
The memory 11 also stores a computer program executable by the processor 12 to cause the processor 12 to implement the steps of any of the methods described above.
Memory 11 may also include utility 114 having at least one program module 115, such program modules 115 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Accordingly, the processor 12 may execute the computer programs described above, as well as may execute the utility 114.
The processor 12 may employ one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components.
Bus 13 may be one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor bus, or a local bus using any of a variety of bus architectures.
The electronic device 10 may also communicate with one or more external devices such as a keyboard, pointing device, bluetooth device, etc., as well as one or more devices capable of interacting with the electronic device 10 and/or with any device (e.g., router, modem, etc.) that enables the electronic device 10 to communicate with one or more other computing devices. Such communication may be via the input-output interface 14. Also, the electronic device 10 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through a network adapter 15. The network adapter 15 may communicate with other modules of the electronic device 10 via the bus 13. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the electronic device 10 in actual applications, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage platforms, and the like.
Storage medium embodiment
The embodiment of the application provides a computer readable storage medium, and the specific embodiment of the computer readable storage medium is consistent with the embodiment and the achieved technical effect recorded in the method embodiment, and part of the contents are not repeated.
The computer readable storage medium stores a computer program which, when executed by at least one processor, implements the steps of any of the methods described above.
The computer readable medium may be a computer readable signal medium or a computer readable storage medium. In embodiments of the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium, other than a computer readable storage medium, that can transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object oriented programming languages such as Java or C++ and conventional procedural programming languages such as the C programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
Program product embodiment
The embodiment of the application provides a computer program product, the specific embodiment of which is consistent with the embodiment described in the method embodiment and the achieved technical effect, and part of the contents are not repeated.
The computer program product comprises a computer program which, when executed by at least one processor, implements the steps of any of the methods described above.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a computer program product according to an embodiment of the present application.
The computer program product being adapted to carry out the steps of any of the methods described above. The computer program product may employ a portable compact disc read only memory (CD-ROM) and comprise program code and may run on a terminal device, such as a personal computer. However, the computer program product of the present application is not limited thereto, and the computer program product may employ any combination of one or more computer readable media.
The present application has been described above in terms of its purpose, performance, advancement and novelty, in accordance with the requirements for functional enhancement and use emphasized by the patent statutes. The description and drawings, however, are only preferred embodiments of the present application and are not limiting; all equivalent structures, apparatus and features, and modifications thereof, falling within the scope of the present application shall fall within its scope of protection.

Claims (8)

1. A front-end caching method for a low-code platform, the method comprising:
acquiring a viewing request of a form page, wherein the viewing request comprises a viewing identifier used as a viewing basis;
matching the viewing identifier with the cache data in the cache region to obtain a matching result;
based on the matching result, obtaining a display value corresponding to the viewing request, and displaying the display value by using the form page;
the method for obtaining the cache data comprises the following steps:
monitoring the editing operation of the form page to obtain editing data;
updating the storage value in the cache region by utilizing the editing data according to a preset cache rule to obtain the cache data;
the check request also comprises key information, wherein the key information is used for carrying out identity verification on the encrypted editing data; the buffer area comprises a plurality of buffer containers, each buffer container is used for storing and managing different buffer data, and the preset buffer rule comprises:
judging the data type of the editing data, wherein the data type comprises service-level data and session-level data;
when the data type of the editing data is service-level data, obtaining the corresponding relation between the editing data and a cache container;
storing the editing data into a cache container corresponding to the editing data according to the corresponding relation;
encrypting the editing data by using an encryption algorithm when the data type of the editing data is session-level data;
storing the encrypted editing data into a cache container corresponding to the editing data according to the corresponding relation between the editing data and the cache container;
meanwhile, the data types comprise simple data types and complex data types; the simple data types comprise short text, long text, numerical values, dates and automatic numbering; the complex data types comprise pictures, attachments, personnel, departments and arrays; data of the simple data types and data of the complex data types are respectively stored in different cache containers, which are respectively used for storing and managing field data and shared data in pages, so that the caches of multiple pages remain relatively independent.
2. The front-end caching method according to claim 1, wherein the obtaining, based on the matching result, a display value corresponding to the view request includes:
when the viewing identifier is successfully matched with a stored value of the cache data, the successfully matched stored value is used as the display value;
and when the viewing identifier is not successfully matched with the cache data in the cache region, searching the data source of the back-end server of the form page according to the viewing identifier to acquire a stored value matched with the viewing identifier as the display value.
3. The front-end caching method according to claim 2, wherein when the matching between the viewing identifier and the cached data in the cache area is unsuccessful, the method for obtaining the cached data further comprises:
and updating the cache data in the cache area by using the storage value acquired from the data source of the back-end server.
4. The front-end caching method according to claim 1, wherein, before the determining the data type of the edit data, the preset caching rule further includes: user authentication is carried out through the key information so as to obtain role authority of the user;
the judging the data type of the editing data comprises the following steps:
and taking the data corresponding to the role authority as editing data and judging the editing data to determine the data type of the editing data.
5. The front-end caching method of claim 1, further comprising:
acquiring a data expiration policy, and deleting the expiration data in the cache region through the data expiration policy;
wherein the data expiration policy comprises:
respectively acquiring a preset clearing period corresponding to each cache data;
and deleting the cache data from the cache region according to the corresponding preset clearing period for each cache data.
6. The front-end caching method according to claim 5, wherein the deleting the cache data from the cache area according to the corresponding preset clearing period comprises:
and deleting the cache data which is not checked in the preset clearing period from the cache region.
7. An electronic device comprising a memory and at least one processor, the memory storing a computer program, the at least one processor being configured to implement the following steps when executing the computer program:
acquiring a viewing request of a form page, wherein the viewing request comprises a viewing identifier used as a viewing basis;
matching the viewing identifier with the cache data in the cache region to obtain a matching result;
based on the matching result, obtaining a display value corresponding to the viewing request, and displaying the display value by using the form page;
the method for obtaining the cache data comprises the following steps:
monitoring the editing operation of the form page to obtain editing data;
updating the storage value in the cache region by utilizing the editing data according to a preset cache rule to obtain the cache data;
the viewing request further comprises key information, wherein the key information is used for carrying out identity verification on the encrypted editing data; the cache region comprises a plurality of cache containers, each cache container being used for storing and managing different cache data, and the preset caching rule comprises:
judging the data type of the editing data, wherein the data type comprises service-level data and session-level data;
when the data type of the editing data is service-level data, obtaining the corresponding relation between the editing data and a cache container;
storing the editing data into a cache container corresponding to the editing data according to the corresponding relation;
Encrypting the editing data by using an encryption algorithm when the data type of the editing data is session-level data;
storing the encrypted editing data into a cache container corresponding to the editing data according to the corresponding relation between the editing data and the cache container;
meanwhile, the data types comprise simple data types and complex data types; the simple data types comprise short text, long text, numerical values, dates and automatic numbering; the complex data types comprise pictures, attachments, personnel, departments and arrays; data of the simple data types and data of the complex data types are respectively stored in different cache containers, which are respectively used for storing and managing field data and shared data in pages, so that the caches of multiple pages remain relatively independent.
8. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by at least one processor, implements the steps of the method of any one of claims 1 to 6 or implements the functionality of the electronic device of claim 7.
CN202311136627.5A 2023-09-05 2023-09-05 Front-end caching method of low-code platform and related equipment Active CN116860862B8 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311136627.5A CN116860862B8 (en) 2023-09-05 2023-09-05 Front-end caching method of low-code platform and related equipment


Publications (3)

Publication Number Publication Date
CN116860862A CN116860862A (en) 2023-10-10
CN116860862B true CN116860862B (en) 2023-12-08
CN116860862B8 CN116860862B8 (en) 2023-12-26

Family

ID=88222065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311136627.5A Active CN116860862B8 (en) 2023-09-05 2023-09-05 Front-end caching method of low-code platform and related equipment

Country Status (1)

Country Link
CN (1) CN116860862B8 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102012907A (en) * 2010-11-10 2011-04-13 上海光芒科技有限公司 Method and system for cache at browser client side
CN106294365A (en) * 2015-05-15 2017-01-04 阿里巴巴集团控股有限公司 The page data processing method of a kind of single page web application and equipment
CN112417343A (en) * 2020-12-29 2021-02-26 中科院计算技术研究所大数据研究院 Method for caching data based on the front-end Angular framework
CN114547106A (en) * 2022-02-22 2022-05-27 网银在线(北京)科技有限公司 Data query method and device, storage medium and computer system
CN114979234A (en) * 2022-04-22 2022-08-30 中国工商银行股份有限公司 Session control sharing method and system in distributed cluster system
CN116186022A (en) * 2021-11-26 2023-05-30 北京神州泰岳软件股份有限公司 Form processing method, form processing device, distributed form system and computer storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9112537B2 (en) * 2011-12-22 2015-08-18 Intel Corporation Content-aware caches for reliability
US11556608B2 (en) * 2021-03-22 2023-01-17 Salesforce.Com, Inc. Caching for single page web applications

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102012907A (en) * 2010-11-10 2011-04-13 上海光芒科技有限公司 Method and system for cache at browser client side
CN106294365A (en) * 2015-05-15 2017-01-04 阿里巴巴集团控股有限公司 Page data processing method and device for a single-page web application
CN112417343A (en) * 2020-12-29 2021-02-26 中科院计算技术研究所大数据研究院 Method for caching data based on the front-end Angular framework
CN116186022A (en) * 2021-11-26 2023-05-30 北京神州泰岳软件股份有限公司 Form processing method, form processing device, distributed form system and computer storage medium
CN114547106A (en) * 2022-02-22 2022-05-27 网银在线(北京)科技有限公司 Data query method and device, storage medium and computer system
CN114979234A (en) * 2022-04-22 2022-08-30 中国工商银行股份有限公司 Session control sharing method and system in distributed cluster system

Also Published As

Publication number Publication date
CN116860862B8 (en) 2023-12-26
CN116860862A (en) 2023-10-10

Similar Documents

Publication Publication Date Title
US8572023B2 (en) Data services framework workflow processing
US10909064B2 (en) Application architecture supporting multiple services and caching
US9628493B2 (en) Computer implemented methods and apparatus for managing permission sets and validating user assignments
EP2143051B1 (en) In-memory caching of shared customizable multi-tenant data
US9740435B2 (en) Methods for managing content stored in cloud-based storages
US8555018B1 (en) Techniques for storing data
US9201610B2 (en) Cloud-based storage deprovisioning
US9854052B2 (en) Business object attachments and expiring URLs
US9020973B2 (en) User interface model driven data access control
WO2008154032A1 (en) Secure hosted databases
CN109587233A (en) Cloudy Container Management method, equipment and computer readable storage medium
US10009399B2 (en) Asset streaming and delivery
US11663322B2 (en) Distributed security introspection
US10268721B2 (en) Protected handling of database queries
US20110213816A1 (en) System, method and computer program product for using a database to access content stored outside of the database
EP3049940B1 (en) Data caching policy in multiple tenant enterprise resource planning system
US11063922B2 (en) Virtual content repository
US20210081550A1 (en) Serving Data Assets Based on Security Policies by Applying Space-Time Optimized Inline Data Transformations
US20220245560A1 (en) On-site appointment assistant
US11902852B2 (en) On-site appointment assistant
US10649964B2 (en) Incorporating external data into a database schema
CN116860862B (en) Front-end caching method of low-code platform and related equipment
US11537499B2 (en) Self executing and self disposing signal
US20210152650A1 (en) Extraction of data from secure data sources to a multi-tenant cloud system
US8453166B2 (en) Data services framework visibility component

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CI03 Correction of invention patent

Correction item: Inventor

Correct: silver dollar|Chang Jiaqi|Qi Yu|Jiang Nan

False: Bai Yang|Chang Jiaqi|Qi Yu|Jiang Nan

Number: 49-02

Page: The title page

Volume: 39


OR01 Other related matters