CN109977036B - Method and device for caching process template, storage medium and electronic equipment - Google Patents

Method and device for caching process template, storage medium and electronic equipment

Info

Publication number
CN109977036B
Authority
CN
China
Prior art keywords
sub
target
template
cache
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910122658.2A
Other languages
Chinese (zh)
Other versions
CN109977036A (en)
Inventor
赵振国
董洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp
Priority to CN201910122658.2A
Publication of CN109977036A
Application granted
Publication of CN109977036B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0804Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0875Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/103Workflow collaboration or project management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The disclosure relates to a method, an apparatus, a storage medium, and an electronic device for caching a process template. The method can acquire a target process template to be cached; decompose the target flow template into a plurality of sub-objects; determine a plurality of first idle cache areas from a memory and cache each sub-object in the first idle cache areas in sequence, the cache space of each first idle cache area being smaller than the size of the target process template; establish a template index of the target process template, the template index including the association relation between the target process template and the plurality of sub-objects; and determine a second idle cache area from the memory and cache the template index into the second idle cache area.

Description

Method and device for caching process template, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of process template caching, and in particular, to a method and an apparatus for caching a process template, a storage medium, and an electronic device.
Background
As is known, a cache provides fast, high-performance access to data. In one application scenario, when a system needs to read data related to a flow template, the flow template is first searched for in a cache region in order to improve data access efficiency. If the flow template is found, it is read immediately; if it is not found, it can be looked up in memory and then cached, so that later reads can fetch the flow template from the cache region instead of calling it from memory, thereby improving the access efficiency of the flow template.
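For illustration only, this cache-first lookup pattern can be sketched as follows; ReadThroughCache, cacheRegion, and loadFromStorage are hypothetical names introduced here and are not identifiers from this disclosure:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of the cache-first lookup described above. On a miss, the flow
// template is loaded from backing storage and cached for subsequent reads.
public class ReadThroughCache {
    private final Map<String, Object> cacheRegion = new HashMap<>();

    Object readTemplate(String templateId, Function<String, Object> loadFromStorage) {
        Object template = cacheRegion.get(templateId);   // search the cache region first
        if (template == null) {                          // not found: fall back to storage
            template = loadFromStorage.apply(templateId);
            cacheRegion.put(templateId, template);       // cache so later reads hit
        }
        return template;                                 // found: read immediately
    }
}
```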
Disclosure of Invention
The disclosure provides a method and a device for caching a flow template, a storage medium and an electronic device.
In a first aspect, a method for caching a flow template is provided, where the method includes: acquiring a target process template to be cached; decomposing the target flow template into a plurality of sub-objects; determining a plurality of first idle cache areas from a memory, and caching each sub-object in the first idle cache areas in sequence, where the cache space of each first idle cache area is smaller than the size of the target process template; establishing a template index of the target process template, where the template index includes the association relation between the target process template and the plurality of sub-objects; and determining a second idle cache area from the memory, and caching the template index into the second idle cache area.
Optionally, the decomposing the target process template into a plurality of sub-objects includes: acquiring identification information of each constituent element in the target process template; determining a flag bit of each of the constituent elements according to the identification information; and dividing the constituent elements with the same flag bit into the same sub-object.
Optionally, the sequentially caching each of the child objects in the plurality of first free cache regions includes: circularly executing the step of caching the sub-objects until all the sub-objects are cached; the step of caching the child objects comprises: determining a first sub-object in the plurality of sub-objects, the first sub-object comprising any one of the plurality of sub-objects; traversing the remaining idle areas in the first idle cache areas until a first target area is determined, wherein the cache space of the first target area is larger than or equal to the size of the first sub-object, and caching the first sub-object to the first target area; and determining a second sub-object in the remaining uncached sub-objects, wherein the second sub-object comprises any one of the remaining uncached sub-objects, and the second sub-object is used as the updated first sub-object.
Optionally, the method further comprises: acquiring a process operation instruction of the target process template; and reading the sub-objects cached in the first idle cache region according to the flow operation instruction.
Optionally, the reading, according to the flow operation instruction, the child object cached in the first free cache region includes: establishing a cache queue corresponding to the flow operation instruction; determining a circulation process when the target flow template is operated according to the flow operation instruction; the circulation process comprises one or more sub-processes; determining a target sub-object corresponding to each sub-process in a plurality of sub-objects; and caching the target sub-object in the cache queue according to the circulation process.
Optionally, the caching the target sub-object in the cache queue according to the circulation process includes: acquiring an execution sequence of each sub-process in the circulation process, and sequentially caching, into the cache queue according to the execution sequence, the target sub-objects respectively corresponding to each sub-process and cached in the first idle cache region; and after the sub-process executes the return process, removing the target sub-object corresponding to the sub-process from the cache queue.
In a second aspect, an apparatus for caching a flow template is provided, the apparatus comprising: a first acquisition module, configured to acquire a target process template to be cached; a flow template decomposition module, configured to decompose the target flow template into a plurality of sub-objects; a first cache module, configured to determine a plurality of first idle cache areas from a memory and sequentially cache each sub-object in the first idle cache areas, where the cache space of each first idle cache area is smaller than the size of the target process template; a template index establishing module, configured to establish a template index of the target process template, where the template index includes the association relation between the target process template and the plurality of sub-objects; and a second cache module, configured to determine a second idle cache area from the memory and cache the template index into the second idle cache area.
Optionally, the process template decomposition module is configured to obtain identification information of each constituent element in the target process template; determine a flag bit of each of the constituent elements according to the identification information; and divide the constituent elements with the same flag bit into the same sub-object.
Optionally, the first caching module is configured to perform the step of caching the sub-objects in a loop until all of the plurality of sub-objects are cached; the step of caching the child objects comprises: determining a first sub-object in the plurality of sub-objects, the first sub-object comprising any one of the plurality of sub-objects; traversing the remaining idle areas in the first idle cache areas until a first target area is determined, wherein the cache space of the first target area is larger than or equal to the size of the first sub-object, and caching the first sub-object to the first target area; and determining a second sub-object in the remaining uncached sub-objects, wherein the second sub-object comprises any one of the remaining uncached sub-objects, and the second sub-object is used as the updated first sub-object.
Optionally, the apparatus further comprises: the second acquisition module is used for acquiring a process operation instruction of the target process template; and the data access module is used for reading the sub-objects cached in the first idle cache region according to the flow operation instruction.
Optionally, the data access module is configured to establish a cache queue corresponding to the flow operation instruction; determining a circulation process when the target flow template is operated according to the flow operation instruction; the circulation process comprises one or more sub-processes; determining a target sub-object corresponding to each sub-process in a plurality of sub-objects, wherein the target sub-object comprises one or more sub-objects; and caching the target sub-object in the cache queue according to the circulation process.
Optionally, the data access module is configured to obtain an execution sequence of each sub-process in the circulation process, and sequentially cache the target sub-objects corresponding to each sub-process cached in the first idle cache area in the cache queue according to the execution sequence; and after the sub-process executes the return process, removing the target sub-object corresponding to the sub-process from the cache queue.
In a third aspect, a computer-readable storage medium is provided, on which a computer program is stored, where the program, when executed by a processor, implements the steps of the method according to the first aspect of the disclosure.
In a fourth aspect, an electronic device is provided that includes a memory having a computer program stored thereon; a processor for executing the computer program in the memory to implement the steps of the method of the first aspect of the disclosure.
By the above technical scheme, a target process template to be cached can be obtained; the target flow template is decomposed into a plurality of sub-objects; a plurality of first idle cache areas, each with a cache space smaller than the size of the target process template, are determined from a memory, and each sub-object is cached in the first idle cache areas in sequence; a template index of the target process template, which includes the association relation between the target process template and the plurality of sub-objects, is established; and a second idle cache area is determined from the memory, and the template index is cached into it. In this way, a plurality of first idle cache areas with relatively small space in the memory can be utilized, and the sub-objects of the target process template are cached in them in sequence, which avoids the process template being cacheable only in a memory region whose cache space is greater than or equal to the size of the target process template, thereby improving memory utilization and saving system resources.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow diagram illustrating a first method of caching flow templates in accordance with an illustrative embodiment;
FIG. 2 is a process diagram illustrating a first type of cache flow template in accordance with an illustrative embodiment;
FIG. 3 is a process diagram illustrating a second type of cache flow template in accordance with an illustrative embodiment;
FIG. 4 is a flow diagram illustrating a second method of caching flow templates in accordance with an illustrative embodiment;
FIG. 5 is a block diagram illustrating a first apparatus for caching flow templates in accordance with an illustrative embodiment;
FIG. 6 is a block diagram illustrating a second apparatus for caching flow templates in accordance with an illustrative embodiment;
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
First, the application scenario of the present disclosure is introduced. When a user wants to perform a flow operation, the user may complete the corresponding operation by logging in to a service system and using a flow template. After the service system obtains the user's flow-triggering operation, it generates a data access request for a target flow template and first searches the cache for the relevant data of the target flow template according to that request. If the data is found, it is read immediately; if not, the target flow template can be obtained from a template database, and a continuous memory area whose cache space is greater than or equal to the size of the target flow template is then applied for in order to cache the whole template. However, in the memory management of an operating system, free memory is not one continuous area but a set of discrete, fragmented areas whose individual spaces are relatively small. When a large continuous memory area is applied for to cache the whole flow template, the fragmented areas with smaller space cannot be used, which reduces memory utilization.
To solve this problem, the present disclosure provides a method, an apparatus, a storage medium, and an electronic device for caching a process template. A target process template to be cached is first obtained and decomposed into a plurality of sub-objects. A plurality of first free cache regions, each with a cache space smaller than the size of the target process template, are then determined from the memory, and the sub-objects are cached in these first free cache regions in sequence. A template index representing the association relation between the target process template and the sub-objects is then established and cached. In this way, the plurality of first free cache regions with relatively small space in the memory can be utilized, and each sub-object of the target process template is cached in them in sequence, so that the process template is no longer cacheable only in a memory area whose cache space is greater than or equal to the size of the target process template; memory utilization is thus improved, and system resources are saved.
Specific embodiments of the present disclosure will be described below with reference to the accompanying drawings.
FIG. 1 is a flowchart illustrating a method of caching flow templates according to an example embodiment. As shown in FIG. 1, the method includes the following steps:
in step 101, a target process template to be cached is obtained.
In a possible implementation manner, the target flow template may include a flow template corresponding to a flow-triggering operation of a user. For example, after the business system obtains a business application submitted by the user, it first obtains the business approval flow template corresponding to the application (at this time, the business approval flow template is the target flow template), and then transmits the business application to a target node (such as an approver) according to the connection relations between the nodes in the business approval flow template. This is merely an example, and the disclosure is not limited thereto.
In step 102, the target flow template is decomposed into a plurality of sub-objects.
The sub-objects may include objects such as manual nodes (e.g., approvers), automatic nodes (e.g., applications), branch nodes (e.g., routing conditions), and connection lines in the process template.
Considering that in the memory management of an operating system the free memory is not continuous but consists of discrete, scattered blocks, the target flow template can be decomposed into a plurality of sub-objects, so that each sub-object of the target flow template can be cached in a separate discrete free area, thereby increasing memory utilization.
In this step, the identification information of each constituent element in the target process template may be obtained; a flag bit of each constituent element is determined according to the identification information; and the constituent elements with the same flag bit are divided into the same sub-object.
In general, a process template includes the nodes of a business process and the connecting lines between those nodes; the constituent elements are the nodes and connecting lines that make up the flow template. In an actual flow template, identification information that uniquely identifies each constituent element may be preset for it, and this identification information may include a flag bit indicating the type of the element. For example, the nodes in a flow template usually fall into multiple node types, such as manual nodes, automatic nodes, branch nodes, and concurrent nodes; in one possible implementation, nodes with the same flag bit belong to the same node type. Therefore, in this step, the constituent elements with the same flag bit may be divided into the same sub-object, so that the entire target process template is decomposed into a plurality of sub-objects.
By way of example, assume that the constituent elements of the target flow template include 10 flow nodes and the connecting lines between them, where the identification information of nodes 1 to 10 is A001, B001, B002, A002, C001, A003, A004, C002, B003, and B004, respectively, the identification information of the connecting lines is L001 to L009, and the first character of each piece of identification information is the flag bit of the corresponding constituent element. When the target flow template is decomposed in this example, the constituent elements with the same flag bit are divided into the same sub-object. Specifically, the flag bit of every connecting line is L, so the connecting lines of the target flow template can be divided into sub-object 1; the flag bits of nodes 1, 4, 6, and 7 are all A, so these nodes can be divided into sub-object 2; the flag bits of nodes 2, 3, 9, and 10 are all B, so these nodes can be divided into sub-object 3; and the flag bits of nodes 5 and 8 are both C, so these nodes can be divided into sub-object 4. At this point, the target flow template has been decomposed into four sub-objects: sub-object 1, sub-object 2, sub-object 3, and sub-object 4.
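As an illustration of the flag-bit grouping above, a minimal sketch follows, assuming a hypothetical ConstituentElement record whose identification string begins with the flag character; none of these names is taken from this disclosure:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TemplateDecomposer {
    // Hypothetical constituent element: a node or connecting line whose
    // identification string begins with its flag bit (e.g. "A001", "L003").
    record ConstituentElement(String id) {
        char flagBit() { return id.charAt(0); }
    }

    // Divide constituent elements with the same flag bit into the same sub-object.
    static Map<Character, List<ConstituentElement>> decompose(List<ConstituentElement> elements) {
        Map<Character, List<ConstituentElement>> subObjects = new LinkedHashMap<>();
        for (ConstituentElement e : elements) {
            subObjects.computeIfAbsent(e.flagBit(), k -> new ArrayList<>()).add(e);
        }
        return subObjects; // e.g. {L=[L001..L009], A=[A001..A004], B=[...], C=[...]}
    }
}
```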
In step 103, a plurality of first free cache regions are determined from the memory, and each of the sub-objects is cached in the plurality of first free cache regions in sequence; the cache space of each first free cache region is smaller than the size of the target flow template.
For example, fig. 2 is a schematic diagram illustrating caching a flow template according to an exemplary embodiment; the storage areas numbered 2, 5, 7, 9, and 11 in fig. 2 are the first free cache regions.
In this step, the step of caching the sub-objects may be performed in a loop until all of the sub-objects are cached; the step of caching the child object includes: determining a first sub-object in the plurality of sub-objects, wherein the first sub-object comprises any one of the plurality of sub-objects; traversing the remaining idle areas in the first idle cache areas until a first target area is determined, wherein the cache space of the first target area is larger than or equal to the size of the first sub-object, and caching the first sub-object to the first target area; and determining a second sub-object in the remaining uncached sub-objects, wherein the second sub-object comprises any one of the remaining uncached sub-objects, and taking the second sub-object as the updated first sub-object.
For example, assume that after step 102 is executed, the target flow template is decomposed into four sub-objects: sub-object 1 (127K), sub-object 2 (35K), sub-object 3 (65K), and sub-object 4 (15K). As shown in fig. 2, five first free cache regions, numbered 2, 5, 7, 9, and 11, can be obtained in the memory, with cache spaces of 20K, 40K, 30K, 80K, and 150K, respectively (the No. 13 free cache region in fig. 2, with a cache space of 800K, is discussed below). Assume that when the step of caching the sub-objects is executed for the first time, the first sub-object is sub-object 2; at this time, the remaining free areas are the five first free cache regions determined from the memory. In one possible caching manner, the five first free cache regions are traversed from front to back. The No. 2 free cache region is traversed first; since its cache space of 20K is smaller than the 35K size of sub-object 2, it is determined that the No. 2 free cache region cannot cache sub-object 2. When the No. 5 free cache region is traversed, since its cache space of 40K is larger than the 35K size of sub-object 2, it is determined that the No. 5 free cache region can cache sub-object 2; that is, the No. 5 free cache region is the first target region corresponding to the current first sub-object (sub-object 2), and sub-object 2 is cached to the No. 5 free cache region. It should be noted that after sub-object 2 is cached, the remaining cache space of the No. 5 free cache region becomes 5K (40K - 35K); that is, the remaining cache space of the No. 5 free cache region among the remaining free areas is updated to 5K, while the cache spaces of the other first free cache regions remain unchanged. At this time, the remaining uncached sub-objects are sub-object 1, sub-object 3, and sub-object 4, and any one of the three is determined to be the second sub-object. If, for example, the second sub-object is sub-object 3, the first sub-object is updated from sub-object 2 to sub-object 3, and sub-object 3 is cached in the same manner as sub-object 2 (i.e., the step of caching the sub-objects is re-executed). Similarly, the No. 2 free cache region is traversed first; since its cache space of 20K is smaller than the 65K size of sub-object 3, it cannot cache sub-object 3. The No. 5 free cache region, whose cache space is now 5K, also cannot cache sub-object 3, and after continuing the traversal, neither can the No. 7 free cache region. When the No. 9 free cache region is traversed, its cache space of 80K is larger than the 65K size of sub-object 3, so the No. 9 free cache region can cache sub-object 3; that is, the No. 9 free cache region is the first target region corresponding to the current first sub-object (sub-object 3), and sub-object 3 is cached to the No. 9 free cache region. Likewise, after sub-object 3 is cached, the remaining cache space of the No. 9 free cache region becomes 15K (80K - 65K). At this time, the remaining uncached sub-objects are sub-object 1 and sub-object 4, which may then be cached in the same manner as sub-object 2 and sub-object 3, so that the four sub-objects of the target process template are each cached in a corresponding first free cache region. The above is merely an example and does not limit the present disclosure.
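The first-fit traversal in this example can be sketched as follows, under the assumption of hypothetical FreeRegion and cacheAll names, with sizes in KB as in the example above:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FirstFitCache {
    // Hypothetical free cache region with a remaining capacity in KB.
    static class FreeRegion {
        final int number;
        int freeSpace;
        final List<String> cached = new ArrayList<>();
        FreeRegion(int number, int freeSpace) { this.number = number; this.freeSpace = freeSpace; }
    }

    // For each sub-object, traverse the remaining free regions until the first
    // one with enough space (the first target region) is found, cache the
    // sub-object there, and shrink that region's remaining free space.
    static void cacheAll(Map<String, Integer> subObjectSizes, List<FreeRegion> regions) {
        for (Map.Entry<String, Integer> sub : subObjectSizes.entrySet()) {
            FreeRegion target = regions.stream()
                    .filter(r -> r.freeSpace >= sub.getValue())
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("no region fits " + sub.getKey()));
            target.cached.add(sub.getKey());
            target.freeSpace -= sub.getValue(); // e.g. region 5: 40K - 35K = 5K left
        }
    }

    public static void main(String[] args) {
        List<FreeRegion> regions = List.of(new FreeRegion(2, 20), new FreeRegion(5, 40),
                new FreeRegion(7, 30), new FreeRegion(9, 80), new FreeRegion(11, 150));
        Map<String, Integer> sizes = new LinkedHashMap<>();
        sizes.put("subObject2", 35); sizes.put("subObject3", 65);
        sizes.put("subObject1", 127); sizes.put("subObject4", 15);
        cacheAll(sizes, regions); // subObject2 -> region 5, subObject3 -> region 9, ...
    }
}
```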
It should be further noted that, in this step, when the cache space of a first free cache region is greater than or equal to the size of the target process template, the whole target process template may be cached directly in that first free cache region without decomposition; this is the same as the implementation in the prior art and is not described here again.
In step 104, a template index of the target process template is established; the template index includes an association relationship between the target process template and the plurality of child objects.
After the target flow template is decomposed, the connection relations of the constituent elements in the target flow template are scattered. Therefore, this step establishes a template index of the target flow template to record the association relation between the target flow template and the plurality of sub-objects. In this way, when the target flow template is read, the connection relations between the sub-objects in the target flow template can be determined according to the template index, which makes it convenient to call the sub-objects during execution of a flow operation instruction.
In a possible implementation manner, the template index may be established according to the identification information of each constituent element. Specifically, a tree index of the target flow template may be constructed from the identification information, recording the association relations between the nodes of the target flow template and the connecting lines between them, so that the sub-object to be accessed can be found quickly through the tree index during execution of a flow operation instruction.
In addition, the template index may further include a cache address of each sub-object in the memory, so that data of the related sub-object may be quickly accessed according to the cache address, and the efficiency of data access is improved.
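A minimal sketch of such a template index follows, under the assumption that it records the connection relations between elements together with each sub-object's cache address; all class and field names here are illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a template index: it records which sub-objects a decomposed
// template consists of, the connection relations among its elements, and
// each sub-object's cache address, so the template can be reassembled and
// the related data located quickly.
public class TemplateIndex {
    final String templateId;
    // sub-object id -> address of the first free cache region holding it
    final Map<String, Long> cacheAddresses = new HashMap<>();
    // element id -> ids of the elements it connects to (the tree structure)
    final Map<String, List<String>> connections = new HashMap<>();

    TemplateIndex(String templateId) { this.templateId = templateId; }

    void recordSubObject(String subObjectId, long address) {
        cacheAddresses.put(subObjectId, address);
    }

    void recordConnection(String fromElement, String toElement) {
        connections.computeIfAbsent(fromElement, k -> new ArrayList<>()).add(toElement);
    }
}
```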
In step 105, a second free cache region is determined from the memory, and the template index is cached in the second free cache region.
The second free cache area may be a continuous storage area obtained in the memory after all the sub-objects are cached, such as the No. 13 free cache area in fig. 2.
In a possible implementation manner, when the template index is cached in the second free cache region, a continuous storage region may be applied in the memory according to the size of the template index to serve as the second free cache region, and then the template index is cached in the second free cache region.
In an actual application scenario, executing a flow operation instruction generally passes through a plurality of logic-processing and analysis sub-processes; that is, the circulation process for executing the flow operation instruction includes one or more sub-processes, and each sub-process accesses one or more sub-objects of the target flow template. For example, after a user fills in a business trip application form, submitting the form in the system initiates one flow operation instruction, and after obtaining the instruction the system may pass through the following sub-processes in order. Sub-process 1: creating a flow instance according to the flow operation instruction. Sub-process 2: creating a start node. Sub-process 3: performing flow branch analysis. Sub-process 4: creating a manual node according to the analysis result and determining an approver. When sub-process 1 is executed, two sub-objects, the flow instance node and the connecting line, are accessed; when sub-process 2 is executed, the start node sub-object is accessed; when sub-process 3 is executed, the branch node sub-object is accessed; and when sub-process 4 is executed, the manual node sub-object is accessed. After sub-process 4 has been executed, the system can return a "submission successful" prompt for the business trip application; that is, the flow operation instruction has been executed, corresponding to one complete instruction cycle. Normally, the cached sub-objects involved are all released only after every sub-process related to the flow operation instruction has been executed, which increases the dynamic cache occupied by the thread and affects data access efficiency to some extent. Therefore, in the present disclosure, in order to further improve data access efficiency and keep cache occupation within the instruction cycle to a minimum, a first-in-last-out cache within the instruction cycle can be established, after the target process template has been cached, whenever a specific flow operation instruction is obtained. Specifically, a flow operation instruction for the target flow template may be obtained, and the sub-objects cached in the first free cache regions may then be read according to the flow operation instruction.
In a possible implementation manner, the sub-objects cached in the first free cache regions may be read according to the flow operation instruction as follows. First, a cache queue corresponding to the flow operation instruction is established. Then, the circulation process followed when the target flow template is run is determined according to the flow operation instruction; the circulation process includes one or more sub-processes. Next, the target sub-object corresponding to each sub-process is determined among the plurality of sub-objects, where a target sub-object includes one or more sub-objects. Finally, the target sub-objects are cached in the cache queue according to the circulation process. Specifically, the execution order of the sub-processes in the circulation process is obtained, and the target sub-objects corresponding to the sub-processes, cached in the first free cache regions, are cached into the cache queue in that order; after a sub-process executes its return, the target sub-object corresponding to that sub-process is removed from the cache queue. In this way, the target sub-object cached last into the cache queue is removed first, and the target sub-object cached first is removed last.
For example, when a user finishes filling in a business trip application form and submits it in the system, the system obtains one flow operation instruction. At this time, to improve the execution speed of the instruction, caching within the instruction cycle may be performed. Fig. 3 is a schematic diagram of caching within an instruction cycle according to an exemplary embodiment. As shown in fig. 3, a cache queue corresponding to the flow operation instruction is first established. Assume that executing the flow operation instruction passes through four sub-processes in order: sub-process A, sub-process B, sub-process C, and sub-process D. When the instruction is obtained, sub-process A is executed first; it needs to access sub-object 1 (the target sub-object corresponding to sub-process A), so sub-object 1 is cached from the first free cache region into the logical A area of the cache queue. While sub-process A runs, it calls sub-process B, which needs to access sub-object 2 (the target sub-object corresponding to sub-process B), so sub-object 2 is cached from the first free cache region into the logical B area of the cache queue. Sub-process B in turn calls sub-process C, which needs to access sub-object 3 (the target sub-object corresponding to sub-process C), so sub-object 3 is cached into the logical C area; and sub-process C calls sub-process D, which needs to access sub-object 4 (the target sub-object corresponding to sub-process D), so sub-object 4 is cached into the logical D area. After the data of sub-object 4 has been read in the logical D area, sub-process D executes its return (for example, in fig. 3 sub-process D returns a result to sub-process C). At this time, to reduce the dynamic cache occupied by the thread, sub-object 4, cached in the logical D area at the top of the cache queue, may be removed; following the same steps, the sub-objects cached in the logical C, logical B, and logical A areas are removed in turn. Thus sub-object 4, cached last into the cache queue, is removed first, and sub-object 1, cached first, is removed last, which keeps the cache occupation within the instruction cycle to a minimum and improves the access speed of the cached objects.
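A compact sketch of this first-in-last-out instruction-cycle cache, modeled with a stack-like deque; the class name, the enter/exit methods, and the A-to-D call chain in main are illustrative assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a first-in-last-out cache queue built per flow operation
// instruction: each sub-process pushes its target sub-object when it starts
// and pops it when it returns, so the last-cached sub-object is removed first.
public class InstructionCycleCache {
    private final Deque<String> cacheQueue = new ArrayDeque<>();

    // Called when a sub-process starts: load its target sub-object from the
    // first free cache region into this instruction-cycle queue.
    void enterSubProcess(String targetSubObject) {
        cacheQueue.push(targetSubObject);
        // ... the sub-process reads the sub-object's data here ...
    }

    // Called when the sub-process executes its return: remove its sub-object
    // so the dynamic cache held by the thread stays minimal.
    void exitSubProcess() {
        cacheQueue.pop();
    }

    public static void main(String[] args) {
        InstructionCycleCache c = new InstructionCycleCache();
        c.enterSubProcess("subObject1"); // sub-process A
        c.enterSubProcess("subObject2"); // sub-process B, called by A
        c.enterSubProcess("subObject3"); // sub-process C, called by B
        c.enterSubProcess("subObject4"); // sub-process D, called by C
        c.exitSubProcess(); // D returns to C: subObject4 removed first
        c.exitSubProcess(); // C returns to B
        c.exitSubProcess(); // B returns to A
        c.exitSubProcess(); // A returns: subObject1 removed last
    }
}
```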
By adopting the above method, a plurality of first free cache regions with relatively small space in the memory can be utilized, and each sub-object of the target process template is cached in these first free cache regions in sequence, so that the process template no longer has to be cached in a memory area whose cache space is greater than or equal to the size of the target process template; memory utilization is thereby improved, and system resources are saved.
FIG. 4 is a flowchart illustrating a method of caching flow templates according to an exemplary embodiment. As shown in FIG. 4, the method includes the following steps:
in step 401, a target process template to be cached is obtained.
In a possible implementation manner, the target flow template may include a flow template corresponding to a flow-triggering operation of a user. For example, after the business system obtains a business application submitted by the user, it first obtains the business approval flow template corresponding to the application (at this time, the business approval flow template is the target flow template), and then transmits the business application to a target node (such as an approver) according to the connection relations between the nodes in the business approval flow template. This is merely an example, and the disclosure is not limited thereto.
In addition, considering that in the memory management of an operating system the free memory is not continuous but consists of discrete, scattered blocks, the target flow template may be decomposed into a plurality of sub-objects, so that each sub-object of the target flow template can be cached in a separate discrete free area, thereby increasing memory utilization.
In step 402, identification information of each component element in the target flow template is obtained.
The constituent elements may include the nodes and the connecting lines that constitute the target flow template.
In step 403, the flag bit of each of the constituent elements is determined according to the identification information, and the constituent elements with the same flag bit are divided into the same sub-object.
The sub-objects may include objects such as manual nodes (e.g., approvers), automatic nodes (e.g., applications), branch nodes (e.g., routing conditions), and connection lines in the process template.
In step 404, a plurality of first free cache regions are determined from the memory, and the step of caching the sub-objects is executed in a loop until all the sub-objects are cached.
A first free cache region may be a blank area of the memory in which no data is stored, and the step of caching the sub-objects may include the following steps:
in step 4041, a first sub-object is determined from the plurality of sub-objects, the first sub-object comprising any one of the plurality of sub-objects.
In step 4042, the remaining free areas in the first free cache regions are traversed until a first target area is determined, the cache space of the first target area being greater than or equal to the size of the first sub-object, and the first sub-object is cached in the first target area.
In step 4043, a second sub-object is determined from the remaining uncached sub-objects, where the second sub-object includes any one of the remaining uncached sub-objects, and the second sub-object is used as the updated first sub-object.
The specific implementation manner of this step may refer to the related description in step 103 in the first embodiment, and is not described herein again.
In step 405, a template index of the target process template is established; the template index includes an association relationship between the target process template and the plurality of child objects.
After the target flow template is decomposed, the connection relations of the constituent elements in the target flow template are scattered. Therefore, this step establishes a template index of the target flow template to record the association relation between the target flow template and the plurality of sub-objects. In this way, when the target flow template is read, the connection relations between the sub-objects in the target flow template can be determined according to the template index, which makes it convenient to call the sub-objects during execution of a flow operation instruction.
The template index may include a tree index of the target flow template, and in addition, the template index may further include a cache address of each sub-object in the memory, so that data of the related sub-object may be quickly accessed according to the cache address, and the efficiency of data access is improved.
The specific implementation manner of this step may refer to the related description in step 104 in the first embodiment, and is not described herein again.
In step 406, a second free cache region is determined from the memory, and the template index is cached in the second free cache region.
The second free cache area may be a continuous storage area obtained in the memory after all the sub-objects are cached, such as the No. 13 free cache area in fig. 2.
The specific implementation of this step may refer to the related description in step 105 in the first embodiment, and is not described herein again.
It should be noted that, in this disclosure, in order to further improve the efficiency of data access and ensure minimal cache occupation within the instruction cycle, a first-in-last-out cache within the instruction cycle may be established after the target flow template has been cached, whenever a specific flow operation instruction is obtained. In this embodiment, caching within the instruction cycle may be performed by executing steps 407 to 412, thereby completing the access to the cached data.
In step 407, a flow operation instruction for the target flow template is obtained.
For example, after the user completes a business trip application form, submitting it in the system initiates one flow operation instruction; when the system returns a "submission successful" prompt for the submitted application, the flow operation instruction has been completed, corresponding to one complete instruction cycle.
In step 408, a cache queue corresponding to the flow operation instruction is established.
In step 409, the circulation process followed when the target flow template is run is determined according to the flow operation instruction; the circulation process includes one or more sub-processes.
In step 410, a target sub-object corresponding to each sub-process is determined among the plurality of sub-objects.
Wherein the target sub-object may comprise one or more of the sub-objects.
In step 411, the execution sequence of each sub-process in the circulation process is obtained, and the target sub-objects corresponding to each sub-process cached in the first idle cache region are sequentially cached in the cache queue according to the execution sequence.
In step 412, after the sub-process completes the return process, the target sub-object corresponding to the sub-process is removed from the cache queue.
In addition, the specific implementation manner of step 407 to step 412 may refer to the related description in step 105 in the first embodiment, and is not described herein again.
By adopting the above method, a plurality of first free cache regions with relatively small space in the memory can be utilized, and each sub-object of the target process template is cached in these first free cache regions in sequence, so that the process template no longer has to be cached in a memory area whose cache space is greater than or equal to the size of the target process template; memory utilization is thereby improved, and system resources are saved.
Fig. 5 is a block diagram illustrating an apparatus for caching flow templates according to an example embodiment. As shown in fig. 5, the apparatus includes:
a first obtaining module 501, configured to obtain a target process template to be cached;
a process template decomposition module 502 for decomposing the target process template into a plurality of sub-objects;
a first caching module 503, configured to determine multiple first idle cache regions from a memory, and sequentially cache each child object in the multiple first idle cache regions; the cache space of each first idle cache region is smaller than the size of the target process template;
a template index establishing module 504, configured to establish a template index of the target process template, where the template index comprises the association relation between the target process template and the plurality of sub-objects;
the second cache module 505 is configured to determine a second free cache area from the memory, and cache the template index in the second free cache area.
Optionally, the process template decomposition module 502 is configured to obtain identification information of each constituent element in the target process template; determine a flag bit of each constituent element according to the identification information; and divide the constituent elements with the same flag bit into the same sub-object.
Optionally, the first caching module 503 is configured to perform the step of caching the sub-objects in a loop until all of the sub-objects are cached;
the step of caching the child object includes: determining a first sub-object in the plurality of sub-objects, wherein the first sub-object comprises any one of the plurality of sub-objects; traversing the remaining idle areas in the first idle cache areas until a first target area is determined, wherein the cache space of the first target area is larger than or equal to the size of the first sub-object, and caching the first sub-object to the first target area; and determining a second sub-object in the remaining uncached sub-objects, wherein the second sub-object comprises any one of the remaining uncached sub-objects, and taking the second sub-object as the updated first sub-object.
Fig. 6 is a block diagram of an apparatus for caching flow templates based on the embodiment shown in fig. 5. As shown in fig. 6, the apparatus further includes:
a second obtaining module 506, configured to obtain a flow operation instruction for the target flow template;
the data access module 507 is configured to read the child object cached in the first free cache region according to the flow operation instruction.
Optionally, the data access module 507 is configured to establish a cache queue corresponding to the flow operation instruction; determining a circulation process when the target flow template is operated according to the flow operation instruction; the circulation process includes one or more sub-processes; determining a target sub-object corresponding to each sub-process in a plurality of sub-objects; and caching the target sub-object in the cache queue according to the circulation process.
Optionally, the data access module 507 is configured to obtain an execution order of each sub-process in the circulation process, and sequentially cache the target sub-objects corresponding to each sub-process cached in the first idle cache region in the cache queue according to the execution order; and after the sub-process finishes the return process, removing the target sub-object corresponding to the sub-process from the cache queue.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
By adopting the above apparatus, a plurality of first free cache regions with relatively small space in the memory can be utilized, and each sub-object of the target process template is cached in these first free cache regions in sequence, so that the process template no longer has to be cached in a memory area whose cache space is greater than or equal to the size of the target process template; memory utilization is thereby improved, and system resources are saved.
Fig. 7 is a block diagram illustrating an electronic device 700 in accordance with an example embodiment. As shown in fig. 7, the electronic device 700 may include: a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 so as to complete all or part of the steps of the above-described method for caching a flow template. The memory 702 is used to store various types of data to support operation of the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data, for example contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia components 703 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons, which may be virtual or physical. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or a combination of one or more of them, which is not limited herein. Accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module, and the like.
In an exemplary embodiment, the electronic Device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described cache flow template method.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the above-described cache flow template method is also provided. For example, the computer readable storage medium may be the memory 702 described above that includes program instructions executable by the processor 701 of the electronic device 700 to perform the cache flow template method described above.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner; in order to avoid unnecessary repetition, the possible combinations are not described separately in the present disclosure.
In addition, the various embodiments of the present disclosure may be combined in any manner, and such combinations should likewise be regarded as content disclosed herein, as long as they do not depart from the spirit of the present disclosure.

Claims (10)

1. A method for caching a flow template, the method comprising:
acquiring a target process template to be cached;
decomposing the target flow template into a plurality of sub-objects;
determining a plurality of first idle cache areas from a memory, and caching each sub-object in the first idle cache areas in sequence; the cache space of the first idle cache area is smaller than the size of the target process template;
establishing a template index of the target process template; the template index comprises the association relation between the target process template and a plurality of the sub-objects;
determining a second idle cache area from the memory, and caching the template index into the second idle cache area;
the decomposing the target flow template into a plurality of sub-objects comprises:
acquiring identification information of each component element in the target process template;
determining a flag bit of each of the constituent elements according to the identification information;
dividing the component elements with the same flag bit into the same sub-object;
the method further comprises the following steps:
acquiring a process operation instruction of the target process template;
and reading the sub-objects cached in the first idle cache region according to the flow operation instruction.
2. The method according to claim 1, wherein said sequentially caching each of said sub-objects in a plurality of said first idle cache areas comprises:
circularly executing the step of caching the sub-objects until all the sub-objects are cached;
the step of caching the child objects comprises:
determining a first sub-object in the plurality of sub-objects, the first sub-object comprising any one of the plurality of sub-objects;
traversing the remaining idle areas in the first idle cache areas until a first target area is determined, wherein the cache space of the first target area is larger than or equal to the size of the first sub-object, and caching the first sub-object to the first target area;
and determining a second sub-object in the remaining uncached sub-objects, wherein the second sub-object comprises any one of the remaining uncached sub-objects, and the second sub-object is used as the updated first sub-object.
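Claim 2 amounts to a first-fit placement loop over the remaining free areas. The sketch below is one hedged reading of it, under assumed data shapes: each free area is modeled as a dict carrying an `id` and a remaining `free` size, and neither name comes from the patent.

```python
def first_fit_cache(sub_objects, free_areas):
    """Place each sub-object into the first remaining free area whose
    cache space is greater than or equal to the sub-object's size."""
    placements = {}
    for sub_id, size in sub_objects:          # e.g. [("sub0", 4), ("sub1", 2)]
        for area in free_areas:               # traverse the remaining free areas
            if area["free"] >= size:          # the first target area found
                area["free"] -= size
                placements[sub_id] = area["id"]
                break
        else:                                 # loop ended without a break
            raise MemoryError(f"no free cache area can hold {sub_id}")
    return placements
```

Taking the next uncached sub-object as the "updated first sub-object" is what the outer `for` loop expresses: each iteration restarts the traversal for one more sub-object until all are placed.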
3. The method according to claim 1, wherein the reading the sub-objects cached in the first free cache areas according to the flow operation instruction comprises:
establishing a cache queue corresponding to the flow operation instruction;
determining, according to the flow operation instruction, a circulation process for running the target flow template, the circulation process comprising one or more sub-processes;
determining, among the plurality of sub-objects, a target sub-object corresponding to each sub-process;
and caching the target sub-objects in the cache queue according to the circulation process.
4. The method according to claim 3, wherein the caching the target sub-objects in the cache queue according to the circulation process comprises:
acquiring an execution order of the sub-processes in the circulation process, and sequentially caching, into the cache queue according to the execution order, the target sub-objects that correspond to the respective sub-processes and are cached in the first free cache areas;
and removing, after a sub-process has executed and returned, the target sub-object corresponding to that sub-process from the cache queue.
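Claims 3 and 4 together describe a cache queue that stages each sub-process's target sub-object in execution order and discards it once the sub-process returns. The following sketch is only one way such a queue could behave; `circulation`, `target_of`, and `execute` are hypothetical names, and `cache` is the sub-object store from the earlier sketch.

```python
from collections import deque

def run_circulation(circulation, target_of, cache, execute):
    """Stage target sub-objects in execution order, run each sub-process,
    and remove its entry only after the sub-process has returned."""
    queue = deque()
    for sub_process in circulation:                  # the execution order
        queue.append((sub_process, cache[target_of[sub_process]]))
    while queue:
        sub_process, target_sub_object = queue[0]    # head of the cache queue
        execute(sub_process, target_sub_object)      # run the sub-process
        queue.popleft()                              # remove only after return
```

The design point of the queue is that only the sub-objects actually needed by the current circulation sit in hot storage, and each one is released as soon as its sub-process finishes.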
5. An apparatus for caching a flow template, the apparatus comprising:
a first acquisition module, configured to acquire a target flow template to be cached;
a flow template decomposition module, configured to decompose the target flow template into a plurality of sub-objects;
a first cache module, configured to determine a plurality of first free cache areas from a memory and to cache each of the sub-objects in the first free cache areas in sequence, wherein the cache space of each first free cache area is smaller than the size of the target flow template;
a template index establishing module, configured to establish a template index of the target flow template, wherein the template index comprises an association relationship between the target flow template and the plurality of sub-objects;
and a second cache module, configured to determine a second free cache area from the memory and to cache the template index into the second free cache area;
wherein the flow template decomposition module is configured to acquire identification information of each constituent element in the target flow template, determine a flag bit of each constituent element according to the identification information, and divide constituent elements having the same flag bit into the same sub-object;
and wherein the apparatus further comprises:
a second acquisition module, configured to acquire a flow operation instruction for the target flow template;
and a data access module, configured to read the sub-objects cached in the first free cache areas according to the flow operation instruction.
6. The apparatus according to claim 5, wherein the first cache module is configured to perform a sub-object caching step in a loop until all of the plurality of sub-objects are cached;
wherein the sub-object caching step comprises: determining a first sub-object among the plurality of sub-objects, the first sub-object being any one of the plurality of sub-objects; traversing the remaining free areas among the first free cache areas until a first target area is determined, the cache space of the first target area being greater than or equal to the size of the first sub-object, and caching the first sub-object into the first target area; and determining a second sub-object among the remaining uncached sub-objects, the second sub-object being any one of the remaining uncached sub-objects, and taking the second sub-object as the updated first sub-object.
7. The apparatus according to claim 5, wherein the data access module is configured to establish a cache queue corresponding to the flow operation instruction; determine, according to the flow operation instruction, a circulation process for running the target flow template, the circulation process comprising one or more sub-processes; determine, among the plurality of sub-objects, a target sub-object corresponding to each sub-process; and cache the target sub-objects in the cache queue according to the circulation process.
8. The apparatus according to claim 7, wherein the data access module is configured to acquire an execution order of the sub-processes in the circulation process; sequentially cache, into the cache queue according to the execution order, the target sub-objects that correspond to the respective sub-processes and are cached in the first free cache areas; and remove, after a sub-process has executed and returned, the target sub-object corresponding to that sub-process from the cache queue.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 4.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910122658.2A | 2019-02-19 | 2019-02-19 | Method and device for caching process template, storage medium and electronic equipment


Publications (2)

Publication Number | Publication Date
CN109977036A (en) | 2019-07-05
CN109977036B (en) | 2021-10-29

Family

ID=67077105

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910122658.2A (Active) | Method and device for caching process template, storage medium and electronic equipment | 2019-02-19 | 2019-02-19

Country Status (1)

Country | Link
CN (1) | CN109977036B (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164774A (en) * 2013-03-11 2013-06-19 苏州市奥杰汽车技术有限公司 Automobile complete vehicle development system based on workflow
CN106296243A (en) * 2015-05-22 2017-01-04 阿里巴巴集团控股有限公司 Service implementation method and device
CN104991960B (en) * 2015-07-22 2018-10-30 北京京东尚科信息技术有限公司 Build the method and apparatus of data warehouse model
CN106325991B (en) * 2016-08-19 2020-04-03 东软集团股份有限公司 Instruction scheduling method and device of flow engine
CN107886295A (en) * 2017-10-23 2018-04-06 东软集团股份有限公司 Flow template changing process method, device, readable storage medium storing program for executing and electronic equipment
CN108563425B (en) * 2018-02-27 2019-10-01 北京邮电大学 A kind of event driven multipaths coprocessing system
CN109087054B (en) * 2018-06-01 2023-08-04 平安科技(深圳)有限公司 Collaborative office data stream processing method, device, computer equipment and storage medium
CN109101191B (en) * 2018-06-21 2021-07-16 东软集团股份有限公司 Data storage method, data storage device, storage medium and electronic equipment
CN108985709A (en) * 2018-06-26 2018-12-11 中国科学院遥感与数字地球研究所 Workflow management method towards more satellite data centers collaboration Remote Sensing Products production
CN108876309B (en) * 2018-07-04 2021-06-04 东软集团股份有限公司 Starting method and device of flow form, storage medium and electronic equipment
CN109218747B (en) * 2018-09-21 2020-05-26 北京邮电大学 Video service classification caching method based on user mobility in super-dense heterogeneous network


Similar Documents

Publication Publication Date Title
CN113067883B (en) Data transmission method, device, computer equipment and storage medium
US20130073536A1 (en) Indexing of urls with fragments
CN114416667B (en) Method and device for rapidly sharing network disk file, network disk and storage medium
CN111338797A (en) Task processing method and device, electronic equipment and computer readable storage medium
CN114116065B (en) Method and device for acquiring topological graph data object and electronic equipment
CN109299352B (en) Method and device for updating website data in search engine and search engine
CN113204345A (en) Page generation method and device, electronic equipment and storage medium
CN109299152B (en) Suffix array indexing method and device for real-time data stream
US20140068005A1 (en) Identification, caching, and distribution of revised files in a content delivery network
CN109739487B (en) Business logic processing method and device and computer readable storage medium
CN113190517B (en) Data integration method and device, electronic equipment and computer readable medium
CN113886683A (en) Label cluster construction method and system, storage medium and electronic equipment
CN111444148B (en) Data transmission method and device based on MapReduce
CN114625407A (en) Method, system, equipment and storage medium for implementing AB experiment
CN109977036B (en) Method and device for caching process template, storage medium and electronic equipment
CN109614383B (en) Data copying method and device, electronic equipment and storage medium
CN111367500A (en) Data processing method and device
CN116339716A (en) Flow chart analysis method
CN113590985B (en) Page jump configuration method and device, electronic equipment and computer readable medium
CN114265846A (en) Data operation method and device, electronic equipment and storage medium
CN114253922A (en) Resource directory management method, resource management method, device, equipment and medium
CN114637499A (en) Visualization component processing method, device, equipment and medium
CN114461595A (en) Method, device, medium and electronic equipment for sending message
CN112597119A (en) Method and device for generating processing log and storage medium
CN108629003B (en) Content loading method and device

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant