CN112783628A - Data operation optimization method and device and readable storage medium


Info

Publication number: CN112783628A
Application number: CN202110111819.5A
Authority: CN (China)
Prior art keywords: data, cache, computing node, spark, output data
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 李栋
Original Assignee / Current Assignee: Lenovo Beijing Ltd
Application filed by: Lenovo Beijing Ltd
Priority date / Filing date: 2021-01-27
Publication date: 2021-05-11

Classifications

    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system (under G06F 9/48, Program initiating or switching; G06F 9/46, Multiprogramming arrangements)
    • G06F 16/24552: Database cache management (under G06F 16/2455, Query execution; G06F 16/245, Query processing)
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU] (under G06F 9/46, Multiprogramming arrangements)

Abstract

The invention discloses a data operation optimization method and device and a readable storage medium. The method includes: acquiring a preset Spark task, where the Spark task includes a plurality of computing nodes representing computing processes, and the computing nodes are used to receive input data and generate output data; executing the preset Spark task so that each computing node is executed in turn according to a preset calling sequence; and, if the called current computing node meets a preset condition, executing the current computing node in combination with cache data in the Spark cache, where the cache data includes output data of at least one of the computing nodes. Compared with the prior art, this scheme reuses process data without an external intermediate cache file, avoids the resource contention caused by writing to external components, and improves the performance of Spark tasks when processing big data.

Description

Data operation optimization method and device and readable storage medium
Technical Field
The present invention relates to the field of big data, and in particular, to a data operation optimization method, apparatus, and readable storage medium.
Background
In practical data computing scenarios, an intermediate computing result is sometimes needed as an input parameter, which requires reusing process data.
Spark is a distributed computing engine. To reuse process data, the usual approach is to write the process data to an external intermediate cache file through a UDF and then read the data back from that file as input. This scheme has two serious problems. First, it depends on an external component, which raises the deployment requirements of the system; in particular, when the amount of data to be cached is large, the read/write performance of the external component limits the performance of the whole computing process. Second, because Spark is a distributed computing framework, writing to an external component causes resource contention, and the data distribution that Spark requires for subsequent computation cannot be guaranteed.
Disclosure of Invention
Embodiments of the present invention provide a data operation optimization method and device and a readable storage medium, which avoid the resource contention caused by writing to external components when process data is reused and improve the performance of Spark tasks when processing big data.
One aspect of the present invention provides a data operation optimization method, including: acquiring a preset Spark task, where the Spark task includes a plurality of computing nodes representing computing processes, and the computing nodes are used to receive input data and generate output data; executing the preset Spark task so that each computing node is executed in turn according to a preset calling sequence; and, if the called current computing node meets a preset condition, executing the current computing node in combination with cache data in the Spark cache, where the cache data includes output data of at least one of the computing nodes.
In an embodiment, the preset condition is met when the cache data is input data of the current computing node.
In an embodiment, executing the current computing node in combination with cache data in the Spark cache includes: acquiring first output data of a previous computing node; and taking the first output data and the cache data as input of the current computing node to generate second output data.
In an embodiment, taking the first output data and the cache data as input of the current computing node includes: selecting sub-data in the cache data according to the current computing node; and taking the first output data and the sub-data as input of the current computing node.
In an embodiment, after the second output data is generated, the method further comprises: updating the cache data according to the second output data.
In an embodiment, the method further comprises: after execution of the preset Spark task is finished, clearing the cache data in the Spark cache.
In one possible embodiment, the input data and the output data are both resilient distributed dataset (RDD) data.
Another aspect of the present invention provides a data operation optimization apparatus, including: a task obtaining module for obtaining a preset Spark task, where the Spark task includes a plurality of computing nodes representing computing processes, and the computing nodes are used to receive input data and generate output data; a task execution module for executing the preset Spark task so that each computing node is executed in turn according to a preset calling sequence; and a task combining module for executing the current computing node in combination with cache data in the Spark cache if the called current computing node meets the preset condition, where the cache data includes output data of at least one of the computing nodes.
In an implementation manner, the task combining module is specifically configured to: acquire first output data of a previous computing node; and take the first output data and the cache data as input of the current computing node to generate second output data.
In another aspect, the present invention provides a computer-readable storage medium comprising a set of computer-executable instructions which, when executed, perform any one of the data operation optimization methods described above.
In the embodiments of the present invention, when a Spark task is executed, output data that needs to be reused may be stored in the Spark cache; if a computing node needs that output data, it can be read directly from the Spark cache to support the operation of the current computing node, so that process data is reused. Compared with the prior art, this scheme reuses process data without an external intermediate cache file, avoids the resource contention caused by writing to external components, and improves the performance of Spark tasks when processing big data.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 is a schematic flowchart of an implementation of a data operation optimization method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an execution process that invokes the Spark cache in a data operation optimization method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a specific execution process that updates the Spark cache in a data operation optimization method according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a data operation optimization apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, features and advantages of the present invention more apparent and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of an implementation of a data operation optimization method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an execution process that invokes the Spark cache in a data operation optimization method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a specific execution process that updates the Spark cache in a data operation optimization method according to an embodiment of the present invention.
Referring to Fig. 1 and Fig. 2, in one aspect the present invention provides a data operation optimization method, including:
Step 101: acquiring a preset Spark task, where the Spark task includes a plurality of computing nodes representing computing processes, and the computing nodes are used to receive input data and generate output data;
Step 102: executing the preset Spark task so that each computing node is executed in turn according to a preset calling sequence;
Step 103: if the called current computing node meets a preset condition, executing the current computing node in combination with cache data in the Spark cache, where the cache data includes output data of at least one computing node.
In this embodiment, in step 101, a Spark task may be configured according to actual business requirements. One Spark task includes a plurality of computing nodes, and each computing node includes at least one computing process, i.e., a logical operation of the program, such as a base-conversion operation (for example, converting binary data into decimal data) or a batch accumulation operation (for example, accumulating values in a database).
In step 102, when the Spark task is executed, the computing nodes are called in turn according to the preset execution sequence of the Spark task. For example, first data is input to a first computing node, the first computing node is called and outputs second data; the second data is then input to a second computing node, and the second computing node is called.
In step 103, during task invocation, the computing node currently being executed is the current computing node, and the preset condition is a rule configured in the computing node as required. If the called current computing node satisfies the preset rule, the current computing node is executed in combination with cache data in the Spark cache. The Spark cache holds output data produced by one or more computing nodes of the Spark task; when caching, one or more computing nodes can be designated to store their output data in the Spark cache after their operations finish.
In this way, when the Spark task is executed, output data that needs to be reused can be stored in the Spark cache; if a computing node needs that output data, it can be read directly from the Spark cache to support the operation of the current computing node, so that process data is reused. Compared with the prior art, this scheme reuses process data without an external intermediate cache file, avoids the resource contention caused by writing to external components, and improves the performance of Spark tasks when processing big data.
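As an illustration only, the following Scala sketch shows the core idea under stated assumptions: the node names and transformations are hypothetical, and rdd1.cache() stands in for designating a node's output data for the Spark cache.

    import org.apache.spark.{SparkConf, SparkContext}

    object ProcessDataReuseSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("process-data-reuse").setMaster("local[*]"))

        // Computing node 1: its output data is marked for the Spark cache,
        // so later nodes can reuse it without an external intermediate file.
        val rdd1 = sc.parallelize(1 to 1000).map(_ * 2)
        rdd1.cache()

        // Computing node 2: reuses the cached output of node 1 as its input;
        // the first action materializes rdd1 and stores it in the Spark cache.
        println(rdd1.filter(_ % 3 == 0).count())

        // Computing node 3: reads the same data straight from the Spark cache;
        // nothing is recomputed and no external component is written to.
        println(rdd1.map(_ + 1).count())

        sc.stop()
      }
    }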
In one embodiment, the preset condition is met when:
the cache data is input data of the current computing node.
In this embodiment, the cache data is the input data of the current computing node. For example, suppose the current computing node computes the optimal value of a certain control parameter and needs previously obtained values of that parameter as a reference; the cache data then consists of the control parameter values produced by an earlier computing node, and when the current computing node is executed, the cached parameters are taken as its input.
In one embodiment, executing the current computing node in combination with cache data in the Spark cache includes:
acquiring first output data of a previous computing node;
and taking the first output data and the cache data as input of the current computing node to generate second output data.
In this embodiment, a specific process of step 103 is shown in Fig. 2. Assume that computing node 2 is the current computing node. When computing node 2 is executed, the first output data RDD1 of the previous computing node (computing node 1) is obtained together with the cache data RDD3; RDD1 and RDD3 jointly serve as input to the current computing node, computing node 2 is executed, and the second output data RDD2 is generated.
Further, when cache data is obtained from the Spark cache, if the Spark cache contains multiple cached datasets, one or more of them can be filtered out and selected as input to the current computing node. The selection can follow the requirements of the current computing node: for example, if the current computing node represents a counting process for pipeline products and corresponding count cache data is stored in the Spark cache, that count cache data is selected from among the cached datasets when the current computing node is executed.
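A minimal sketch of this step follows, under assumed data shapes: RDD1 and RDD3 are keyed the same way, and the key join that merges them is an illustrative choice, not an operation prescribed by the method.

    import org.apache.spark.rdd.RDD
    import org.apache.spark.{SparkConf, SparkContext}

    object CombineWithCacheSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("combine-with-cache").setMaster("local[*]"))

        // Cache data RDD3: e.g. control parameters produced by an earlier node.
        val rdd3: RDD[(String, Double)] =
          sc.parallelize(Seq(("p1", 0.5), ("p2", 0.9))).cache()

        // First output data RDD1 from the previous computing node (node 1).
        val rdd1: RDD[(String, Double)] =
          sc.parallelize(Seq(("p1", 10.0), ("p2", 20.0)))

        // Current computing node (node 2): RDD1 and RDD3 together form its
        // input, and the second output data RDD2 is generated.
        val rdd2: RDD[(String, Double)] =
          rdd1.join(rdd3).mapValues { case (v, factor) => v * factor }

        rdd2.collect().foreach(println)
        sc.stop()
      }
    }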
In one embodiment, taking the first output data and the cache data as input of the current computing node includes:
selecting sub-data in the cache data according to the current computing node;
and taking the first output data and the sub-data as input of the current computing node.
In this embodiment, when the Spark task is executed, the output data of multiple computing nodes may be stored in the Spark cache, possibly in a collective form. When the current computing node is executed, one or more pieces of sub-data in the cache data can be selected by a preset execution program and, together with the first output data of the previous computing node, used as input to the current computing node for the operation.
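The sketch below assumes the collectively cached output is organized as a name-to-RDD registry; the Map, its keys, and the union are illustrative assumptions, not structures defined here.

    import org.apache.spark.rdd.RDD
    import org.apache.spark.{SparkConf, SparkContext}

    object SubDataSelectionSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("sub-data-selection").setMaster("local[*]"))

        // Output data of several nodes, held collectively in the Spark cache.
        val cacheData: Map[String, RDD[Long]] = Map(
          "productCount" -> sc.parallelize(Seq(100L, 101L)).cache(),
          "errorCount"   -> sc.parallelize(Seq(2L)).cache())

        // First output data of the previous computing node.
        val firstOutput: RDD[Long] = sc.parallelize(Seq(102L, 103L))

        // The current node (a pipeline-product counting step) selects only
        // the sub-data it needs and combines it with the previous output.
        val subData: RDD[Long] = cacheData("productCount")
        val input: RDD[Long] = firstOutput.union(subData)

        println(input.count())
        sc.stop()
      }
    }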
In one embodiment, after generating the second output data, the method further comprises:
and updating the cache data according to the second output data.
In this embodiment, as shown in Fig. 3, assume that computing node 2 is the current computing node. After receiving the first output data RDD1, computing node 2 obtains the second output data RDD2 through calculation. If RDD2 is reused data, RDD2 is output to the Spark cache to update the cache data. The update may directly replace the original cache data with RDD2 (for example, replacing the original cache data RDD3 in Fig. 2 with RDD2), or add RDD2 to the Spark cache as new data. Continuing the pipeline-product counting example, if no data representing the product count exists in the original Spark cache, RDD2 is newly added to the Spark cache.
Further, if there are multiple second output datasets RDD2, one or more of them that serve as reused data may be selected according to actual requirements and stored in the Spark cache. Continuing the pipeline-product counting example, if the output includes an RDD2 used for product counting, that RDD2 is stored in the Spark cache as reused data.
Further, the second output data may also be combined with other cache data in the Spark cache to update the cache data. Taking a unit-conversion process as an example, assume that computing node 3 requires, as input, reused data representing an amount of money, while the second output data RDD2 expresses only a base quantity; after RDD2 is received, it is combined with the cached data representing the money multiplier to produce reused data that can serve as input to computing node 3.
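A sketch of the update step, reusing the assumed name-to-RDD registry from the previous sketch: because RDDs are immutable, "updating" here means unpersisting the superseded entry and caching the new one, so replacement and addition are the same operation.

    import org.apache.spark.rdd.RDD

    object CacheUpdateSketch {
      // Hypothetical registry standing in for the Spark cache contents.
      var cacheData: Map[String, RDD[Long]] = Map.empty

      // Replace the old cached entry with RDD2, or add RDD2 if none exists.
      def update(name: String, rdd2: RDD[Long]): Unit = {
        cacheData.get(name).foreach(_.unpersist()) // release the replaced data
        cacheData += name -> rdd2.cache()          // RDD2 becomes the cache data
      }
    }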
In an embodiment, the method further comprises:
and after the execution of the preset Spark task is finished, clearing the cache data in the Spark cache.
In this embodiment, after the preset Spark task finishes executing, the cache data in the Spark cache is cleared to prevent the cache data from affecting the execution of the next Spark task.
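Continuing with the assumed registry, the cleanup step can be sketched as follows; unpersist releases the cached blocks so that no leftover data survives into the next task.

    import org.apache.spark.rdd.RDD

    object CacheCleanupSketch {
      // Clear every cached RDD once the preset Spark task has finished.
      def clearSparkCache(cacheData: Map[String, RDD[Long]]): Unit =
        cacheData.values.foreach(_.unpersist(blocking = true))
    }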
In one possible embodiment, the input data and the output data are both resilient distributed dataset (RDD) data.
In this embodiment, the distributed data sets available on the Spark platform include RDD, DataFrame, and Dataset. Compared with the other data sets, an RDD is a lazily evaluated, immutable, parallel data set that supports Lambda expressions, and its API is highly usable.
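A small illustration of that lazy, immutable character (local data, hypothetical values): the transformation below is only recorded, and nothing is computed until the action runs.

    import org.apache.spark.{SparkConf, SparkContext}

    object LazyRddSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("lazy-rdd").setMaster("local[*]"))

        val base = sc.parallelize(1 to 10)
        val squared = base.map(x => x * x) // Lambda-style transformation, not yet executed

        println(squared.reduce(_ + _))     // the action triggers the computation
        sc.stop()
      }
    }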
Fig. 4 is a schematic structural diagram of a data operation optimization device according to an embodiment of the present invention.
As shown in Fig. 4, another aspect of the present invention provides a data operation optimization apparatus, including:
the task obtaining module 201 is configured to obtain a preset Spark task, where the Spark task includes a plurality of computing nodes representing computing processes, and the computing nodes are configured to receive input data and generate output data;
the task execution module 202 is configured to execute a preset Spark task, so as to sequentially execute each computing node according to a preset calling sequence;
the task combining module 203 is configured to, if the called current computing node meets a preset condition, combine the cache data in the Spark cache to execute the current computing node; wherein the cache data comprises output data of at least one compute node.
In this embodiment, in the task obtaining module 201, a Spark task may be configured according to actual business requirements. One Spark task includes a plurality of computing nodes, and each computing node includes at least one computing process, i.e., a logical operation of the program, such as a base-conversion operation (for example, converting binary data into decimal data) or a batch accumulation operation (for example, accumulating values in a database).
In the task execution module 202, when the Spark task is executed, the computing nodes are called in turn according to the preset execution sequence of the Spark task. For example, first data is input to a first computing node, the first computing node is called and outputs second data; the second data is then input to a second computing node, and the second computing node is called.
In the task combining module 203, the Spark cache resides in the cache module 204. During task invocation, the computing node being executed is the current computing node, and the preset condition is a rule configured in the computing node as required. If the called current computing node satisfies the preset rule, the current computing node is executed in combination with cache data in the Spark cache. Referring to Fig. 2 and Fig. 4, if computing node 2 is the current computing node and needs cache data from the Spark cache as input, the cache module 204 fetches the relevant cache data from the Spark cache to serve as input to the current computing node. The Spark cache holds output data produced by one or more computing nodes of the Spark task. Referring to Fig. 3 and Fig. 4, when caching, the task combining module 203 may designate one or more computing nodes to send their output data to the cache module 204 after their operations finish, and the cache module 204 stores that data in the Spark cache.
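Purely as a structural sketch, the modules can be pictured as below; the class shapes, signatures, and the Long element type are assumptions made for illustration, not an API defined by the apparatus.

    import org.apache.spark.rdd.RDD

    // One computing node: runs on its inputs; usesCache marks the preset condition.
    case class ComputeNode(name: String,
                           usesCache: Boolean,
                           run: Seq[RDD[Long]] => RDD[Long])

    // Cache module 204: holds the cache data and serves it to the combining logic.
    class CacheModule {
      private var cacheData: Map[String, RDD[Long]] = Map.empty
      def fetch(name: String): Option[RDD[Long]] = cacheData.get(name)
      def store(name: String, rdd: RDD[Long]): Unit = cacheData += name -> rdd.cache()
      def clear(): Unit = { cacheData.values.foreach(_.unpersist()); cacheData = Map.empty }
    }

    // Task execution module 202 with the combining behaviour of module 203 folded
    // in: each node receives the previous output, plus cache data when it qualifies.
    class TaskExecutionModule(cache: CacheModule) {
      def execute(nodes: Seq[ComputeNode], source: RDD[Long]): RDD[Long] =
        nodes.foldLeft(source) { (prevOutput, node) =>
          val inputs = Seq(prevOutput) ++ (if (node.usesCache) cache.fetch(node.name) else None)
          node.run(inputs)
        }
    }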
In this way, when the Spark task is executed, output data that needs to be reused can be stored in the Spark cache; if a computing node needs that output data, it can be read directly from the Spark cache to support the operation of the current computing node, so that process data is reused. Compared with the prior art, this scheme reuses process data without an external intermediate cache file, avoids the resource contention caused by writing to external components, and improves the performance of Spark tasks when processing big data.
In an embodiment, the task combining module 203 is specifically configured to:
acquire first output data of a previous computing node;
and take the first output data and the cache data as input of the current computing node to generate second output data.
In this embodiment, when the Spark task is executed, the output data of multiple computing nodes may be stored in the Spark cache, possibly in a collective form. When the current computing node is executed, one or more pieces of sub-data in the cache data can be selected by a preset execution program and, together with the first output data of the previous computing node, used as input to the current computing node for the operation.
In another aspect, the present invention provides a computer-readable storage medium including a set of computer-executable instructions which, when executed, perform any of the data operation optimization methods described above.
In an embodiment of the present invention, a computer-readable storage medium includes a set of computer-executable instructions which, when executed, acquire a preset Spark task, where the Spark task includes a plurality of computing nodes representing computing processes and the computing nodes are used to receive input data and generate output data; execute the preset Spark task so that each computing node is executed in turn according to a preset calling sequence; and, if the called current computing node meets the preset condition, execute the current computing node in combination with cache data in the Spark cache, where the cache data includes output data of at least one computing node.
In this way, when the Spark task is executed, output data that needs to be reused can be stored in the Spark cache; if a computing node needs that output data, it can be read directly from the Spark cache to support the operation of the current computing node, so that process data is reused. Compared with the prior art, this scheme reuses process data without an external intermediate cache file, avoids the resource contention caused by writing to external components, and improves the performance of Spark tasks when processing big data.
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, the various embodiments or examples described in this specification, and the features of different embodiments or examples, can be combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of data operation optimization, the method comprising:
acquiring a preset Spark task, wherein the Spark task comprises a plurality of computing nodes representing computing processes, and the computing nodes are used for receiving input data and generating output data;
executing the preset Spark task to sequentially execute each computing node according to a preset calling sequence;
if the called current computing node meets a preset condition, executing the current computing node in combination with cache data in a Spark cache; wherein the cache data comprises output data of at least one of the computing nodes.
2. The method of claim 1, wherein meeting the preset condition comprises:
the cache data is input data of the current computing node.
3. The method of claim 1, wherein executing the current computing node in combination with cache data in a Spark cache comprises:
acquiring first output data of a previous computing node;
and taking the first output data and the cache data as the input of the current computing node to generate second output data.
4. The method of claim 3, wherein taking the first output data and the cache data as input of the current computing node comprises:
selecting sub-data in the cache data according to the current computing node;
and taking the first output data and the sub-data as input of the current computing node.
5. The method of claim 3, wherein, after the second output data is generated, the method further comprises:
and updating the cache data according to the second output data.
6. The method of claim 1, further comprising:
and after the execution of the preset Spark task is finished, clearing the cache data in the Spark cache.
7. The method of claim 1, wherein the input data and the output data are both resilient distributed dataset (RDD) data.
8. An apparatus for data operation optimization, the apparatus comprising:
the task obtaining module is used for obtaining a preset Spark task, wherein the Spark task comprises a plurality of computing nodes representing computing processes, and the computing nodes are used for receiving input data and generating output data;
the task execution module is used for executing the preset Spark task so as to sequentially execute each computing node according to a preset calling sequence;
the task combining module is used for combining cache data in the Spark cache to execute the current computing node if the called current computing node meets the preset condition; wherein the cache data comprises output data of at least one of the compute nodes.
9. The apparatus of claim 8, wherein the task combining module is configured to:
acquire first output data of a previous computing node;
and take the first output data and the cache data as input of the current computing node to generate second output data.
10. A computer-readable storage medium comprising a set of computer-executable instructions that, when executed, perform the data operation optimization method of any one of claims 1-7.
CN202110111819.5A (priority date 2021-01-27, filing date 2021-01-27): Data operation optimization method and device and readable storage medium. Status: Pending. Published as CN112783628A.

Priority Applications (1)

    • CN202110111819.5A (priority date 2021-01-27, filing date 2021-01-27): Data operation optimization method and device and readable storage medium

Applications Claiming Priority (1)

    • CN202110111819.5A (priority date 2021-01-27, filing date 2021-01-27): Data operation optimization method and device and readable storage medium

Publications (1)

    • CN112783628A, published 2021-05-11

Family

Family ID: 75758313

Family Applications (1)

    • CN202110111819.5A (status: Pending): Data operation optimization method and device and readable storage medium

Country Status (1)

    • CN: CN112783628A

Patent Citations (3)

* Cited by examiner, † Cited by third party

    • CN106874215A * (priority 2017-03-17, published 2017-06-20, 重庆邮电大学): A serialized storage optimization method based on Spark operators
    • CN110163233A * (priority 2018-02-11, published 2019-08-23, 陕西爱尚物联科技有限公司): A method for enabling a machine to handle more complex work
    • CN110232087A * (priority 2019-05-30, published 2019-09-13, 湖南大学): Big data incremental iteration method and apparatus, computer equipment, and storage medium

Similar Documents

CN102129425B (en) The access method of big object set table and device in data warehouse
US7698312B2 (en) Performing recursive database operations
CN111932257B (en) Block chain parallelization processing method and device
CN111722918A (en) Service identification code generation method and device, storage medium and electronic equipment
US10664555B2 (en) Two-stage distributed estimation system
CN109086126B (en) Task scheduling processing method and device, server, client and electronic equipment
CN108415934A (en) A kind of Hive tables restorative procedure, device, equipment and computer readable storage medium
CN115016905A (en) Calling topological graph generation method and device
CN110020333A (en) Data analysing method and device, electronic equipment, storage medium
CN112712125B (en) Event stream pattern matching method and device, storage medium and processor
CN112783628A (en) Data operation optimization method and device and readable storage medium
CN111861100A (en) Work order processing method and device based on process scoring
CN109558403B (en) Data aggregation method and device, computer device and computer readable storage medium
CN106980673A (en) Main memory database table index updating method and system
CN108536447B (en) Operation and maintenance management method
CN113296788B (en) Instruction scheduling method, device, equipment and storage medium
CN115328457A (en) Method and device for realizing form page based on parameter configuration
JPWO2018061219A1 (en) Job scheduling system, job scheduling method, and job scheduling apparatus
CN113377652A (en) Test data generation method and device
CN113760237A (en) Compiling address updating method and device, terminal equipment and readable storage medium
US20180239640A1 (en) Distributed data processing system, and distributed data processing method
CN111143326A (en) Method and device for reducing database operation, computer equipment and storage medium
CN116048978B (en) Software service performance self-adaptive test method, system, terminal and medium
CN113687882B (en) Activiti-based flow rollback method, activiti-based flow rollback device and storage medium
CN113448962B (en) Database data management method and device

Legal Events

    • PB01: Publication
    • SE01: Entry into force of request for substantive examination