CN117193740A - Data distribution method, device, computing equipment and storage medium - Google Patents

Data distribution method, device, computing equipment and storage medium

Info

Publication number
CN117193740A
CN117193740A (application CN202210608443.3A)
Authority
CN
China
Prior art keywords
data
execution
node
information
allocation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210608443.3A
Other languages
Chinese (zh)
Inventor
涂印
林庆春
郑循茂
曾侃
李思作
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210608443.3A
Publication of CN117193740A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a data distribution method, apparatus, computing device, storage medium and program product. The method comprises: acquiring a data allocation requirement of at least one target device; acquiring data to be allocated according to the data allocation requirement; determining, in a visualized manner and based on the data allocation requirement, a data allocation policy for the data to be allocated; and allocating at least a portion of the data to be allocated to the at least one target device according to the data allocation policy. Determining the data allocation policy of the data to be allocated in a visualized manner comprises: determining, in a visualized manner and based on the data allocation requirement, at least one execution node for data allocation, wherein each of the at least one execution node comprises input-parameter information, execution information and output-parameter information; and determining, in a visualized manner, an execution flow of the at least one execution node, the execution flow indicating the execution order of each of the at least one execution node.

Description

Data distribution method, device, computing equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a data distribution method, apparatus, computing device, computer readable storage medium, and computer program product.
Background
In order for at least one device to cooperate to meet the requirements of an application scenario, the data required by a device in that scenario is typically allocated to it according to a data allocation policy. When the application scenario changes, the data required by the devices and the policy for allocating that data will typically change as well, and developers have to redesign the data allocation policy to meet the latest data allocation requirements.
Currently, developers design data allocation policies mainly by writing program code for a specific application scenario. This makes the resulting data allocation policy poorly portable: even if the application scenario does not change much, developers have to rewrite the program code from scratch. This not only increases the workload of developers, but also raises the time cost of data allocation and severely reduces its efficiency. How to achieve fast and accurate data allocation is therefore a problem of wide concern.
Disclosure of Invention
In view of the above, the present application provides a data distribution method and apparatus, computing device, computer readable storage medium and computer program product, which desirably mitigate or overcome some or all of the above-mentioned disadvantages and other possible disadvantages.
According to an aspect of the present application, there is provided a data allocation method, the method comprising: acquiring a data allocation requirement of at least one target device; acquiring data to be allocated according to the data allocation requirement; determining a data allocation policy for the data to be allocated in a visualized manner; and allocating at least a portion of the data to be allocated to the at least one target device according to the data allocation policy, wherein determining the data allocation policy of the data to be allocated in a visualized manner comprises: determining, based on the data allocation requirement and in a visualized manner, at least one execution node for data allocation, each of the at least one execution node comprising input-parameter information, execution information and output-parameter information; and determining an execution flow of the at least one execution node in a visualized manner, the execution flow indicating the execution order of each of the at least one execution node.
In the data allocation method according to some embodiments of the present application, the input-parameter information of each execution node includes the type of data to be allocated corresponding to that execution node, the execution information of each execution node includes the data allocation target corresponding to that execution node, and the output-parameter information of each execution node includes the execution result of that execution node.
In the data allocation method according to some embodiments of the present application, the execution flow of the at least one execution node includes at least one of the following: a sequential execution flow indicating that at least a portion of the at least one execution node is executed sequentially; a conditional execution flow indicating that at least a portion of the at least one execution node is executed conditionally; and a loop execution flow indicating that at least a portion of the at least one execution node is executed in a loop.
In the data allocation method according to some embodiments of the present application, determining at least one execution node for data allocation in a visualized manner based on the data allocation requirement comprises: presenting, based on the data allocation requirement, an execution node editing interface comprising an input-parameter input control, an execution-information input control and an output-parameter input control for editing at least one execution node to be generated; acquiring the input-parameter information, the execution information and the output-parameter information of the at least one execution node to be generated from the input-parameter input control, the execution-information input control and the output-parameter input control; and generating the at least one execution node according to the input-parameter information, the execution information and the output-parameter information of the at least one execution node to be generated.
In the data allocation method according to some embodiments of the present application, determining the execution flow of the at least one execution node in a visualized manner includes: presenting an execution flow editing interface comprising an execution node input control and an execution flow input control; acquiring, from the execution node input control, a plurality of to-be-processed execution nodes selected from the at least one execution node; and acquiring the execution flow of the plurality of to-be-processed execution nodes from the execution flow input control.
In the data allocation method according to some embodiments of the present application, allocating at least a portion of the data to be allocated to the at least one target device according to the data allocation policy comprises: executing the at least one execution node according to the execution flow of the at least one execution node, so as to allocate at least a portion of the data to be allocated to the at least one target device.
In the data allocation method according to some embodiments of the present application, executing the at least one execution node according to the execution flow of the at least one execution node to allocate at least a portion of the data to be allocated to the at least one target device comprises: an initial node acquisition step: acquiring an initial execution node from the at least one execution node as the current execution node; a candidate allocation data acquisition step: acquiring candidate allocation data from the data to be allocated according to a constraint condition related to at least one of the input parameters of the current execution node and the execution context; a candidate allocation data allocation step: allocating the candidate allocation data to the target device corresponding to the current execution node among the at least one target device, based on the execution information of the current execution node; a feedback acquisition step: acquiring allocation feedback about the candidate allocation data from the target device corresponding to the current execution node, comparing the allocation feedback with the output parameters of the current execution node, and determining whether the candidate allocation data has been allocated correctly; a context updating step: in response to the candidate allocation data being allocated correctly, updating the execution context based on the output parameters of the current execution node, and determining, according to the updated context and the execution flow, whether any execution node remains unexecuted; a current execution node updating step: in response to an unexecuted execution node existing, updating the current execution node according to the execution context and returning to the candidate allocation data acquisition step; and a loop ending step: ending the execution flow in response to no unexecuted execution node existing.
In the data allocation method according to some embodiments of the present application, the context updating step includes: in response to the feedback indicating that the candidate allocation data has been allocated correctly, adding the current execution node to a historical execution tree, the historical execution tree indicating the historical execution order of the at least one execution node; determining the current execution progress based on the historical execution tree and the execution flow; and updating the context according to the current execution progress.
The data allocation method according to some embodiments of the present application further comprises: a rollback step: in response to the candidate allocation data being allocated incorrectly, determining a rollback node and a rollback order based on a rollback tree, and executing the rollback node according to the rollback order to roll back the allocated data, wherein the rollback node is used to roll back, from a target device, data that has been allocated to that target device, and the rollback tree indicates the rollback flow of at least one rollback node; and a rollback tree updating step: in response to the candidate allocation data being allocated correctly, determining a rollback node according to the context information, adding the rollback node to the rollback tree, and updating the rollback flow of the at least one rollback node in the rollback tree based on a rule for rolling back allocated data.
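Purely as a non-limiting illustration (not the claimed implementation), the execution loop and rollback described above can be sketched as follows; all type names, device identifiers and feedback values are hypothetical, and the execution flow is simplified to a sequential list.

```typescript
// Illustrative sketch only: a simplified rendering of the execution loop and rollback above.
interface ExecutionNode {
  inputParams: string[];     // input-parameter information: types of data to allocate
  targetDevice: string;      // execution information: the allocation target
  expectedFeedback: string;  // output-parameter information: expected execution result
}

function allocate(device: string, data: unknown[]): string {
  // Hypothetical transport; a real system would send the data to the target device.
  console.log(`allocating ${JSON.stringify(data)} to ${device}`);
  return "ok";
}

function rollbackAll(executed: ExecutionNode[]): void {
  // Hypothetical rollback: undo allocations in reverse order of execution.
  for (const node of [...executed].reverse()) {
    console.log(`rolling back allocation on ${node.targetDevice}`);
  }
}

function runFlow(nodes: ExecutionNode[], dataPool: Map<string, unknown>): void {
  const executed: ExecutionNode[] = [];  // stands in for the historical execution tree
  for (const node of nodes) {            // sequential execution flow assumed for brevity
    const candidate = node.inputParams.map((key) => dataPool.get(key)); // candidate allocation data
    const feedback = allocate(node.targetDevice, candidate);
    if (feedback !== node.expectedFeedback) { // compare feedback with the output parameters
      rollbackAll(executed);                  // incorrect allocation: roll back allocated data
      return;
    }
    executed.push(node);                      // correct allocation: update the execution context
  }
}

runFlow(
  [{ inputParams: ["name"], targetDevice: "device-1", expectedFeedback: "ok" }],
  new Map<string, unknown>([["name", "Alice"]]),
);
```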
In the data allocation method according to some embodiments of the present application, acquiring the data to be allocated according to the data allocation requirement includes: determining form parameters in a visualized manner according to the data allocation requirement; generating a form for data collection based on the form parameters; and acquiring the data to be allocated by using the form.
In the data allocation method according to some embodiments of the present application, determining the form parameters in a visualized manner according to the data allocation requirement includes: presenting a form parameter input interface comprising a content parameter input control and a rendering parameter input control; acquiring the content parameters of the form from the content parameter input control, and acquiring the rendering parameters of the form from the rendering parameter input control, wherein the content parameters of the form contain information about the data the form is to collect and the rendering parameters of the form contain rendering information of the form; and determining the form parameters based on the content parameters and the rendering parameters.
In the data allocation method according to some embodiments of the present application, acquiring the data to be allocated using the form comprises: serializing information of the form into form data in a predetermined format; transmitting the form data in the predetermined format to a terminal device so as to collect the data to be allocated through the terminal device; and receiving the data to be allocated from the terminal device, wherein collecting the data to be allocated through the terminal device comprises: running the form data in the predetermined format on the terminal device to render the form, the form comprising one or more information input controls; obtaining entered data from the one or more information input controls of the form to generate a form containing the entered data; and extracting the data to be allocated from the form containing the entered data.
According to another aspect of the present application, there is provided a data distribution apparatus comprising: a first acquisition module configured to acquire a data allocation requirement of at least one target device; a second acquisition module configured to acquire data to be allocated according to the data allocation requirement; a determination module configured to determine a data allocation policy for the data to be allocated in a visualized manner, including: determining, based on the data allocation requirement and in a visualized manner, at least one execution node for data allocation, and determining an execution flow of the at least one execution node in a visualized manner, wherein each of the at least one execution node comprises input-parameter information, execution information and output-parameter information, and the execution flow indicates the execution order of each of the at least one execution node; and an allocation module configured to allocate at least a portion of the data to be allocated to the at least one target device according to the data allocation policy.
According to another aspect of the present application, there is provided a computing device comprising: a memory configured to store computer-executable instructions; and a processor configured to perform the steps of the data allocation method according to some embodiments of the present application when the computer executable instructions are executed by the processor.
According to another aspect of the present application, there is provided a computer readable storage medium storing computer executable instructions that, when executed, implement the steps of a data allocation method according to some embodiments of the present application.
According to another aspect of the present application there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of a data allocation method according to some embodiments of the present application.
In the data allocation method and apparatus of the present application, the data allocation requirement of at least one target device is first acquired, and the data to be allocated is acquired according to that requirement, so that the acquired data can fully satisfy the requirement of the target device in the application scenario. A data allocation policy for the data to be allocated is then determined in a visualized manner, so that developers do not need to rewrite program code from scratch, which reduces their workload. Finally, at least a portion of the data to be allocated is allocated to the at least one target device according to the data allocation policy; because the policy is determined visually, the allocation process is faster and more intuitive. By providing a visualized way of editing the data allocation policy, the application effectively avoids repeated development of data release or distribution flows, significantly reduces the manpower invested in such flows, and improves the efficiency of building an operation configuration system.
These and other advantages of the present application will become apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
Embodiments of the application will now be described in more detail and with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram showing a process of data allocation in the related art;
FIG. 2 illustrates an exemplary application scenario of a data allocation method according to some embodiments of the application;
FIG. 3 illustrates an exemplary flow chart of a data allocation method according to some embodiments of the application;
FIG. 4 illustrates a schematic diagram of acquiring data to be allocated according to data allocation requirements, according to some embodiments of the application;
FIG. 5 illustrates a schematic diagram of visually determining form parameters according to some embodiments of the application;
FIG. 6 illustrates a schematic diagram of visually determining rendering parameters according to some embodiments of the application;
FIG. 7 illustrates a schematic diagram of visually determining form parameters according to some embodiments of the application;
FIG. 8 illustrates a schematic diagram of a rendering generated form according to some embodiments of the application;
FIG. 9 illustrates a schematic diagram of data types of a form, according to some embodiments of the application;
FIG. 10 illustrates a schematic diagram of sub-items of a form, according to some embodiments of the application;
FIG. 11 illustrates an exemplary flow diagram for form rendering according to some embodiments of the application;
FIG. 12 illustrates an exemplary flow chart for extracting form data according to some embodiments of the application;
FIG. 13 illustrates a schematic diagram of a visualization determination of a data allocation policy, according to some embodiments of the application;
FIG. 14A illustrates a schematic diagram of visually determining an execution node according to some embodiments of the application;
FIG. 14B illustrates a schematic diagram of visually determining an execution flow according to some embodiments of the application;
FIG. 15 illustrates an exemplary flow chart for distributing data according to a distribution policy according to some embodiments of the application;
FIG. 16 illustrates an exemplary flow chart for distributing data according to a distribution policy according to some embodiments of the application;
FIG. 17 illustrates an exemplary flow chart for updating a rollback tree according to some embodiments of the application;
FIG. 18 shows a schematic diagram of a rollback tree according to some embodiments of the application;
FIG. 19 illustrates an exemplary flow chart for rollback according to a rollback tree according to some embodiments of the application;
FIG. 20 illustrates an exemplary flow chart for ensuring rollback according to some embodiments of the application;
FIG. 21 illustrates a schematic diagram of a data allocation method according to some embodiments of the application;
FIG. 22 illustrates a schematic diagram of data allocation and rollback according to some embodiments of the application;
FIG. 23 illustrates an exemplary block diagram of a data distribution device according to some embodiments of the application; and
FIG. 24 illustrates an example system including an example computing device that represents one or more systems and/or devices that can implement the various methods described herein.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only; they do not necessarily include all of the contents and operations/steps, nor do the operations/steps necessarily have to be performed in the order described. For example, some operations/steps may be decomposed and others may be combined or partially combined, so the actual order of execution may change according to the actual situation.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another element. Accordingly, a first component discussed below could be termed a second component without departing from the teachings of the present inventive concept. As used herein, the term "and/or" and similar terms include all combinations of any, many, and all of the associated listed items.
Those skilled in the art will appreciate that the drawings are schematic representations of example embodiments and that the modules or flows in the drawings are not necessarily required to practice the application and therefore should not be taken to limit the scope of the application.
Before describing embodiments of the present application in detail, the concepts and related technologies involved will first be explained:
Transaction: a transaction is a sequence of database operations defined by a user or developer; these operations are either all performed or none are performed, and they form an indivisible unit of work. Transactions have four key properties (ACID for short): atomicity, consistency, isolation, and durability. Atomicity means that a transaction must be treated as an indivisible minimum unit of work: all operations in the entire transaction either all commit successfully or all fail and roll back; it is not possible for a transaction to perform only a part of its operations. Consistency means that the database always transitions from one consistent state to another consistent state. Isolation means that modifications made by one transaction are not visible to other transactions until the transaction is finally committed. Durability means that once a transaction commits, its modifications are permanently saved to the database (the modified data is not lost even if the system crashes).
JSON format: i.e., the JavaScript Object Notation format, a lightweight data exchange format. It stores and represents data in a text format that is completely independent of the programming language, based on a subset of ECMAScript (the JS specification formulated by the European Computer Manufacturers Association). Its compact and clear hierarchical structure makes JSON an ideal data exchange language: it is easy for people to read and write, easy for machines to parse and generate, and effective in improving network transmission efficiency.
JsonSchema: also written as JSON Schema; it is based on the JSON format and is used to define JSON data structures and validate JSON data content.
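Illustratively (a hypothetical, non-limiting example), a JSON Schema fragment and a conforming JSON document might look as follows in TypeScript:

```typescript
// Hypothetical JSON Schema fragment describing part of a form payload.
const itemSchema = {
  type: "object",
  properties: {
    name: { type: "string", description: "user name" },
    age: { type: "integer", minimum: 0 },
  },
  required: ["name"],
};

// A JSON document that conforms to the schema above.
const payload = JSON.parse('{"name": "Alice", "age": 30}');
console.log(itemSchema.required.includes("name"), payload.age);
```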
Low-code approach: refers to a method of developing programs automatically with little or no code.
Rollback: also referred to as transaction rollback, refers to undoing update operations that a transaction has already performed on the database. Rollback is used to undo the part that has already been executed when the complete transaction cannot be executed. Rollback is often required to restore the integrity of the database after an application, database, or system error. Rollback may include program rollback and data rollback.
Form: forms are commonly used in web pages to implement data collection. A form has three basic components: form labels, form fields, and form buttons. The form label contains the URL (Uniform Resource Locator) of the CGI (Common Gateway Interface) program used to process the form data and the method of submitting the data to the server. Form fields include text boxes, password boxes, hidden fields, multi-line text boxes, check boxes, radio buttons, drop-down selection boxes, file upload boxes, and the like. Form buttons include submit buttons, reset buttons, and general buttons; they are used to transfer data to the CGI script on the server or to cancel input, and can also be used to control other processing tasks defined by processing scripts.
Rendering: rendering in a computer drawing refers to the process of generating images from a model with software.
Serialization: serialization is the process of converting the state information of an object into a form that can be stored or transmitted. During serialization, an object writes its current state to a temporary or persistent storage area. Later, the object can be recreated by reading, i.e. deserializing, the object's state from the storage area. Conversely, a deserialization operation converts data from a storable or transmittable form back into the state information of an object.
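Illustratively (a hypothetical, non-limiting TypeScript sketch), the built-in JSON functions can serialize an object's state into a string and later recreate the object by deserialization:

```typescript
// Hypothetical example: serialize an object's state to a string, then recreate it.
const formState = { formId: "register-new-user", filled: false };

const serialized: string = JSON.stringify(formState);        // serialization
const restored = JSON.parse(serialized) as typeof formState; // deserialization

console.log(serialized, restored.formId);
```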
DOM: i.e., the Document Object Model, a standard programming interface recommended by the W3C for processing extensible markup language. When a web page is loaded, the browser creates a Document Object Model of the page. The HTML DOM model is structured as an object tree.
The Database (Database), which can be considered as an electronic filing cabinet, is a place for storing electronic files, and users can perform operations such as adding, inquiring, updating, deleting and the like on the data in the files. A "database" is a collection of data stored together in a manner that can be shared with multiple users, with as little redundancy as possible, independent of the application.
Input parameters (Input Parameters): parameters passed when calling a function; their values are required by the called function.
Output parameters (Output Parameters): parameters used to return values from the called function to the calling function; their values are required by the calling function.
API (Application Programming Interface): an application program interface, also known as an application programming interface, is a convention for the joining of different components of a software system. The primary purpose of the API is to provide the application and developer with the ability to access a set of routines without having to access source code or understand the details of the internal operating mechanisms. Software that provides the functionality defined by an API is referred to as an implementation of this API. An API is an interface and therefore an abstraction.
Assertion (assertion): an assertion is a piece of first-order logic in a program (e.g., a logical predicate whose result is true or false) intended to express and verify the result expected by the software developer: when the program executes to the position of the assertion, the corresponding assertion should be true. If the assertion is not true, the program aborts execution and gives an error message.
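Illustratively (a hypothetical, non-limiting TypeScript sketch of an assertion helper; it is not part of the application's implementation):

```typescript
// Hypothetical assertion helper: aborts execution with an error message when the condition is false.
function assert(condition: boolean, message: string): asserts condition {
  if (!condition) {
    throw new Error(`Assertion failed: ${message}`);
  }
}

const allocatedCount = 3;
assert(allocatedCount > 0, "at least one data item should have been allocated");
console.log("assertion held, execution continues");
```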
First, how to distribute data according to scene requirements in the related art is described with reference to fig. 1.
Fig. 1 is a schematic diagram showing a process of data allocation in the related art. Four stages of data allocation are shown: acquiring the requirement of the target device, acquiring the data, allocating the data, and rolling back the data. First, a developer needs to determine the data allocation requirement of the target device according to the application scenario. For example, in the scenario of "distributing materials to retirees", information such as a person's age and name needs to be allocated to at least one target device, and these target devices cooperate, based on that information, to guide the distribution of materials.
It can be seen that, since the data allocation requirements are tied to an application scenario, the data required by the devices and the policy for allocating that data will typically change when the application scenario changes. The developer then has to redesign the data allocation policy to meet the latest data allocation requirements. For example, when the application scenario changes, the steps of "determining the form", "submitting form data", "allocating data" and so on in Fig. 1 all need to be redesigned. In the related art, a developer designs a data allocation policy mainly by writing program code for a specific application scenario. This makes the resulting policy poorly portable: even if the application scenario does not change much, developers have to rewrite the program code from scratch. The related art therefore suffers from poor portability, extensive duplicated development, low development efficiency, and inefficient and unsafe data rollback.
Fig. 2 illustrates an exemplary application scenario 200 of a data allocation method according to some embodiments of the application. The application scenario 200 may include a server 210, a terminal device 220, a terminal device 230, and a target device 240. The server 210, the terminal device 230 and the target device 240 are communicatively coupled together through a network 250. The server 210 and the terminal device 220 are coupled through an interface; alternatively, they may also be coupled together through the network 250. The network 250 may be, for example, a Wide Area Network (WAN), a Local Area Network (LAN), a wireless network, a public telephone network, an intranet, or any other type of network known to those skilled in the art.
In this embodiment, the server 210 first obtains the data allocation requirements of at least one target device. Then, the server 210 initiates the collection of data to be distributed by the terminal 230 through the network 250 according to the data distribution requirements, and acquires the data collected by the terminal 230 through the network 250. The server 210 then visually determines, via the terminal 220, a data allocation policy for the data to be allocated based on the data allocation requirements. Finally, the server 210 allocates at least a portion of the data to be allocated to at least one target device 240 according to the data allocation policy.
It should be noted that server 210, terminal 220, terminal 230, and target device 240 may each comprise media and/or devices capable of persistently storing information, and/or tangible storage. Thus, computer-readable storage media refers to non-signal bearing media. Computer-readable storage media include hardware such as volatile and nonvolatile, removable and non-removable media and/or storage devices implemented in methods or techniques suitable for storage of information such as computer-readable instructions, data structures, program modules, logic elements/circuits or other data. As understood by those of ordinary skill in the art, the server 210 may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein. The server 210 may present the data allocation policy to be determined to a developer through the terminal device 220, and interact with the developer to implement visual determination of the development policy.
Terminal devices 220, 230 may be any type of mobile computing device, including mobile computers (e.g., personal Digital Assistants (PDAs), laptop computers, notebook computers, tablet computers, netbooks, etc.), mobile phones (e.g., cellular phones, smartphones, etc.), wearable computing devices (e.g., smartwatches, headsets, including smart glasses, etc.), or other types of mobile devices. In some embodiments, terminal devices 220, 230 may also be stationary computing devices, such as desktop computers, gaming machines, smart televisions, and the like. Further, in the case where the application scenario 200 includes a plurality of terminal devices 230, the plurality of terminal devices 230 may be the same or different types of computing devices.
As shown in fig. 2, terminal devices 220, 230 may include a display screen and a terminal application that may interact with a terminal user via the display screen. The terminal application may be a local application, a Web page (Web) application, or an applet (LiteApp, e.g., a cell phone applet, a WeChat applet) that is a lightweight application. In the case where the terminal application is a local application program that needs to be installed, the terminal application may be installed in the terminal device 220, the terminal device 230. In the case where the terminal application is a Web application, the terminal application may be accessed through a browser. In the case that the terminal application is an applet, the terminal application may be directly opened on the user terminal 220, 230 by searching for related information of the terminal application (e.g., name of the terminal application, etc.), scanning a graphic code of the terminal application (e.g., bar code, two-dimensional code, etc.), etc., without installing the terminal application.
In some embodiments, the application scenario 200 described above may be a distributed system constituted by the server 210, which may constitute, for example, a blockchain system. Blockchains are novel application modes of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, encryption algorithms, and the like. The blockchain is essentially a decentralised database, which is a series of data blocks generated by cryptographic methods, each data block containing a batch of information of network transactions for verifying the validity (anti-counterfeiting) of the information and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
The blockchain underlying platform may include processing modules for user management, basic services, smart contracts, and the like. The user management module is responsible for identity information management of all blockchain participants, including maintenance of public and private key generation (account management), key management, maintenance of corresponding relation between the real identity of the user and the blockchain address (authority management) and the like, and under the condition of authorization, supervision and audit of transaction conditions of certain real identities, and provision of rule configuration (wind control audit) of risk control; the basic service module is deployed on all block chain node devices, is used for verifying the validity of a service request, recording the service request on a storage after the effective request is identified, for a new service request, the basic service firstly analyzes interface adaptation and authenticates the interface adaptation, encrypts service information (identification management) through an identification algorithm, and transmits the encrypted service information to a shared account book (network communication) in a complete and consistent manner, and records and stores the service information; the intelligent contract module is responsible for the registration and release of contracts, the triggering of contracts and the execution of contracts, developers can define contract logic through a certain programming language, release the contract logic to a blockchain (contract registration), call keys or other event triggering execution according to the logic of contract clauses to complete the contract logic, and simultaneously provide functions of updating and logging off the contract.
The platform product service layer provides basic capabilities and implementation frameworks of typical applications, and developers can complete the blockchain implementation of business logic based on the basic capabilities and the characteristics of the superposition business. The application service layer provides the application service based on the block chain scheme to the business participants for use.
Fig. 3 illustrates an exemplary flow chart of a data allocation method 300 according to some embodiments of the application. The illustrated method 300 may be implemented at a server (e.g., may be at the server 210 illustrated in fig. 2). In other embodiments, the data distribution method according to the application may also be performed by a server and a terminal device in combination. As shown in fig. 3, a data allocation method according to some embodiments of the present application may include steps S310 to S340.
In step S310, the data allocation requirement of at least one target device is obtained. In some embodiments, to implement an application scenario, at least one target device needs to be allocated the data that it requires. For example, in a "register new user" scenario, the target device will need data such as the new user's name, age, gender and telephone number. In a "granting subsidies to retired employees" scenario, the target device will need data such as the user's name, age and address. A specific application scenario thus generally imposes a specific data allocation requirement on the target device. To this end, the method 300 first obtains the data allocation requirement of at least one target device in step S310.
In step S320, the data to be allocated is acquired according to the data allocation requirement. For example, in the "register new user" scenario, the data allocation requirement covers the new user's name, age, gender, telephone number and so on, and these items are acquired as the data to be allocated. Similarly, in the "granting subsidies to retired employees" scenario, the data allocation requirement of the target device covers the user's name, age, address and so on, and these items are acquired as the data to be allocated. In some embodiments, to improve the efficiency of acquiring the data to be allocated, a form may be designed according to the data allocation requirement and then sent to a client (e.g. the client on the terminal 230 in Fig. 2) for the user to fill in.
In step S330, a data allocation policy for the data to be allocated is determined in a visualized manner. In a specific application scenario, at least one target device is often required to work together to achieve a specific function, so the data required by each of these devices often differs. For example, in the "register new user" scenario, some target devices require the new user's name, some require the age, some require the gender, and some require data such as the telephone number. Only after these target devices have been correctly allocated data can they function correctly and cooperate to meet the needs of the "register new user" scenario. A data allocation policy for the data to be allocated therefore needs to be determined. Step S330 determines this policy in a visualized manner, for example through a human-computer interaction interface (e.g., a web page or application interface on the terminal 220 in Fig. 2) based on the data allocation requirement. It should be noted that the visualization in step S330 is not limited to a specific terminal or server, nor to a client or web page. Because step S330 determines the data allocation policy visually, the process of determining the policy is more intuitive and concise, lengthy and complex programming work is avoided, the workload and difficulty for developers are reduced, and data allocation efficiency is improved.
In step S340, at least a portion of the data to be allocated is allocated to at least one target device according to the data allocation policy. Since the data allocation policy has been determined in step S330, the data to be allocated is allocated according to that policy in step S340. For example, in the "granting subsidies to retired employees" scenario, the data allocation policy may be determined as: if the employee's years of service exceed 30 years, the employee's name is sent to the first target device. In that case, step S340 compares, according to the data allocation policy, the years-of-service data in the data to be allocated with "30 years" and determines whether the employee's name is sent to the first target device. As another example, in the "register new user" scenario, the data allocation policy may be determined as: first allocate the new user's name data to a first target device, then allocate the new user's age data to a second target device, and finally allocate the new user's telephone number data to a third target device. Step S340 then performs, in sequence according to the data allocation policy: allocating the new user's name data to the first target device, allocating the new user's age data to the second target device, and allocating the new user's telephone number data to the third target device.
The method 300 first obtains the data allocation requirement of at least one target device, which enables the subsequent operations to satisfy, as far as possible, what the target device needs in order to implement the functions required by the specific scenario. The data to be allocated is then acquired according to the data allocation requirement, so that the acquired data can fully meet the needs of the target device in the application scenario. Next, a data allocation policy for the data to be allocated is determined in a visualized manner, so that developers do not need to rewrite program code from scratch to determine the allocation policy, which reduces their workload and improves data allocation efficiency. Finally, at least a portion of the data to be allocated is allocated to the at least one target device according to the data allocation policy; because the policy is determined visually, the data allocation process is faster and more intuitive.
In some embodiments, at step S320 of the method 300, obtaining the data to be distributed according to the data distribution requirements may be accomplished by creating a form. At this time, step S320 may include the following sub-steps.
First, a form for data collection is generated in a visualized manner according to the data allocation requirement of the target device. Optionally, this may be accomplished using the terminal 220 shown in Fig. 2, or by the server 210 and the terminal 220 in combination. For example, the server 210 obtains the form parameters through the terminal 220, which has a visual interface. Visually generating a form for data collection may include: determining form parameters in a visualized manner, and generating the form based on the form parameters.
And then, acquiring data to be distributed by using the form. Alternatively, this may be accomplished by the terminal 230, or by a combination of the server 210 and the terminal 230. As an example, the server 210 may send the form parameters to the terminal 230 such that the terminal 230 generates a corresponding form based on the form parameters and presents the form to the user through a visual interface.
Then, collection of data to be distributed based on the generated form is initiated. Alternatively, this may be obtained using terminal 230. For example, the terminal 230 presents the generated form to the user through its visual interface and prompts the user to fill in the form. After the user fills out, the terminal 230 will collect the data to be distributed from the form filled out by the user.
And finally, acquiring the collected data to be distributed. Alternatively, this may be achieved by the server 210 in combination with the terminal 230. For example, after the server 210 initiates the collection of data to be distributed by the terminal 230, the collected data to be distributed is acquired from the terminal 230 through the network 250.
FIG. 4 illustrates a schematic diagram of acquiring data to be allocated according to data allocation requirements in some embodiments. As shown in fig. 4, form parameters are first edited visually; after the form parameters are determined, serializing the form parameters into JSON data and sending the JSON data to a client; then, the client side deserializes the JSON data into a form in a reading, rendering and other modes and presents the form to a user; after the user fills out the presented form, extracting the data filled out by the user in a traversing way to serve as the data to be distributed.
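Illustratively (a hypothetical, non-limiting TypeScript sketch; the network transport between server and client is omitted), the flow of Fig. 4 can be condensed as follows:

```typescript
// Hypothetical end-to-end sketch of the flow in Fig. 4.
interface FormParam { name: string; type: string }

// 1. Form parameters edited visually are serialized to JSON.
const editedParams: FormParam[] = [{ name: "name", type: "string" }, { name: "age", type: "int" }];
const wireData = JSON.stringify(editedParams);

// 2. The client deserializes the JSON and renders a form from it.
const receivedParams = JSON.parse(wireData) as FormParam[];
const renderedForm = receivedParams.map((p) => `${p.name} (${p.type})`);

// 3. After the user fills in the form, the entered values are extracted by traversal.
const userInput = new Map<string, string>([["name", "Alice"], ["age", "30"]]);
const dataToAllocate = receivedParams.map((p) => ({ field: p.name, value: userInput.get(p.name) }));

console.log(renderedForm, dataToAllocate);
```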
In some embodiments, the form parameters are determined visually, and may include the following substeps.
First, a form parameter input interface is presented, the form parameter input interface including a content parameter input control and a rendering parameter input control. Alternatively, the form parameter input interface may be located in the terminal 220 in FIG. 2. The form parameter input interface may be an interactive interface of a client on the terminal 220, or may be a web page interface on the terminal 220, which is not limited herein. The form parameter input interface comprises a content parameter input control and a rendering parameter input control which are respectively used for acquiring the content parameter and the rendering parameter of the form.
Then, the content parameters of the form are acquired from the content parameter input control, and the rendering parameters of the form are acquired from the rendering parameter input control. Wherein the content parameters of the form contain information of the data that the form is to collect, such as a description of the use of the form, the type of data that the form is to collect, etc. The rendering parameters of the form are used to describe the rendering information of the form, such as the number of items of the form, whether the form is a list-type form, the size of the form, etc. Finally, form parameters are determined based on the acquired content parameters and rendering parameters.
For example, in a "register new user" scenario, where the data allocation requirements are data of the new user's name, age, gender, phone number, etc., form parameters may be designed according to these requirements to generate a corresponding form to collect the data. Table I shows exemplary content parameters of a form designed according to data allocation requirements in a "register new user" scenario. Table II shows exemplary content parameters and rendering parameters for a form designed according to data allocation requirements in a "register new user" scenario.
As can be seen from table I, in the "register new user" scenario, according to the data allocation requirement of the target device, the form for collecting the data to be allocated needs to have the form items of user name, age, address, member, etc.
As can be seen from Table II, in the "register new user" scenario, according to the data allocation requirement of the target device, the form parameters need rendering parameters such as rendering attributes in addition to the content parameters shown in Table I. The contents of Table II are explained below. The "field to be entered" refers to what data the form will collect, e.g., name, address, telephone number, etc. The "type" of a form item defines the type of data the form will collect; common types are the int type (integer), the string type (character string) and the object type. The object type is a nested form type composed of sub-items of the string, int and object types; for example, address information is a nested form type comprising nationality, province and city of the string type. The array.object type represents a nested list-type form, which is likewise composed of sub-items of the string, int and object types; it differs from the object type in that multiple rows of data can be entered. For example, a user may have several family members, so the family attribute can use the array.object type. The "Chinese description" gives the name of the data to be entered in the form, and the "remark description" adds an explanation of the entered data on top of the Chinese description to improve readability. In some scenarios, a form item can be set as not required; in the above example, the family member's mobile phone number is set as not required, so when the form is later filled in, entering the mobile phone number is not mandatory. The "rendering attribute" describes the rendering parameters of the form.
FIG. 5 illustrates a schematic diagram of visually determining form parameters, according to some embodiments. As shown in Fig. 5, the form parameters shown in Table II may be determined in a visualized manner by presenting a form parameter input interface. Fig. 6 illustrates visually determining rendering parameters, in particular acquiring the rendering parameters of the nationality item in Table II from the rendering parameter input control. Fig. 7 illustrates visually determining form parameters, in particular acquiring the form parameters of the province item in Table II using the form parameter input interface. As can be seen from Figs. 5, 6 and 7, through visual editing, the rendering attribute can specify whether a form item has a default value or a fixed value, and its input method during dynamic rendering (text box, drop-down box). For example, in the embodiment shown in Fig. 6, "address information - nationality" may be given the default value "China"; in the embodiment shown in Fig. 7, the "province" item may be set to be entered through a drop-down box. Thus, once the data allocation requirements of Table II have been determined, they can be converted in a visualized manner into the key information defining the form, i.e. the input fields and their types, the Chinese description and remark information of the fields, whether they are required, the rendering attributes, and so on. As an example, the visual editing interface may be an operation platform on a web management side or may be a client, which is not limited here.
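Illustratively (a hypothetical, non-limiting TypeScript sketch; the field names are assumptions rather than the application's actual data model), one form item carrying the information discussed above might be modeled as:

```typescript
// Hypothetical description of one form item as edited on the visual interface.
type FieldType = "string" | "int" | "object" | "array.object";

interface FormItem {
  field: string;            // field to be entered, e.g. "nationality"
  type: FieldType;          // data type of the entered value
  label: string;            // Chinese description shown to the user
  remark?: string;          // extra remark improving readability
  required: boolean;        // whether the item must be filled in
  render: {                 // rendering attribute
    widget: "textbox" | "dropdown";
    defaultValue?: string;  // e.g. "China" for the nationality item
    options?: string[];     // options for a drop-down box, e.g. provinces
  };
  children?: FormItem[];    // sub-items for object / array.object types
}

const nationality: FormItem = {
  field: "nationality",
  type: "string",
  label: "国籍",
  required: true,
  render: { widget: "textbox", defaultValue: "China" },
};
```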
In some embodiments, acquiring the data to be allocated using the form includes: serializing information of the form into form data in a predetermined format; transmitting the form data in the predetermined format to a terminal device so as to collect the data to be allocated through the terminal device; and receiving the data to be allocated from the terminal device. Collecting the data to be allocated through the terminal device includes: running the form data in the predetermined format on the terminal device to render the form, the form comprising one or more information input controls; obtaining entered data from the one or more information input controls of the form to generate a form containing the entered data; and extracting the data to be allocated from the form containing the entered data. As an example, after the form parameters have been edited visually, they may be stored in a database in JSON format. This JSON data describing the form can render a fully presented form at run time. The entry items of the form, the constraints on the entry items and the input method of the entry items are all edited on the visual interface. Fig. 8 illustrates a schematic diagram of a rendered form according to some embodiments. As shown in Fig. 8, the terminal renders a form for collecting the data to be allocated in the "register new user" scenario from the acquired form parameters of Table II. Optionally, generating the form based on the form parameters may be initiated by the server 210, which sends the form parameters to the terminal 230 over the network 250 and initiates the terminal 230 to render the form based on the received form parameters.
In some embodiments, initiating collection of data to be allocated based on the generated form may include: initiating a form generated by presentation, and acquiring a data form containing data to be distributed; initiating a traversal data form, and collecting data to be distributed; and acquiring the collected data to be distributed. As an example, initiating collection of data to be allocated based on the generated form may be implemented with server 210 initiating terminal 230 collecting data to be allocated. At this time, after presenting the generated form to collect the data to be distributed, the terminal 230 may traverse the form after collecting the data to collect the data to be distributed. The server 210 then acquires the collected data to be distributed from the terminal 230 through the network 250.
In some embodiments, to facilitate the transmission and reading of form parameters, serialization and deserialization operations may be employed. For example, the form parameters may be serialized into JSON data for storage and sent to the terminal, which in turn deserializes the JSON data into form parameters. To achieve serialization, a model needs to be built for the data types that may occur. Figs. 9 and 10 illustrate schematic diagrams of form data types in some embodiments. As shown in Fig. 9, the data model of a form may take a tree structure: each form is an independent tree, and the sub-items in the form are abstracted as nodes of the tree. Child nodes may be defined within nodes of the object type and the array.object type, whereas child nodes cannot be defined under nodes of the string type or the int type. Fig. 10 illustrates the data model of a form sub-item in some embodiments. The data model of a form sub-item defines four main attributes, describing the name, constraint rules, rendering mode and default value of the form item. Based on these data models, form parameters, form data and the like can be serialized into JSON data for storage and transmission. Illustratively, Table III shows a JSON data storage structure corresponding to the data model according to some embodiments, and Table IV shows the JSON data obtained after serializing the form parameters in the "register new user" scenario.
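Purely as a hypothetical illustration (Tables III and IV are not reproduced here, and the field names below are assumptions), a serialized form tree node could look roughly like this in TypeScript:

```typescript
// Hypothetical JSON storage structure for one form tree node.
const serializedNode = {
  name: "address",                 // name of the form item
  type: "object",                  // object / array.object nodes may carry child nodes
  rule: { required: true },        // constraint rule
  render: { widget: "textbox" },   // rendering mode
  defaultValue: null,
  children: [
    {
      name: "nationality",
      type: "string",              // string / int nodes cannot carry child nodes
      rule: { required: true },
      render: { widget: "textbox" },
      defaultValue: "China",
      children: [],
    },
  ],
};

console.log(JSON.stringify(serializedNode, null, 2));
```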
FIG. 11 illustrates an exemplary flow diagram for rendering a form in accordance with some embodiments. As shown in fig. 11, after the JSON data of the form parameters is received, the JSON data is loaded first, and form nodes are created according to the storage model of the JSON data. Rendering then proceeds node by node to generate the form until no unrendered form node remains. When a form item contains sub-items, the sub-items are rendered in turn according to their storage order until no unrendered sub-item remains. In the embodiment shown in FIG. 11, the JSON data describing the form structure is parsed in depth-first traversal order. After a JSON node is obtained, it is rendered into a form layout: each attribute in the JSON node is converted, according to the protocol, into the input item name, input item configuration, rendering mode, input item default value, and the like of the sub-layout. If the JSON node has child nodes, rendering of the current node finishes only after all of its child nodes have been resolved, and each child node's sub-layout is nested within the parent node's layout. A tree-shaped form memory structure is finally generated. Illustratively, the method shown in fig. 11 can produce a form rendering result such as that shown in fig. 8.
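A rough sketch of such a depth-first rendering pass is given below, reusing the FormNode type and deserializeForm helper from the earlier sketch; the Layout type and its fields are likewise illustrative assumptions rather than the actual rendering protocol.

```typescript
// Depth-first rendering sketch: each JSON node becomes a sub-layout, and a node is
// finished only after all of its children have been rendered and nested inside it.
interface Layout {
  label: string;           // input item name
  widget: string;          // rendering mode of the input item
  defaultValue?: unknown;  // input item default value
  children: Layout[];      // sub-layouts of child nodes
}

function renderNode(node: FormNode): Layout {
  const layout: Layout = {
    label: node.meta.name,
    widget: node.meta.render,
    defaultValue: node.meta.defaultValue,
    children: [],
  };
  for (const child of node.children ?? []) {
    layout.children.push(renderNode(child));  // child sub-layouts are wrapped in the parent layout
  }
  return layout;
}

function renderForm(formJson: string): Layout {
  return renderNode(deserializeForm(formJson));  // parse the JSON, then walk the tree depth-first
}
```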
FIG. 12 illustrates an exemplary flow chart for extracting the data to be allocated from a form according to some embodiments. As shown in fig. 12, after a data form containing the data to be allocated is acquired, a form data storage body is first created. The form memory structure is then loaded and the form items are traversed until no untraversed form item remains. While traversing a form item, the form item is loaded first; if the form item has sub-items, the sub-items are loaded in turn until none remain. Then, the value entered by the user is extracted from the DOM structure; if the value satisfies the preset required-field constraint, it is converted into JSON node data and traversal continues, otherwise data extraction ends. The data extraction algorithm shown in fig. 12 parses the form memory structure in depth-first traversal order to obtain the layout of each form sub-item. From the layout of a form sub-item, the input data bound to that sub-item is obtained through the layout's DOM structure. After all form memory nodes have been traversed, form data extraction is complete. The form rendering algorithm shown in fig. 11 and the form data extraction algorithm shown in fig. 12 are inverse processes of each other.
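The inverse pass can be sketched in the same spirit; readBoundValue below stands in for reading the value bound to a layout's DOM node and is a placeholder, not an interface defined by the present application.

```typescript
// Depth-first extraction sketch: leaf layouts yield the user-entered value,
// parent layouts assemble the values of their sub-items.
declare function readBoundValue(layout: Layout): unknown;

function extractFormData(layout: Layout): unknown {
  if (layout.children.length === 0) {
    // A required-field constraint would be checked here; extraction ends if it fails.
    return readBoundValue(layout);
  }
  const result: Record<string, unknown> = {};
  for (const child of layout.children) {
    result[child.label] = extractFormData(child);
  }
  return result;
}
```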
In some embodiments, in step S330 of the method 300, visually determining the data allocation policy of the data to be allocated may include: based on the data allocation requirements, visually determining at least one execution node for data allocation and visually determining an execution flow of the at least one execution node. For example, determining the execution nodes and the execution flow in a visual manner may be implemented through a visual interface of a web page or application program on the terminal 220. As an example, each of the at least one execution node may include parameter entering information, execution information, parameter exiting information, and the like. For example, in the scenario of "issue subsidies to retired staff", the data allocation requirement is determined as: if the employee is older than 30 years, the employee's name is sent to the first target device. In this case, an execution node for sending data to the first target device may be established in a visual manner, with the parameter entering information of the execution node being "name", the execution information being the address of the first target device, and the parameter exiting information being the feedback from the target device about the received "name" data. Meanwhile, an execution flow is visually determined for the node, namely: the node is executed if the age data is greater than 30. As an example, the execution flow may be used to indicate an execution order of each of the at least one execution node.
In some embodiments, the parameter entering information of each execution node includes a data type to be allocated corresponding to the execution node, the execution information of each execution node includes a data allocation target (such as API information) corresponding to the execution node, and the parameter exiting information of each execution node includes an execution result of the execution node. By way of example, the above information may all be obtained visually by the terminal 220.
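By way of a hypothetical sketch, an execution node carrying these three kinds of information (parameter entering information, execution information and parameter exiting information) might be modeled as follows; the field names are assumptions for illustration.

```typescript
// Hypothetical shape of an execution node: entry parameters, execution information, exit parameters.
interface ExecutionNode {
  id: string;
  inParams: { dataType: string };         // type of data to be allocated by this node
  execution: { apiUrl: string };          // data allocation target, e.g. the target device's API
  outParams: { expectedResult: string };  // expected execution result / feedback
}
```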
In some embodiments, the execution flow of the at least one execution node may include at least one of: sequentially executing at least a portion of the at least one execution node; conditionally executing at least a portion of the at least one execution node; and cyclically executing at least a portion of the at least one execution node. Alternatively, different execution flows may be adopted for different application scenarios. For example, in the scenario of "issue subsidies to retired staff", a condition on the employee's age may determine whether to execute the execution node for sending name data, which sends the employee's name to a target device that records retired staff information. In the scenario of "collecting personnel information", the execution flow may be set to sequentially execute a plurality of execution nodes, each of which sends different data to a corresponding target device. It should be noted that the execution flow of the execution nodes is not limited to one of the above flows; several flows may be combined. For example, in a more complex application scenario, some execution nodes may first be executed sequentially, execution conditions may then be set for other execution nodes, and finally a few execution nodes may be executed in a loop, as sketched after this paragraph.
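One possible way to model these three flows, again purely as an illustrative assumption, is a tagged union over the execution nodes:

```typescript
// Sequential, conditional and loop flows over execution nodes; the shape is an assumption.
type ExecutionFlow =
  | { kind: "sequential"; nodes: ExecutionNode[] }
  | { kind: "conditional"; condition: (ctx: Record<string, unknown>) => boolean; node: ExecutionNode }
  | { kind: "loop"; times: number; nodes: ExecutionNode[] };
```

A complex scenario can then be expressed as a list of such flows, mixing sequential, conditional and loop segments.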
In some embodiments, visually determining at least one execution node for data allocation based on the data allocation requirements may include the following steps. Based on the data allocation requirements, an execution node editing interface is presented, the execution node editing interface including a parameter entering input control, an execution information input control, and a parameter exiting input control for editing the at least one execution node to be generated. Alternatively, the execution node editing interface may be implemented using a web page or application program on the terminal 220. The parameter entering information, execution information, and parameter exiting information of the at least one execution node to be generated are then acquired from the parameter entering input control, the execution information input control, and the parameter exiting input control, respectively. As an example, the execution information may include API information for sending data to the target device. Finally, the at least one execution node is generated according to the acquired parameter entering information, execution information, and parameter exiting information.
FIG. 13 illustrates a schematic diagram of visually determining a data allocation policy, according to some embodiments. The left side of fig. 13 shows an embodiment of visually determining at least one execution node for data allocation, i.e., the operating principle of the execution node editing interface. As an example, the interactive interface of the execution node editing interface may be as shown in fig. 14A. As shown in fig. 14A, the execution node editing interface contains a plurality of execution nodes to be edited, and for each of these a parameter entering description, execution API information, a parameter exiting description, and the like can be added. After the parameter entering information, parameter exiting information, and execution API information have been edited, at least one execution node is generated.
In some embodiments, visually determining the execution flow of at least one execution node includes the following steps. First, an execution flow editing interface is presented, the execution flow editing interface including an execution node input control and an execution flow input control. Alternatively, the execution flow editing interface may be implemented using a web page or application program on the terminal 220. Then, a plurality of to-be-processed execution nodes selected from the at least one execution node are obtained from the execution node input control. Finally, the execution flow of the plurality of to-be-processed execution nodes is obtained from the execution flow input control. The right side of fig. 13 shows a schematic diagram of visually determining the execution flow of at least one execution node in some embodiments. As shown on the right side of fig. 13, the workflow editing interface provides an execution node input control and an execution flow input control. Some of the execution nodes generated through the execution node editing interface on the left side of fig. 13 may be selected as candidate execution nodes in the execution node input control. The execution flow of the selected execution nodes can then be acquired through the execution flow input control. As an example, the execution flow editing interface may be as shown in fig. 14B. As shown in fig. 14B, the execution flow editing interface includes an execution node input control and an execution flow input control. The execution node input control acquires, through the node numbers entered in its input box, the at least one execution node for which an execution flow is to be generated. The execution flow input control acquires the execution flow over the at least one execution node through its input box. Alternatively, the execution flow input control may offer at least three execution flows as alternatives, such as a sequential execution flow, a conditional execution flow, and a loop execution flow. Finally, the execution flow of the at least one execution node is acquired from the execution flow input control. As an example, the acquired execution flow may be serialized into a JSON storage structure and saved into a database to facilitate subsequent transmission and reading. Table V shows the storage design for the generated workflow in some embodiments; Table VI shows the storage design of the execution units within a workflow in some embodiments.
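As a sketch of such a serialized workflow, in the spirit of Tables V and VI but with field names that are assumptions rather than the actual storage design:

```typescript
// Illustrative storage shape for an edited workflow and its execution units.
interface StoredWorkflow {
  workflowId: string;
  nodes: ExecutionNode[];   // execution units within the workflow
  flows: ExecutionFlow[];   // ordering, conditions and loops over those units
}

const saveWorkflow = (wf: StoredWorkflow): string => JSON.stringify(wf);           // persist to the database
const loadWorkflow = (json: string): StoredWorkflow => JSON.parse(json) as StoredWorkflow;
```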
In some embodiments, allocating at least a portion of the data to be allocated to at least one target device according to a data allocation policy may include: and executing the at least one executing node according to the execution flow of the at least one executing node to allocate at least a part of the data to be allocated to the at least one target device. Alternatively, the operation may be performed on the server 210, at which time the server 210 may acquire a data allocation policy from the terminal 220 and allocate the data to be allocated acquired from the terminal 230 to the target device 240 according to the data allocation policy.
FIG. 15 illustrates an exemplary flow chart of a method 1500 of data allocation according to an execution policy in some embodiments. As shown in fig. 15, the method 1500 includes an initial node acquisition step S1510, a current execution node update step S1520, a candidate allocation data acquisition step S1530, a candidate allocation data allocation step S1540, a feedback acquisition step S1550, an allocation correctness determination step S1560, a context update step S1570, a rollback tree update step S1580, a rollback step S1590, a loop end step S1511, and the like.
First, in step S1510, an initial execution node is acquired from at least one execution node as a current execution node. Alternatively, the acquisition of the initial execution node from the at least one execution node may be determined from the workflow. Then, it is determined whether or not there is an unexecuted node, and if there is an unexecuted node, the process proceeds to step S1520, and if there is no unexecuted node, the process proceeds to step S1511.
In step S1520, a current execution node is determined according to the execution context. Wherein the execution context may be updated by step S1570.
In step S1530, candidate allocation data is obtained from the data to be allocated according to a constraint condition related to at least one of the parameter entering information of the current execution node and the execution context. As an example, in a sequential execution flow, candidate allocation data matching the current execution node is obtained from the data to be allocated according to the parameter entering information of the current execution node. In a conditional execution flow, whether the execution condition is met is determined according to the context information, which decides whether the current node is executed.
In step S1540, the candidate allocation data is allocated, based on the execution information of the current execution node, to the target device corresponding to the current execution node among the at least one target device. For example, if the execution information of the current execution node indicates that name data is to be sent to the first target device, the name data is sent to the first target device as candidate allocation data.
In step S1550, allocation feedback regarding candidate allocation data is acquired from the target device corresponding to the current execution node. For example, after the data to be allocated is allocated to the first target device, feedback is received from the first target device indicating that "name data has been received".
In step S1560, the received feedback is compared with the parameter exiting information of the current execution node to determine whether the candidate allocation data has been allocated correctly. For example, after the name data is sent to the first target device, the feedback "name data received" is received from the first target device. Comparing this feedback with the parameter exiting information of the execution node then shows whether the candidate allocation data was allocated correctly. For example, if the parameter exiting information of the execution node indicates "age data sent", comparing the feedback from the target device with the parameter exiting information leads to the conclusion that the candidate allocation data was not allocated correctly. If the candidate allocation data has been allocated correctly, the process proceeds to step S1570. If it has not, the process proceeds to step S1590.
In step S1570, in response to the candidate allocation data being allocated correctly, the execution context is updated based on the parameter exiting information of the current execution node, and whether any unexecuted execution node remains is determined according to the updated context and the execution flow. In some embodiments, the context updating step includes: in response to the feedback indicating that the candidate allocation data was allocated correctly, adding the current execution node to a historical execution tree, the historical execution tree indicating the historical execution order of the at least one execution node; determining the current execution progress based on the execution tree and the execution flow; and updating the context according to the current execution progress. For example, if the execution flow indicates that five execution nodes are executed sequentially, the context information is continuously updated with the parameter exiting information of each executed node during the execution of those five nodes. The current execution node updating step includes: in response to an unexecuted execution node remaining, determining the current execution node according to the execution context and returning to the candidate allocation data acquisition step; otherwise, execution of the execution flow ends. The current execution node updating step determines the current execution node from the context information. For example, if the execution flow indicates that five execution nodes are executed sequentially and the context information indicates that the first two have already been executed, the third execution node is determined as the current execution node.
In step S1511, in response to there being no unexecuted execution node, execution of the execution flow ends. The loop ending step S1511 indicates whether all execution nodes in the execution flow have finished executing, and ends the data allocation when it is determined that all execution nodes have been executed.
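A condensed sketch of the loop formed by steps S1510 through S1590 is given below; matchCandidate, allocate and commitRollback are placeholders for candidate selection, the allocation API call and the hand-off to the transaction manager, and are assumptions rather than components of the present application.

```typescript
// Condensed sketch of the execution loop of method 1500.
declare function matchCandidate(node: ExecutionNode, data: unknown[], ctx: Record<string, unknown>): unknown | undefined;
declare function allocate(node: ExecutionNode, candidate: unknown): Promise<string>;   // returns feedback
declare function commitRollback(ctx: Record<string, unknown>): Promise<void>;          // triggers rollback

async function runFlow(nodes: ExecutionNode[], toAllocate: unknown[]): Promise<void> {
  const ctx: Record<string, unknown> = {};
  for (const node of nodes) {                                   // S1510/S1520: current execution node
    const candidate = matchCandidate(node, toAllocate, ctx);    // S1530: candidate data / condition check
    if (candidate === undefined) continue;                      // condition not met, skip this node
    const feedback = await allocate(node, candidate);           // S1540/S1550: allocate and get feedback
    if (feedback === node.outParams.expectedResult) {           // S1560: compare feedback with exit params
      ctx[node.id] = { candidate, feedback };                   // S1570: update execution context
    } else {
      await commitRollback(ctx);                                // S1580/S1590: trigger rollback
      return;
    }
  }
}                                                               // S1511: ends when no node remains
```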
In step S1580, a rollback node is determined according to the context information, the rollback node is added to the rollback tree, and the rollback flow of at least one rollback node in the rollback tree is updated based on the rule that the later allocated data is rolled back first. The rollback node is used for rolling back data which is allocated to the target device from the target device, and the rollback tree comprises at least one rollback node and a rollback flow of the at least one rollback node.
In step S1590, a roll-back node and a roll-back order are determined based on the roll-back tree, and the roll-back node is executed according to the roll-back order, and roll-back of the allocated data is implemented. Alternatively, both the rollback tree and rollback nodes may be stored, transmitted, and read in a data storage format (e.g., without limitation, JSON storage structures).
FIG. 16 illustrates one embodiment of distributing data based on a distribution policy. As shown in fig. 16, execution parameters are first submitted and all the data to be allocated is acquired. Alternatively, the data to be allocated may be stored, transmitted, and read in a JSON storage structure. The stored workflow data, which contains the data allocation policy, is then loaded. Alternatively, the workflow data may be stored, transmitted, and read in a JSON storage structure. The loaded workflow data is then instantiated to obtain the data allocation policy. The execution nodes are then scheduled in turn based on the execution flow in the data allocation policy. When an execution node is scheduled, the current execution progress is determined from the execution context, and the progress is updated when the node finishes executing. Before the content of an execution node is executed, a condition check determines whether the node can be executed. For example, if the execution condition of the node is set to "execute when the age data is greater than 30 and the candidate allocation data is a name", the API is executed only when the age data contained in the context information is greater than 30 and the candidate allocation data is name data, as established by the condition check. If the condition check fails, the node may be skipped and the next node executed according to the execution flow, or the data to be allocated may be traversed until candidate allocation data satisfying the condition check is found. After the API is executed, feedback from the target device is received, at which point an assertion is executed: the feedback from the target device is compared with the parameter exiting information of the execution node to judge whether the candidate allocation data was allocated correctly. For example, if the feedback from the target device means "age data received" while the parameter exiting information of the execution node means "address data sent", executing the assertion by comparing the two yields an assertion failure, from which it can be determined that the address data was not allocated correctly. When the assertion succeeds, the candidate allocation data has been allocated correctly; the pre-execution information, post-execution entry parameters, and post-execution exit parameters of the current execution node are then all saved into the execution context data to update the context information. When the assertion fails, the candidate allocation data has not been allocated correctly; the context information of the current execution node is submitted to the transaction manager to trigger a rollback operation, thereby guaranteeing the transactionality of configuration data delivery. When no downstream execution node remains in the execution flow, data allocation ends.
FIG. 17 illustrates an embodiment of generating a rollback tree during the data allocation process. As shown in fig. 17, when an assertion succeeds, meaning that the candidate allocation data has been allocated correctly, the run-time context of the current execution node may be committed to the transaction manager to build and update a rollback tree of equal proportions to the execution tree built from the execution nodes that have finished executing. When an assertion fails, meaning that the candidate allocation data has not been allocated correctly, a rollback is committed, i.e., the context information of the current execution node is submitted to the transaction manager to trigger a rollback operation, which then proceeds according to the established rollback tree. Alternatively, the rollback tree may be stored, transmitted, and read in a JSON storage structure.
In some embodiments, executing the at least one execution node according to its execution flow and the data to be allocated, so as to allocate at least a portion of the data to the at least one target device, may further include a rollback step and a rollback tree updating step. A rollback node is used to roll back, from a target device, data that has already been allocated to it; the rollback tree includes at least one rollback node and the rollback flow of the at least one rollback node. Alternatively, both the rollback tree and the rollback nodes may be stored, transmitted, and read in a JSON storage structure. The rollback step includes, in response to the candidate allocation data being allocated incorrectly, determining rollback nodes and a rollback order based on the rollback tree, and executing the rollback nodes in that order to roll back the allocated data. The rollback tree updating step includes, in response to the candidate allocation data being allocated correctly, determining a rollback node from the context information, adding the rollback node to the rollback tree, and updating the rollback flow of the at least one rollback node in the rollback tree based on the rule that data allocated later is rolled back first. Both the rollback step and the rollback tree updating step may be performed on the server 210. As an example, in response to the candidate allocation data being allocated incorrectly, the server 210 determines rollback nodes and a rollback order based on the rollback tree and executes the rollback nodes in that order, thereby rolling back the data that has been allocated to the target device 240. As an example, in response to the candidate allocation data being allocated correctly, the server 210 determines a rollback node from the context information, adds the rollback node to the rollback tree, and updates the rollback flow of the at least one rollback node in the rollback tree based on the rule that data allocated later is rolled back first.
Fig. 18 shows a schematic diagram of a rollback tree in some embodiments. As shown in fig. 18, the rollback tree has the same proportions as the execution tree, and the rollback nodes may correspond one-to-one with the execution nodes. As an example, a rollback node may include execution information, a runtime query, rollback conditions, and an execution API. The runtime query contains the context information stored when the rollback node was established. Both the rollback tree and the rollback nodes may be stored in JSON structures; Table VII shows the storage structure of the rollback tree and Table VIII shows the storage structure of a rollback node.
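A hypothetical shape for such a rollback node and rollback tree, with illustrative field names only, is:

```typescript
// Rollback node: execution information, runtime query, rollback condition and rollback API.
interface RollbackNode {
  executionInfo: { apiUrl: string };                   // mirrors the corresponding execution node
  runtimeQuery: Record<string, unknown>;               // context stored when the node was created
  snapshot: unknown;                                   // data value before the execution node ran
  rollbackCondition: (current: unknown) => boolean;    // guards against overwriting newer data
  rollbackApi: (snapshot: unknown) => Promise<void>;   // restores the pre-allocation state
}

// Keeping the rollback tree as an ordered list is enough for the
// "later allocated, rolled back first" rule.
type RollbackTree = RollbackNode[];
```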
FIG. 19 illustrates an exemplary flow diagram for performing rollback according to a rollback tree in some embodiments. As shown in fig. 19, after rollback is committed, the JSON storage structure of the rollback tree may be read from the transaction manager to load the rollback tree, and rollback operations are then initiated on the individual rollback nodes in the order of the rollback tree until no rollback node remains unrolled back. After a rollback operation is initiated on a rollback node, whether the rollback based on that node succeeded is judged; if it failed, the rollback operation of that node is repeated until it succeeds, after which processing moves on to the next rollback node. Alternatively, when a rollback node needs to be executed repeatedly until it succeeds, the rollback node may be sent to a message queue for retry, and the message queue keeps re-executing the rollback node until the rollback succeeds. As an example, when rollback is performed, the rollback call may fail due to unexpected factors such as network jitter. In that case a certain number of system retries can be performed, and if the system retries also fail, the rollback operation can be packaged into a message queue for retry, ensuring that the rollback operation is eventually executed.
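The retry behaviour described above may be sketched as follows; enqueueRetry is a placeholder for handing the node to a message queue and is an assumption for illustration.

```typescript
// A few in-process retries, then a hand-off to a message queue that keeps retrying.
declare function enqueueRetry(node: RollbackNode): void;

async function rollbackWithRetry(node: RollbackNode, maxSystemRetries = 3): Promise<void> {
  for (let attempt = 0; attempt < maxSystemRetries; attempt++) {
    try {
      await node.rollbackApi(node.snapshot);
      return;                                  // rollback succeeded
    } catch {
      // e.g. network jitter; fall through and retry
    }
  }
  enqueueRetry(node);                          // the queue re-executes the node until rollback succeeds
}
```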
FIG. 20 illustrates an exemplary flow chart for performing rollback in some embodiments. As shown in FIG. 20, the rollback tree is first loaded at run time; after the rollback condition is confirmed to hold, rollback operations are performed in the execution order of the rollback tree, and the parameters used when performing a rollback operation must equal those used when the data was allocated. The rollback condition is verified to avoid unexpected overwriting of data by the rollback operation: a rollback node compares the data at the time the rollback is performed with the updated data stored in the context, and the rollback is performed only when they satisfy the expected constraint. When rollback is performed, the rollback API restores the current data to the snapshot state stored in the rollback node. The rollback API is used to restore the data to be rolled back, and the state of the corresponding target device, to the state before the execution node corresponding to the rollback node allocated the data. The snapshot information stores the data values prior to the execution node's operation.
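Putting the pieces together, executing the rollback tree with the condition check described for fig. 20 might look roughly as follows; readCurrentValue is a placeholder for reading the live data, and rollbackWithRetry is the sketch given above.

```typescript
// Walk the rollback tree in order (later allocated, rolled back first); skip a node when the
// live data no longer matches the recorded update, so newer data is not unexpectedly overwritten.
declare function readCurrentValue(node: RollbackNode): Promise<unknown>;

async function executeRollbackTree(tree: RollbackTree): Promise<void> {
  for (const node of tree) {
    const current = await readCurrentValue(node);
    if (!node.rollbackCondition(current)) continue;  // rollback condition not met, skip
    await rollbackWithRetry(node);                   // restore the snapshot stored in the node
  }
}
```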
FIG. 21 illustrates a schematic diagram of a data allocation and rollback method according to some embodiments. As shown in fig. 21, the data to be allocated is first acquired through an intelligent form; a scheduler then determines the allocation policy of the data to be allocated and allocates the data to the target devices based on that policy. To ensure that the data allocation process satisfies transactional requirements, transaction management updates the execution tree and the rollback tree in real time, and rollback is executed based on the rollback tree when a rollback is initiated.
As shown in fig. 21, the intelligent form first determines the form parameters using a form editor (e.g., the visual editing interface shown in figs. 5, 6, and 7), then converts the form parameters into JSON Schema data and sends the JSON Schema data to the client. The client parses the JSON Schema data and generates the form using the form renderer (the form rendering process may be as shown in fig. 11, for example). The client then presents the generated form to the user and gathers the data filled in by the user using the form data collection algorithm (the data collection process may be as shown in fig. 12, for example). Finally, the collected data is submitted to the scheduler.
After receiving the data submitted by the intelligent form, the scheduler schedules the execution nodes to allocate the data according to the stored workflow information. The workflow data may be generated by the workflow editing unit orchestrating the determined execution nodes. In some embodiments, the workflow editing unit may employ the visual execution node editing interface and workflow editing interface shown in figs. 14A and 14B. FIG. 22 illustrates a schematic diagram of scheduler operation in some embodiments. As shown in fig. 22, the work of the scheduler is divided into editing the workflow and executing the workflow. As an example, editing the workflow includes: first editing the workflow visually (e.g., using the visual execution node editing interface and workflow editing interface shown in figs. 14A and 14B), and then serializing the resulting workflow into JSON data. Executing the workflow includes: loading the workflow from its JSON data, and then scheduling the execution nodes for data allocation based on the loaded workflow.
Transaction management is used to ensure that the process in which the scheduler schedules execution nodes to implement data allocation is executed as a long transaction, and it provides the ability to roll back data allocation. As shown in fig. 21, the transaction center performs run-time management, i.e., while the scheduler runs it generates and updates the execution tree and the rollback tree that guide any later rollback, and it initiates rollback upon receipt of a rollback request. Illustratively, generating and updating the rollback tree may employ the methods shown in fig. 17 or 18.
Fig. 23 illustrates an exemplary block diagram of a data distribution device 2300 according to some embodiments of the application. The data distribution device 2300 includes a first acquisition module 2310, a second acquisition module 2320, a determination module 2330, and a distribution module 2340. The first acquisition module 2310 is configured to acquire data allocation requirements of at least one target device. The second acquisition module 2320 is configured to acquire data to be allocated according to the data allocation requirement. The determination module 2330 is configured to visually determine a data allocation policy for the data to be allocated, comprising: based on the data distribution requirement, at least one execution node for data distribution is determined in a visual mode, and an execution flow of the at least one execution node is determined in a visual mode, wherein each of the at least one execution node comprises parameter entering information, execution information and parameter exiting information, and the execution flow indicates the execution sequence of each of the at least one execution node. The allocation module 2340 is configured to allocate at least a portion of the data to be allocated to the at least one target device according to the data allocation policy. As an example, the data allocation device 2300 may be used to implement a data allocation method such as the method 300.
It should be noted that the various modules described above may be implemented in software or hardware or a combination of both. The different modules may be implemented in the same software or hardware structure or one module may be implemented by different software or hardware structures.
The apparatus 2300 may first acquire the data allocation requirements of at least one target device using the first acquisition module 2310, which enables subsequent operations to satisfy the requirements of the target device as fully as possible so as to implement the functions required by a specific scenario. Then, the second acquisition module 2320 acquires the data to be allocated according to the data allocation requirements, so that the acquired data to be allocated fully meets the needs of the target device in the application scenario. Next, the determination module 2330 determines the data allocation policy of the data to be allocated in a visual manner, so that a developer does not need to rewrite program code from scratch in order to determine the allocation policy, which reduces the developer's workload and improves data allocation efficiency. Finally, the allocation module 2340 allocates at least a portion of the data to be allocated to the at least one target device according to the data allocation policy; because the data allocation policy was determined visually, the data allocation process is faster and more intuitive.
Fig. 24 illustrates an example system 2400 that includes an example computing device 2410 that represents one or more systems and/or devices that can implement the various methods described herein. Computing device 2410 may be, for example, a server of a service provider, a device associated with a server, a system-on-chip, and/or any other suitable computing device or computing system. The data distribution apparatus 2300 described above with reference to fig. 23 may take the form of a computing device 2410. Alternatively, the data distribution device 2300 may be implemented as a computer program in the form of an application 2416.
The example computing device 2410 as illustrated includes a processing system 2411, one or more computer-readable media 2412, and one or more I/O interfaces 2413 communicatively coupled to one another. Although not shown, the computing device 2410 may also include a system bus or other data and command transfer system that couples the various components to one another. The system bus may include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. Various other examples are also contemplated, such as control and data lines.
The processing system 2411 represents functionality to perform one or more operations using hardware. Accordingly, the processing system 2411 is illustrated as including hardware elements 2414 that may be configured as processors, functional blocks, and the like. This may include implementation in hardware as application specific integrated circuits or other logic devices formed using one or more semiconductors. The hardware element 2414 is not limited by the materials from which it is formed or the processing mechanisms employed therein. For example, the processor may be comprised of semiconductor(s) and/or transistors (e.g., electronic Integrated Circuits (ICs)). In such a context, the processor-executable instructions may be electronically-executable instructions.
The computer-readable medium 2412 is illustrated as including a memory/storage 2416. Memory/storage 2416 represents memory/storage capacity associated with one or more computer-readable media. Memory/storage 2416 may include volatile media (such as Random Access Memory (RAM)) and/or nonvolatile media (such as Read Only Memory (ROM), flash memory, optical disks, magnetic disks, and so forth). The memory/storage 2416 may include fixed media (e.g., RAM, ROM, a fixed hard drive, etc.) and removable media (e.g., flash memory, a removable hard drive, an optical disk, and so forth). The computer-readable medium 2412 may be configured in a variety of other ways as described further below.
One or more I/O interfaces 2413 represent functionality that allows a user to input commands and information to computing device 2410 using various input devices, and optionally also allows information to be presented to the user and/or other components or devices using various output devices. Examples of input devices include keyboards, cursor control devices (e.g., mice), microphones (e.g., for voice input), scanners, touch functions (e.g., capacitive or other sensors configured to detect physical touches), cameras (e.g., motion that does not involve touches may be detected as gestures using visible or invisible wavelengths such as infrared frequencies), and so forth. Examples of output devices include a display device, speakers, printer, network card, haptic response device, and the like. Accordingly, the computing device 2410 may be configured in a variety of ways to support user interaction as described further below.
Computing device 2410 also includes an application 2416. The application 2416 may be, for example, a software instance of the data distribution device 2300 and implement the techniques described herein in combination with other elements in the computing device 2410.
The present application provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computing device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computing device to perform the data distribution methods provided in the various alternative implementations described above.
Various techniques may be described herein in the general context of software, hardware elements, or program modules. Generally, these modules include routines, programs, objects, elements, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The terms "module," "functionality," and "component" as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of computing platforms having a variety of processors.
An implementation of the described modules and techniques may be stored on or transmitted across some form of computer readable media. Computer-readable media can include a variety of media that are accessible by computing device 2410. By way of example, and not limitation, computer readable media may comprise "computer readable storage media" and "computer readable signal media".
"computer-readable storage medium" refers to a medium and/or device that can permanently store information and/or a tangible storage device, as opposed to a mere signal transmission, carrier wave, or signal itself. Thus, computer-readable storage media refers to non-signal bearing media. Computer-readable storage media include hardware such as volatile and nonvolatile, removable and non-removable media and/or storage devices implemented in methods or techniques suitable for storage of information such as computer-readable instructions, data structures, program modules, logic elements/circuits or other data. Examples of a computer-readable storage medium may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical storage, hard disk, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage devices, tangible media, or articles of manufacture adapted to store the desired information and which may be accessed by a computer.
"computer-readable signal media" refers to signal bearing media configured to hardware, such as to send instructions to computing device 2410 via a network. Signal media may typically be embodied in computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, data signal, or other transport mechanism. Signal media also include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
As before, the hardware elements 2414 and computer-readable medium 2412 represent instructions, modules, programmable device logic, and/or fixed device logic implemented in hardware that, in some embodiments, may be used to implement at least some aspects of the techniques described herein. The hardware elements may include integrated circuits or components of a system on a chip, application Specific Integrated Circuits (ASICs), field Programmable Gate Arrays (FPGAs), complex Programmable Logic Devices (CPLDs), and other implementations in silicon or other hardware devices. In this context, the hardware elements may be implemented as processing devices that perform program tasks defined by instructions, modules, and/or logic embodied by the hardware elements, as well as hardware devices that store instructions for execution, such as the previously described computer-readable storage media.
Combinations of the foregoing may also be used to implement the various techniques and modules herein. Accordingly, software, hardware, or program modules, and other program modules may be implemented as one or more instructions and/or logic embodied on some form of computer readable storage medium and/or by one or more hardware elements 2414. The computing device 2410 may be configured to implement particular instructions and/or functions corresponding to the software and/or hardware modules. Thus, for example, by using the computer-readable storage medium of the processing system and/or the hardware elements 2414, the modules may be implemented at least in part in hardware as modules executable by the computing device 2410 as software. The instructions and/or functions may be executable/operable by one or more articles of manufacture (e.g., the one or more computing devices 2410 and/or processing systems 2411) to implement the techniques, modules, and examples described herein.
In various implementations, the computing device 2410 may take on a variety of different configurations. For example, computing device 2410 may be implemented as a computer-like device including a personal computer, desktop computer, multi-screen computer, laptop computer, netbook, or the like. Computing device 2410 may also be implemented as a mobile appliance-type device including a mobile device such as a mobile phone, portable music player, portable gaming device, tablet computer, multi-screen computer, or the like. The computing device 2410 may also be implemented as a television-like device that includes a device having or connected to a generally larger screen in a casual viewing environment. Such devices include televisions, set-top boxes, gaming machines, and the like.
The techniques described herein may be supported by these various configurations of computing device 2410 and are not limited to the specific examples of techniques described herein. The functionality may also be implemented in whole or in part on the "cloud" 2420 through the use of a distributed system, such as through the platform 2422 described below.
Cloud 2420 includes and/or is representative of a platform 2422 for resources 2424. The platform 2422 abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud 2420. The resources 2424 may include applications and/or data that may be used when executing computer processes on servers remote from the computing device 2410. The resources 2424 may also include services provided over the internet and/or over subscriber networks such as cellular or Wi-Fi networks.
Platform 2422 may abstract resources and functions to connect computing device 2410 with other computing devices. The platform 2422 may also be used to abstract the scaling of resources so as to provide a corresponding level of scale for the demand encountered for the resources 2424 implemented via the platform 2422. Thus, in an interconnected device embodiment, the implementation of the functionality described herein may be distributed throughout the system 2400. For example, the functionality may be implemented in part on the computing device 2410 and in part by the platform 2422 abstracting the functionality of the cloud 2420.
It will be appreciated that for clarity, embodiments of the application have been described with reference to different functional units. However, it will be apparent that the functionality of each functional unit may be implemented in a single unit, in a plurality of units or as part of other functional units without departing from the application. For example, functionality illustrated to be performed by a single unit may be performed by multiple different units. Thus, references to specific functional units are only to be seen as references to suitable units for providing the described functionality rather than indicative of a strict logical or physical structure or organization. Thus, the application may be implemented in a single unit or may be physically and functionally distributed between different units and circuits.
Although the present application has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the application is limited only by the appended claims. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. The order of features in the claims does not imply any specific order in which the features must be worked. Furthermore, in the claims, the word "comprising" does not exclude other elements, and the term "a" or "an" does not exclude a plurality. Reference signs in the claims are provided merely as a clarifying example and shall not be construed as limiting the scope of the claims in any way.
It will be appreciated that in particular embodiments of the present application, data relating to user information and the like is referred to. When the above embodiments of the present application are applied to specific products or technologies, user approval or consent is required, and the collection, use and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions.

Claims (16)

1. A method of data distribution, the method comprising:
acquiring data allocation requirements of at least one target device;
acquiring data to be distributed according to the data distribution demand;
determining a data distribution strategy of the data to be distributed in a visual mode;
distributing at least one part of the data to be distributed to the at least one target device according to the data distribution strategy;
wherein the visually determining the data allocation policy of the data to be allocated includes:
based on the data distribution requirement, at least one execution node for data distribution is determined in a visual mode, and each execution node comprises parameter entering information, execution information and parameter exiting information;
and visually determining the execution flow of the at least one execution node, wherein the execution flow indicates the execution order of each execution node in the at least one execution node.
2. The method of claim 1, wherein the entry information of each execution node includes a data type to be allocated corresponding to the execution node, the execution information of each execution node includes a data allocation target corresponding to the execution node, and the exit information of each execution node includes an execution result of the execution node.
3. The method of claim 1, wherein the execution flow of the at least one execution node comprises at least one of:
a sequential execution flow indicating sequential execution of at least a portion of the at least one execution node;
a conditional execution flow indicating that at least a portion of the at least one execution node is conditionally executed; and
and a loop execution flow indicating to loop execute at least a portion of the at least one execution node.
4. The method of claim 1, wherein the visually determining at least one execution node for data allocation based on data allocation requirements comprises:
based on the data distribution requirement, presenting an execution node editing interface, wherein the execution node editing interface comprises a parameter entering input control, an execution information input control and a parameter exiting input control for editing the at least one execution node to be generated;
acquiring the parameter entering information, the executing information and the parameter exiting information of the at least one executing node to be generated from the parameter entering input control, the executing information input control and the parameter exiting input control;
generating at least one executing node according to the parameter entering information, the executing information and the parameter exiting information of the at least one executing node to be generated.
5. The method of claim 3, wherein visually determining the execution flow of the at least one execution node comprises:
presenting an execution flow editing interface, wherein the execution flow editing interface comprises an execution node input control and an execution flow input control;
acquiring a plurality of to-be-processed execution nodes selected from the at least one execution node from the execution node input control;
and acquiring the execution flows of the plurality of to-be-processed execution nodes from the execution flow input control.
6. The method of claim 1, wherein assigning at least a portion of the data to be assigned to the at least one target device according to the data assignment policy comprises:
and executing the at least one executing node according to the execution flow of the at least one executing node to distribute at least one part of the data to be distributed to the at least one target device.
7. The method of claim 6, wherein the executing the at least one executing node to allocate at least a portion of the data to be allocated to the at least one target device according to an execution flow of the at least one executing node comprises:
an initial node acquisition step: acquiring an initial execution node from the at least one execution node as a current execution node;
a candidate allocation data acquisition step: acquiring candidate allocation data from the data to be allocated according to a constraint condition related to at least one of the parameter entering information of the current execution node and the execution context;
candidate allocation data allocation step: distributing the candidate distribution data to a target device corresponding to the current execution node in the at least one target device based on the execution information of the current execution node;
feedback acquisition: obtaining allocation feedback about the candidate allocation data from a target device corresponding to the current execution node;
an allocation correctness determining step: comparing the allocation feedback with the parameter exiting information of the current execution node, and determining whether the candidate allocation data is correctly allocated;
a context updating step: in response to the candidate allocation data being correctly allocated, updating the execution context based on the parameter exiting information of the current execution node, and determining whether an unexecuted execution node exists according to the updated context and the execution flow;
The currently executing node updating step: in response to the existence of an unexecuted execution node, updating the current execution node according to the execution context, and turning to the candidate allocation data acquisition step;
a loop ending step: ending execution of the execution flow in response to the absence of an unexecuted execution node.
8. The method of claim 7, wherein the context updating step comprises:
in response to the feedback indicating that the candidate allocation data allocation is correct, adding the current execution node to a historical execution tree, the historical execution tree being used to indicate a historical execution order of the at least one execution node;
determining the current execution progress based on the execution tree and the execution flow;
and updating the context according to the current execution progress.
9. The method of claim 8, further comprising:
a rollback step: in response to incorrect allocation of the candidate allocation data, determining a rollback node and a rollback order based on a rollback tree, and executing the rollback node according to the rollback order to roll back the allocated data, wherein the rollback node is used for rolling back, from the target device, data that has been allocated to the target device, and the rollback tree indicates a rollback flow of at least one rollback node;
a rollback tree updating step: in response to the candidate allocation data being correctly allocated, determining a rollback node according to the context information, adding the rollback node to the rollback tree, and updating the rollback flow of the at least one rollback node in the rollback tree based on the rule that data allocated later is rolled back first.
10. The method of claim 1, wherein obtaining data to be allocated according to the data allocation requirements comprises:
form parameters are determined in a visual mode according to the data distribution requirements;
generating a form for data collection based on the form parameters;
and acquiring data to be distributed by using the form.
11. The method of claim 10, wherein said visually determining form parameters based on said data allocation requirements comprises:
presenting a form parameter input interface, wherein the form parameter input interface comprises a content parameter input control and a rendering parameter input control;
acquiring the content parameters of the form from the content parameter input control, and acquiring the rendering parameters of the form from the rendering parameter input control, wherein the content parameters of the form contain information of data to be collected by the form, and the rendering parameters of the form contain rendering information of the form;
Form parameters are determined based on the content parameters and the rendering parameters.
12. The method of claim 10, wherein the utilizing the form to obtain data to be allocated comprises:
serializing the information of the form into form data in a preset format;
the form data of the preset format is sent to the terminal equipment so as to collect the data to be distributed through the terminal equipment,
receiving the data to be allocated from the terminal device,
wherein the collecting, by the terminal device, data to be distributed includes:
running the form data in the preset format on the terminal equipment to render the form, wherein the form comprises one or more information input controls;
obtaining entry data from the one or more information input controls of the form to generate a form comprising entry data;
the data to be allocated is extracted from a form comprising the entered data.
13. A data distribution device, the device comprising:
a first acquisition module configured to acquire data allocation requirements of at least one target device;
the second acquisition module is configured to acquire data to be distributed according to the data distribution requirement;
a determination module configured to visually determine a data allocation policy for the data to be allocated, comprising: based on the data distribution requirement, at least one execution node for data distribution is determined in a visual mode, and each execution node comprises parameter entering information, execution information and parameter exiting information; and visually determining an execution flow of the at least one execution node, the execution flow indicating an execution order of each of the at least one execution node;
and an allocation module configured to allocate at least a portion of the data to be allocated to the at least one target device according to the data allocation policy.
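For illustration only, the policy produced by the determination module can be pictured as execution nodes carrying parameter entering information, execution information and parameter exiting information, run in the order given by the execution flow; the names ExecutionNode and run_execution_flow are assumptions of this sketch, not terms defined by the claims.

from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Optional


@dataclass
class ExecutionNode:
    name: str
    entry_params: Dict[str, Any]                          # parameter entering information
    execute: Callable[[Dict[str, Any]], Dict[str, Any]]   # execution information
    exit_params: Optional[Dict[str, Any]] = None          # parameter exiting information


def run_execution_flow(nodes: List[ExecutionNode], order: List[str]) -> None:
    # Execute each node in the order prescribed by the execution flow and keep
    # its output as the node's parameter exiting information.
    by_name = {node.name: node for node in nodes}
    for name in order:
        node = by_name[name]
        node.exit_params = node.execute(node.entry_params)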
14. A computing device, comprising:
a memory configured to store computer-executable instructions; and
a processor configured to perform the method according to any one of claims 1 to 12 when the computer-executable instructions are executed by the processor.
15. A computer-readable storage medium storing computer-executable instructions which, when executed, implement the method of any one of claims 1 to 12.
16. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 12.
CN202210608443.3A 2022-05-31 2022-05-31 Data distribution method, device, computing equipment and storage medium Pending CN117193740A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210608443.3A CN117193740A (en) 2022-05-31 2022-05-31 Data distribution method, device, computing equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210608443.3A CN117193740A (en) 2022-05-31 2022-05-31 Data distribution method, device, computing equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117193740A (en) 2023-12-08

Family

ID=89003961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210608443.3A Pending CN117193740A (en) 2022-05-31 2022-05-31 Data distribution method, device, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117193740A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination