CN117407041A - Docking method, electronic device, and storage medium - Google Patents

Docking method, electronic device, and storage medium

Info

Publication number
CN117407041A
CN117407041A
Authority
CN
China
Prior art keywords
task
client
task flow
configuration file
docking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311436165.9A
Other languages
Chinese (zh)
Inventor
邢志辉
蔡仲彪
王云腾
刘均胜
何红杰
莫元武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
eBaoTech Corp
Original Assignee
eBaoTech Corp
Application filed by eBaoTech Corp
Priority to CN202311436165.9A
Publication of CN117407041A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/70Software maintenance or management
    • G06F8/71Version control; Configuration management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Computer Security & Cryptography (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Stored Programmes (AREA)

Abstract

The embodiment of the application provides a docking method, an electronic device and a computer readable storage medium, which are applied to a server side and can set a target configuration file for docking between a first client of a service system and a second client of a target system. The generation process of the target configuration file comprises the following steps: generating a docking task flow of the target configuration file based on the acquired configuration file, selecting corresponding atomic components from a preset atomic component library according to the docking task flow, mounting the corresponding atomic components into the task nodes of the docking task flow to obtain a second task flow, determining an access interface of the target configuration file according to the docking task flow, configuring the second task flow into an access port of the target system, and packaging the second task flow as the target configuration file. When the first client needs to dock with the second client, a call request sent by the first client or the second client is acquired to call the access interface, and the docking of the first client and the second client can be achieved.

Description

Docking method, electronic device, and storage medium
Technical Field
The invention relates to the technical field of intelligent terminals, in particular to a docking method, electronic equipment and a computer readable storage medium.
Background
In the digital transformation of the insurance field, there is a need to connect different ecosystem partners. For example, a large insurance company may have a stable insurance business system, while the ecosystem partners around it may come from different industries (e.g., e-commerce, retail, automotive, banking, post office, car rental, etc.). When the system of an ecosystem partner is docked with the insurance business system, docking is often difficult because the two systems are configured differently. For example, if the partner's data file format differs from that of the insurance business system, information interaction between the business system and the partner's system becomes difficult, the transmitted files are hard for the receiver to read, and the docking eventually fails.
When an insurance company and an ecosystem partner dock their systems, each side generally has its own data format rules, and docking requires that one side's system be mapped to the data structure of the other side's system. It will be appreciated that the weaker party usually has to comply with the other party's data rule format. For example, if a small insurance company asks a large bank to sell products on its behalf, the small insurance company needs to follow the data rule format of the large bank. Conversely, where a large insurance company docks with a small travel company that helps sell its insurance products, the small travel company may be required to follow the data rule format of the large insurance company. It is usually possible to adapt to the partner's system by hard coding, but this requires modifying one's own system code, and adjusting system code by hard coding may affect system stability in ways that are hard to predict. Therefore, when both parties insist on their own data rule formats, it is often difficult to reach a consensus on the docking policy, which makes docking and management between systems very complex.
Disclosure of Invention
The embodiment of the application provides a docking method, an electronic device and a computer readable storage medium, which are applied to a server side and can set a target configuration file for docking between a first client of a service system and a second client of an access system. The generation process of the target configuration file comprises the following steps: generating a docking task flow of the target configuration file based on the acquired configuration file, selecting corresponding atomic components from a preset atomic component library according to the docking task flow, mounting the corresponding atomic components into the task nodes of the docking task flow to obtain a second task flow, determining an access interface of the target configuration file according to the docking task flow, configuring the second task flow into an access port of the target system, and packaging the second task flow as the target configuration file. When the first client needs to dock with the second client, a call request sent by the first client or the second client is acquired to call the access interface, and the docking of the first client and the second client can be achieved.
In a first aspect, an embodiment of the present application provides a docking method, applied to a server, where the method includes: acquiring a configuration file, and determining a first task flow according to the acquired configuration file; determining an atomic component corresponding to the task content of the first task flow from a preset atomic component library, and mounting the atomic component into the first task flow to obtain a second task flow and an access address of the second task flow; packaging the second task flow as a target configuration file; and acquiring a call request of the first client or the second client to the target configuration file, and completing the docking of the first client and the second client.
That is, the server side acquires a configuration file, determines a first task flow (namely, the docking task flow) according to the configuration file, then selects, from a preset atomic component library, the atomic components corresponding to the docking tasks in the first task flow, and mounts the atomic components into the first task flow, so that the mounted first task flow contains an atomic component capable of executing each task node; a second task flow (namely, the target task flow) is thereby obtained, and an access address of the second task flow is configured. The second task flow is encapsulated as a target configuration file, for example a target configuration file in JSON format, in which the data information required by the whole docking process is described.
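For illustration only, a minimal sketch of what such a JSON-format target configuration file could look like; all field and component names below are hypothetical and are not taken from the application:

```python
import json

# Hypothetical structure of a packaged target configuration file.
# The application only states that the file describes the task nodes,
# the mounted atomic components and the exposed access interface.
target_profile = {
    "name": "broker-to-rating-docking",
    "accessInterface": "/api/v1/docking/broker-acord",   # single exposed endpoint
    "taskFlow": [
        {"node": 1, "task": "validate",  "component": "BrokerACORDValidator"},
        {"node": 2, "task": "transform", "component": "BrokerACORDConverter"},
        {"node": 3, "task": "route",     "component": "RouteToRatingAPI",
         "endpoint": "https://example.invalid/rating-api"},  # placeholder address
        {"node": 4, "task": "validate",  "component": "RatingResponseValidator"},
        {"node": 5, "task": "transform", "component": "RatingResponseConverter"},
    ],
}

print(json.dumps(target_profile, indent=2))
```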
In some embodiments, the access address of the second task flow may be configured as a single access interface exposed by the second task flow, and the docking task corresponding to the second task flow may be loaded by calling the single access interface, so as to run the docking task to complete docking between the first client and the second client.
Therefore, in the process of obtaining the target configuration file, only the needed atomic components are selected from the preset atomic component library, no online programming is required, and the difficulty of obtaining the docking file is reduced. In addition, no code needs to be added locally to the system terminals to be docked, the system stability of the systems to be docked is not affected, and no local storage or computing resources are occupied; the docking method is simple to implement and can be applied to docking between systems carried by lightweight terminals (such as wearable devices).
In a possible implementation manner of the first aspect, acquiring a configuration file and determining a first task flow according to the acquired configuration file includes: acquiring a configuration file, wherein the configuration file is used for representing the docking flow of the first client and the second client; and determining the first task flow according to the acquired configuration file.
I.e. the configuration file may be configuration data that the user sends to the server via the client for designing the docking procedure.
In some embodiments, the user may determine the docking task flow required by the system to be docked by answering the questionnaire, and then store the docking task flow as a configuration file in the server.
In one possible implementation manner of the first aspect, determining an atomic component corresponding to the task content of the first task flow from a preset atomic component library, and mounting the atomic component to the first task flow to obtain a second task flow, where the method includes: selecting one or more first atomic components from a preset atomic component library according to task content of each task node in the first task flow; and mounting one or more first atomic components into each task node in the first task stream to obtain a second task stream.
That is, a preset atomic component library may be stored in the server. The preset atomic component library may provide a plurality of atomic components, each atomic component being a component corresponding to a single minimum function, and the docking function corresponding to the docking task flow can be implemented by combining atomic components. Furthermore, the docking task flow can be split into a plurality of task nodes, each task node corresponding to a respective docking function, and each docking function can be obtained by combining at least one atomic component. Therefore, the server only needs to select the atomic components corresponding to the first task flow from the preset atomic component library and mount the selected atomic components into the first task flow in order, so that the docking functions to be executed by the first task flow are implemented through at least one atomic component.
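As an illustration of this mounting step, a minimal sketch under the assumption of a hypothetical component registry; none of these names appear in the application:

```python
from typing import Callable, Dict, List

# Hypothetical preset atomic component library: one callable per minimum function.
ATOMIC_LIBRARY: Dict[str, Callable[[dict], dict]] = {
    "validate": lambda data: data,    # data rule checking (stub)
    "transform": lambda data: data,   # data format conversion (stub)
    "route": lambda data: data,       # data routing / transmission (stub)
}

def mount_components(first_task_flow: List[str]) -> List[Callable[[dict], dict]]:
    """Select the component matching each task node and mount it in order,
    producing the second (executable) task flow."""
    return [ATOMIC_LIBRARY[task] for task in first_task_flow]

second_task_flow = mount_components(["validate", "transform", "route"])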
In one possible implementation manner of the first aspect, mounting one or more first atomic components into each task node in the first task flow to obtain a second task flow includes: detecting a drag operation of a user on an instantiated first atomic component, and modifying attribute parameters of the first atomic component corresponding to the drag operation according to the drag operation; and determining the second task flow based on the modified one or more first atomic components.
In some embodiments, a visual task flow editing interface may be provided on the terminal used by the user to develop or maintain the interface. A preset atomic component library area is provided in the editing interface and contains icons corresponding to instantiated atomic components. The user can configure the attribute parameters of an instantiated atomic component by dragging, clicking and other operations, and visually splice the icon of the instantiated atomic component into each task node of the first task flow.
In a possible implementation manner of the first aspect, the determining manner of the access address of the second task flow includes: the access interface of the second task stream is configured based on the first atomic component within the second task stream.
The interface endpoint exposed by the second task flow is an interface call address of the docking request, and can be used for receiving data transmitted from the access party, for example, can be used for acquiring policy data. In some embodiments, an externally exposed single access interface may be configured for one or more access addresses corresponding to one or more first atomic components in the second task flow, so that a user may call the single access interface to access one or more first atomic components in the second task flow, and operate the second task flow to implement system docking.
In a possible implementation manner of the first aspect, mounting an atomic component to the first task flow to obtain a second task flow, further includes: and configuring the first access address of the first client and/or the second access address of the second client as the access address of the related routing task in the first task stream to obtain a second task stream.
The access address of the system to be docked can be configured as the access address of the related routing task in the first task flow in the process of mounting the atomic component to the first task flow to obtain the second task flow, so that the system to be docked can be directly accessed in the process of executing the routing task, namely, the routing task can directly access the first client or the second client when the second task flow is operated.
In a possible implementation manner of the first aspect, encapsulating the second task flow as the target configuration file includes: and packaging the second task stream into a target configuration file in a JSON format.
That is, the second task flow can be packaged into a JSON-format target configuration file, which is convenient for calling.
In some embodiments, the server may create a text document (a Dockerfile) that includes all the commands and descriptions used to create an image, package the configured second task flow and all the atomic components mounted on it into a service (Docker) image, and push the image to an image repository. Then, a runtime environment site is selected, the deployment API of a preset cloud platform is called to deploy the image to the runtime site and run it, and the registration API of the gateway is called to register the access interface of the target configuration file with the gateway.
In other embodiments, when the API corresponding to the target configuration file is called and each task node in the task flow is executed (i.e., each docking step is executed), log information may be recorded, and the gateway may collect and aggregate the log information. Therefore, the middle layer can provide functions such as online configuration, verification, packaging, publishing, post-publishing verification, error checking and log inspection for the system docking interface, forming a one-stop online docking platform; no coding is required, losses in engineering links are reduced, and the inefficiency of the hard-coded docking approach is reduced.
In a possible implementation of the first aspect, the atomic components include at least: a data rule checking component, a data routing and transmission component, and a data format conversion component.
That is, the atomic components provide at least a data rule checking function, a data routing function and a data format conversion function.
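A hedged illustration of these three component types, with deliberately simplified logic; the real rule sets, field mappings and endpoints are configuration-driven and are not specified in the application:

```python
import requests  # assumed available; any HTTP client would do

def check_rules(data: dict) -> dict:
    """Data rule checking: reject payloads that miss mandatory fields."""
    if "policyHolder" not in data:   # illustrative rule only
        raise ValueError("missing mandatory field: policyHolder")
    return data

def convert_format(data: dict) -> dict:
    """Data format conversion: map one party's field names to the other's."""
    mapping = {"policyHolder": "insured", "premiumAmount": "premium"}  # illustrative
    return {mapping.get(k, k): v for k, v in data.items()}

def route(data: dict, endpoint: str) -> dict:
    """Data routing and transmission: forward the converted data and
    return the remote system's response."""
    return requests.post(endpoint, json=data, timeout=30).json()
```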
In a second aspect, embodiments of the present application further provide an electronic device, including: one or more processors; one or more memories; the one or more memories store one or more programs that, when executed by the one or more processors, cause the electronic device to perform the docking method provided by the first aspect and various possible implementations described above.
In a third aspect, embodiments of the present application further provide a computer readable storage medium, where instructions are stored on the storage medium, which when executed on a computer, cause the computer to perform the docking method provided by the first aspect and various possible implementations.
In a fourth aspect, embodiments of the present application further provide a computer program product comprising a computer program/instruction which, when executed by a processor, implements the docking method provided by the first aspect and various possible implementations described above.
Drawings
FIG. 1 illustrates a system docking schematic;
FIG. 2 illustrates a system docking scenario diagram, according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of an implementation of a docking method according to an embodiment of the present application;
FIG. 4 illustrates a scenario diagram of an intermediate layer generation target profile, according to an embodiment of the present application;
FIG. 5A illustrates a flow diagram of a method for obtaining a target profile based on an intermediate layer, according to some embodiments of the present application;
FIG. 5B illustrates a visual atomic component editing interface schematic in accordance with some embodiments of the present application;
FIG. 6 illustrates a flow diagram of a dual system docking workflow, according to an embodiment of the present application;
fig. 7 illustrates a block diagram of a server 203, according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be described in detail below with reference to the accompanying drawings and specific embodiments of the present application.
Illustrative embodiments of the present application include, but are not limited to, docking methods, electronic devices, and computer-readable storage media, among others.
It will be appreciated that the electronic device to which the present application is applicable may be a server, where the applicable server may be a leased cloud server, a physical server, a large-bandwidth server, a high-security server, a dedicated-line server, a group server, or the like. Furthermore, the applicable server may be a complex instruction set computing (complex instruction set computer, CISC) architecture server or a reduced instruction set computing (reduced instruction set computer, RISC) architecture server, which is not limited herein.
A scenario for implementing system interfacing based on hard coding is described in detail below in conjunction with fig. 1.
Fig. 1 shows a system docking schematic.
Referring to fig. 1, if the service system S-1 of an insurance company and the access system S-2 of a bank need to be docked, a hard-coded docking layer is written into the service system S-1 so that the service system S-1 conforms to the data standard of the access system S-2, allowing the service system S-1 and the access system S-2 to be docked successfully.
In some embodiments, a hard coded docking layer may also be written in the access system S-2 of the bank to access the business system S-1 of the insurance company to obtain its calculation engine to realize the agency sales insurance.
It will be appreciated that if multiple access systems need to be docked, different docking layers need to be customized for the different access systems. In addition, the code corresponding to the docking layer is often mixed with the code of the service system, if the code of the docking layer needs to be modified and maintained, the stability of the service code can be affected, and frequent docking requirement changes can cause the stability of the service system and the access system to be greatly affected. For example, if an access layer written in a hard coding mode needs to be modified, the code of the access layer needs to be pulled to a local storage of a terminal for developing a system, and after the coding is completed in the terminal, the modified access layer code is submitted, packaged and issued. It will be appreciated that the overall process of maintaining, modifying and updating the docking layer is relatively long, making the maintenance, modification and updating of the docking layer relatively costly.
In addition, if both systems of the docking wish to maintain their own data rules, it is difficult to realize the docking process.
In view of this, in order to solve the problem of the complex docking process between the service system and the access system, the embodiment of the application provides a docking method, which is applied to the server and can set a target configuration file for docking between the first client of the service system and the second client of the access system. The generation process of the target configuration file comprises the following steps: generating a docking task flow (namely, a first task flow) of the target configuration file based on the acquired configuration file, selecting corresponding atomic components from a preset atomic component library according to the docking task flow, mounting the corresponding atomic components into the task nodes of the docking task flow to obtain a second task flow, determining an access interface of the target configuration file according to the docking task flow, configuring the second task flow into an access port of the target system, and packaging the second task flow as the target configuration file. When the first client needs to dock with the second client, a call request sent by the first client or the second client is acquired to call the access interface, and the docking of the first client and the second client can be achieved. It should be appreciated that the target configuration file is an ordered set of tasks performed during the docking of the first client and the second client.
It should be appreciated that the target profile is decoupled from the code of the first client and the second client, and that the target profile can be independently written, modified, and maintained. For example, assuming that a first client of a service system and a second client of an access system are respectively provided in different terminals, referring to fig. 2, a first client 011 of the service system may be provided in the terminal 201, and a second client 021 of the access system may be provided in the terminal 202. Corresponding to the docking scenario, the first client 011 and the second client 021 may call the target configuration file 031 proposed in the embodiment of the present application through a remote communication manner to complete the docking process. And the target profile 031 may be stored in the server 203. For example, the second client 021 may call the algorithm logic of the service system S-1 corresponding to the target profile 031. Because the code of the target configuration file exists independently and is decoupled with the code of the service system client or the code of the access system client, the code of the service system or the access system is not affected when the target configuration file is modified, and the maintenance, modification and updating costs of the target configuration file are reduced.
In some embodiments, the server 203 may include a preset atomic component library, in which atomic components for implementing the above functions may be provided, for example atomic components having a data rule checking function, a data structure conversion function, a switching function, or a routing function, each component corresponding to one function; that is, a single atomic component is a component corresponding to a single minimum function, so as to facilitate the design and composition of task flows and the reuse and sharing of single functions. For example, a data rule checking function is instantiated as a reusable atomic component; when the function is reused, the checking function can be added to a task flow simply by mounting the atomic component on the task nodes of the task flow that need to reuse the checking function. When the user wants to change the data rule checking function, modifying the atomic component is enough to update all task nodes in the task flow that use the checking function, which saves time and labor.
It should be appreciated that the tasks involved in docking the first client 011 with the second client 021 can be composed from a plurality of atomic components. For example, a task flow for docking the first client 011 and the second client 021 may be determined based on the configuration file, the required atomic components may then be selected according to the specific tasks of the task flow, and the selected atomic components may be arranged and combined according to the task flow to obtain the target configuration file.
The following describes in detail the implementation flow of a docking method in the embodiment of the present application with reference to fig. 3. It can be understood that the execution subject of each step in the flowchart shown in fig. 3 may be the server 203, and the description of the execution subject of a single step will not be repeated.
S301, acquiring a configuration file.
Illustratively, the configuration file is used to define a docking task flow between the first client 011 and the second client 021, that is, define which data processing needs to be performed between the first client 011 and the second client 021 to implement the docking between the two.
In some embodiments, the configuration file may be configuration data that a user sends to the server 203 through a client used to design the docking procedure. For example, the first client 011 is an insurance business system and the second client 021 is a banking system. In the scenario where the insurance business system docks with the banking system, the server 203 may obtain the configuration file for docking from another server or from the local storage of the server 203. For example, the configuration file may include a format conversion task for converting the data format of premium data received by the insurance business system into a data format readable by the banking system. When the data format of the premium data is inconsistent with the data format specified by the banking system, the server 203 may convert the data format of the premium data into a data format readable by the banking system.
In some embodiments, the user may determine the docking task flow required by the systems to be docked by answering a questionnaire, and the docking task flow is then saved as a configuration file stored in the server 203. The questionnaire may present a number of options for the docking tasks, for example: "Is data format conversion necessary?", with option A for "yes" and option B for "no". The user's selections for the docking task flow are obtained through the questionnaire, the task flow required by the docking systems is determined based on the selections, and the task flow is saved as a configuration file. The configuration file may generate a plurality of task nodes in the docking task flow; for example, if the user selects option A ("yes") in the example above, it can be determined that the docking task flow includes a "data format conversion" task node, so as to provide a data format conversion function for the docking systems.
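For illustration, a sketch of how answers to such a questionnaire could be turned into task nodes of the docking task flow; the question keys and node names below are hypothetical:

```python
# Hypothetical questionnaire answers collected from the user (A = yes, B = no).
answers = {
    "need_rule_checking": "A",
    "need_format_conversion": "A",
    "need_routing": "A",
}

# Derive the task nodes of the docking task flow from the answers
# and save them as a configuration file on the server side.
task_nodes = []
if answers["need_rule_checking"] == "A":
    task_nodes.append("data_rule_checking")
if answers["need_format_conversion"] == "A":
    task_nodes.append("data_format_conversion")
if answers["need_routing"] == "A":
    task_nodes.append("data_routing")

config_file = {"dockingTaskFlow": task_nodes}
```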
S302, determining a docking task flow according to the acquired configuration file.
For example, the server 203 may read the obtained configuration file, and determine a docking task flow defined by the configuration file, where the docking task flow is an ordered set of docking tasks that need to be performed when the first client 011 docks with the second client 021. For example, reading the configuration file may determine each task node contained in the docking task stream, each task node corresponding to a single docking task, and may perform a portion of the docking functions.
In some embodiments, the configuration file may be a user-entered docking task flow that defines the docking tasks that need to be performed for the first client 011 and the second client 021 to dock.
In other embodiments, the configuration file may be a docking task flow pre-stored by the first client 011 for docking the second client 021.
In still other embodiments, the configuration file may be a docking task flow determined by the server 203 according to the configuration data of the first client 011 and the configuration data of the second client 021.
S303, mounting corresponding atomic components from a preset atomic component library into the docking task flow to obtain a target task flow and a corresponding access address.
For example, a preset atomic component library may be stored in the server 203, where the preset atomic component library may provide a plurality of atomic components, each atomic component is a component corresponding to a single minimum function, and a docking function corresponding to the docking task stream may be implemented by combining the atomic components.
In some embodiments, the above-mentioned docking task flow may be split into a plurality of task nodes, each task node corresponding to a respective docking function, where the docking function may be obtained by combining at least one atomic component. Therefore, the server 203 only needs to select the atomic components corresponding to the task flow from the preset atomic component library, and mount the selected atomic components into the task flow according to the task flow, so as to realize the docking function that can be executed by the task flow through at least one atomic component.
It is to be appreciated that an atomic component can have a corresponding access address, from which the access address of the docking task flow can be configured. For example, each atomic component has corresponding processing logic that needs to be accessed to implement the function to which the atomic component corresponds. Therefore, when the atomic component is mounted in a task node, the access address of the atomic component can be added into the docking task flow as an attribute parameter of the atomic component, so as to further configure the target task flow (namely, the second task flow) and the access address corresponding to the target task flow.
In some embodiments, the server 203 may configure a single access address exposed to the outside for a plurality of atomic components in the target task flow, for example, configure a single API (singular API) for obtaining access requests and an endpoint corresponding to the API for the target task flow. Then, when detecting that any system calls the single API (i.e. when accessing the endpoint corresponding to the API), the server 203 may run each task node in the docking task flow to implement the inter-system docking process.
S304, the target task flow is packaged as a target configuration file.
For example, the server 203 may package the target task stream into a target configuration file in a preset format and store it locally on the server 203. For example, the server 203 may encapsulate the target task flow into a target configuration file in JavaScript object notation (JavaScript object notation, JSON) format, within which is described the data information required for all docking procedures.
In some embodiments, the target configuration file may be stored in the server 203, so that the subsequent first client 011 or the second client 021 calls the target configuration file through a single API.
In other embodiments, server 203 may configure the access address of the docking system into the target task stream. For example, if a banking system desires access to an insurance business system to obtain computing engine related data for the insurance business system, the banking system may run the target profile by calling a single API.
S305, acquiring a call request of the first client or the second client to the target configuration file, and completing the docking of the first client and the second client.
It may be appreciated that the server 203 may receive a call request from the first client 011 or the second client 021 for an access address of the target task stream, and run a stored target configuration file based on the call request, so as to implement docking between the first client 011 and the second client 021.
In some embodiments, server 203 may run a docking engine, load the target configuration file using the docking engine, and wait for a call request from either the first client 011 or the second client 021. If the server 203 detects a call request to the singular API, the target configuration file is executed to realize the docking processing of the first client 011 and the second client 021.
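A minimal sketch of this run phase, assuming a hypothetical docking engine that loads the JSON target configuration file and executes the mounted components in order when the singular API is called; the file layout matches the earlier hypothetical sketch:

```python
import json
from typing import Callable, Dict

def load_target_profile(path: str) -> dict:
    """Load the packaged JSON target configuration file from local storage."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def handle_call_request(profile: dict,
                        components: Dict[str, Callable[[dict], dict]],
                        request_data: dict) -> dict:
    """Run every task node of the docking task flow in order for one call
    from the first or second client."""
    data = request_data
    for node in profile["taskFlow"]:
        data = components[node["component"]](data)   # execute the mounted component
    return data
```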
It can be understood that in steps S301 to S305 in the embodiment of the present application, the docking task flow is determined through the configuration file, the atomic components mounted in the docking task flow are selected from the preset atomic component library, the access address of the docking task flow is determined according to the mounted atomic components, and further the packaged target configuration file is obtained, and the docking process for the first client 011 and the second client 021 is implemented through the target configuration file. It can be appreciated that the target configuration file has a data structure expected by an access party (for example, the second client 021), can be called by the first client 011 and the second client 021, and can realize the docking between systems through a series of call chains corresponding to the encapsulated docking task flows, so that the use is convenient.
In addition, in the process of obtaining the target configuration file, only a required atomic component is selected from a preset atomic component library, online programming is not needed, and the acquisition difficulty of the docking file is reduced. In addition, any code does not need to be locally added to the terminal 201 where the first client 011 is located or the terminal 202 where the second client 021 is located, system stability of the first client 011 and the second client 021 is not affected, any local storage resource and local computing resource are not occupied, and the method is simple to realize and can be applied to butt joint between systems mounted on lightweight terminals (such as wearable devices and the like).
In some embodiments, referring to fig. 4, the server 203 may provide a middle layer 030 for implementing the steps S301 to S304, where the middle layer 030 may generate the target configuration file 031 according to the configuration file and store the target configuration file 031 in the server 203 in a package, or may issue the target configuration file 031, so that the first client 011 and the second client 021 may call the target configuration file 031 to dock. It can be understood that the middle layer 030 does not affect the first client 011 and the second client 021 participating in the docking process, and the middle layer 030 can provide functions of configuring a component, verifying and testing the component, configuring and verifying a task flow, packaging and publishing a target configuration file 031, and/or logging, i.e. the middle layer 030 provides a one-stop generation platform of the target configuration file 031 for a user, so that cost waste caused by waiting each link in software engineering management is effectively reduced. Also, the middle tier 030 is easy to interface with cloud infrastructure, such as a server cloud infrastructure, e.g., lambda.
In some embodiments, the middle layer 030 may employ cloud native language and its open source framework, while expanding the open source framework. The intermediate layer deliverables developed by intermediate layer 030 (e.g., target profiles) are small, typically no more than 20MB, and significantly reduce the occupancy of runtime resources.
The process of obtaining the target profile 031 by the middle layer 030 is described in detail below with reference to fig. 5A and 5B.
FIG. 5A is a flow chart of a method for obtaining a target profile based on an intermediate layer according to some embodiments of the present application.
It is to be understood that the execution subject of each step in the flowchart shown in fig. 5A may be the server 203 or the middle layer 030 provided by the server 203, and the description of the execution subject of each individual step will not be repeated.
S501, creating, online, a task flow for the integration task of docking the calculation engine.
It will be appreciated that server 203 or middle tier 030 provided by server 203 may create an empty task stream on-line for interfacing with the integrated tasks of the computing engine of first client 011, facilitating subsequent configuration of at least one interfacing task in the task stream based on a user entered configuration file.
In some embodiments, the created empty task stream may be instantiated as a visualized task stream icon to facilitate the mounting of the instantiated atomic component into the task stream to form a visualized docking task stream. It can be understood that the instantiated docking task stream is convenient for the user to visually add, delete, modify and search, and the user can quickly modify the docking task stream in the visual editing area provided by the server 203, so that the operation of modifying the docking task stream is intuitive and convenient.
S502, acquiring configuration files input by a user, and generating each task node of the task flow.
It will be appreciated that the configuration file may be entered by the user into the server 203 or the middle layer 030 provided by the server 203 through the first client 011, or through the second client 021. The configuration file may contain the configuration attributes and attribute parameters for docking of the first client 011, or of the second client 021. The server 203, or the middle layer 030 it provides, may determine the flow required for docking the first client 011 and the second client 021 through the configuration file, and determine each task node of the task flow based on that flow. For example, when the first client 011 accesses the second client 021, a data rule verification process is required, so the flow will contain a data rule verification step, and the server 203 or the middle layer 030 adds this step to the task flow to form a task node for data rule verification.
In some embodiments, a target profile editing application for generating a target profile may be provided within server 203, which may provide a target profile editing interface for a user within a client of a user design and development docking procedure. It should be appreciated that the user described above may be an interface developer or a system maintainer. In the target configuration file editing interface, each task node in the task stream can be instantiated and displayed in the editing area of the editing interface to form a visual task stream, so that a user can conveniently edit the target configuration file.
S503, selecting a required atomic component from a preset atomic component library, and mounting the selected atomic component into each task node.
After determining each task node of the task flow, the capability required for executing each task node may be split into at least one minimum functional unit, and a required atomic component may be selected from a preset atomic component library according to the determined minimum functional unit, and the selected atomic component may be mounted in each task node, so that at least one atomic component cooperates with a docking function required for implementing the corresponding task node. It should be understood that the task flow at this time becomes an ordered set including at least one atomic component, and the task flow can be executed by sequentially calling the atomic components, so as to implement the docking process for the first client 011 and the second client 021.
It should be appreciated that the configuration data of all atomic components has a tenant attribute and an attribute indicating whether the component is shared. Each atomic component can accept data in JSON or extensible markup language (XML) format as input data, and the output format can be either JSON or XML. For example, input data in JSON format may produce output data in JSON or XML format, and input data in XML format may likewise produce output data in JSON or XML format. It is understood that the input data of an atomic component may be any data compliant with the JSON or XML protocol, which is not limited herein.
It should be appreciated that the atomic components may provide a wide variety of functions, and the processing logic corresponding to those functions may be diverse, such as verifying input data, converting the input data format, routing input data, and so on. Atomic components with different functions produce output results corresponding to their functions. For example, the atomic component for model conversion may be a converter. In a docking scenario where the first client 011 is an insurance business system and the second client 021 is a banking system, the converter may be applied to convert an XML protocol data model formulated by the insurance data standards association (ACORD) into the JSON protocol data model required by the premium calculation engine.
It should be understood that the above purpose of converting the ACORD model into the JSON model is achieved by the converter atomic component; the protocol format between the two is changed without changing the data content of the input data. For example, if the input data contains an address whose street is A, an XML fragment of roughly the form <Address value="Address1"><Street>A</Street></Address> may be changed into the JSON form "address": "A": the delimiters and the statement expression rules change, but the attribute value is unchanged, i.e. the content is not substantially changed. A docking process for systems applying different data rule formats is thereby realized.
In other embodiments, the converter may also be used for data structure conversion and mapping of fields.
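A hedged sketch of such a converter, assuming a deliberately simplified ACORD-style XML fragment; real ACORD schemas and the premium calculation engine's JSON model are far richer:

```python
import json
import xml.etree.ElementTree as ET

ACORD_XML = "<Address value='Address1'><Street>A</Street></Address>"  # simplified fragment

def acord_xml_to_json(xml_text: str) -> str:
    """Convert a simplified ACORD-style XML address into a JSON model:
    only the protocol format changes, the attribute values do not."""
    root = ET.fromstring(xml_text)
    street = root.findtext("Street")
    return json.dumps({"address": street})

print(acord_xml_to_json(ACORD_XML))   # -> {"address": "A"}
```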
In some embodiments, during the configuration of the atomic component described above, unit testing may be accomplished through online verification by the server 203.
In some embodiments, the atomic components may include atomic components with multiple different functions, such as a converter, a verifier, a router, and the like, and the atomic components with different functions may be classified and managed through a preset atomic component library, and a unified page for maintaining the atomic components with different functions may be provided for a user, so that the user may generate and maintain the atomic components conveniently.
In some embodiments, the middle layer 030 may provide a visual task flow editing interface for the terminal used by the user to develop or maintain the interface. A preset atomic component library area is provided in the editing interface, and this area contains icons corresponding to instantiated atomic components. The user can configure the attribute parameters of an instantiated atomic component by dragging, clicking and other operations, and visually splice the icon of the instantiated atomic component into each task node of the docking task flow.
Referring to fig. 5B, a preset atomic component library area R-1 is provided on the left side of the visual atomic component editing interface R, and an atomic component editing area R-2 is provided on the right side. When the middle layer 030 detects that the user drags the icon of part 1 and the icon of part 2 from R-1 into the atomic component editing area R-2, it determines the value logic of the corresponding component configuration parameters according to operation parameters such as the track of the drag operation and the position where the operation ends, thereby determining the configuration parameters of part 1 and part 2, and sets part 1 and part 2 in the atomic component based on those configuration parameters, so as to implement the functions the atomic component needs to provide. The visual editing of instantiated atomic component icons makes it convenient for the user to maintain historical target configuration files.
It should be appreciated that the above parts are modules with finer-grained functions than atomic components, such as a part for obtaining the date, a part for comparing two numbers, a part for assignment, and the like. These parts can be used to configure the complete functions required by an atomic component.
S504, configuring an interface endpoint corresponding to the computing engine in the first client.
It should be appreciated that if only atomic components are configured within the docking task flow, only the components for executing the task nodes can be determined; the destination to which each node's execution result is pushed also needs to be configured. For example, if a task node needs to perform path switching, not only does the path switching component need to be mounted, but the target path of the path switching also needs to be determined. Therefore, the interface endpoint through which the first client 011 is accessed can be configured in the docking task flow, so that the interface endpoint of the first client 011 can be called to realize data interaction with the first client 011 when the docking task flow is executed.
For example, the server 203 may determine, according to each task node in the task flow, the interface access address of the calculation engine of the first client 011 that needs to be invoked, that is, the interface endpoint of the first client 011 to be called, so that the first client 011 can be used to perform the corresponding docking processing in the task flow. In some embodiments, when the server 203 executes the task flow, the task flow includes a docking step in which the data produced by task node 3 is routed to the first client 011 in task node 4, for example transmitting the data obtained by task node 3 to the premium calculation engine of the first client 011 to obtain the return information of the premium calculation engine interface, so as to facilitate the data rule checking processing in the subsequent task node 5. It is therefore necessary to configure the interface endpoint corresponding to the calculation engine of the first client 011 as an attribute parameter of task node 4. It should be understood that, with the interface endpoint of the premium calculation engine configured for task node 4, when the server 203 executes task node 4, the configured interface endpoint can be called to send the docking data to the premium calculation engine of the first client 011 and obtain the return information, so as to facilitate execution of the subsequent tasks and implement the inter-system docking process.
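A sketch of how the interface endpoint of the premium calculation engine could be attached to the routing task node as an attribute parameter; the endpoint URL and parameter names are placeholders, not values from the application:

```python
import requests

# Hypothetical attribute parameters of task node 4: the routing task plus
# the interface endpoint of the first client's premium calculation engine.
task_node_4 = {
    "task": "route",
    "endpoint": "https://business-system.example.invalid/premium-rating",  # placeholder
}

def run_routing_node(node: dict, data: dict) -> dict:
    """Send the data produced by the previous node to the configured endpoint
    and return the calculation engine's response for the next node."""
    return requests.post(node["endpoint"], json=data, timeout=30).json()
```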
S505, configuring an externally exposed interface endpoint of the whole target task flow.
It can be understood that the externally exposed interface endpoint of the target task flow is an interface call address of the docking request, and may be used for receiving data incoming from the access party, for example, may be used for acquiring policy data.
S506, storing the whole target task stream as a target configuration file in a JSON format.
It may be appreciated that the above target task stream may be packaged into a JSON-format target configuration file and stored locally on the server 203, so that when the first client 011 interfaces with the second client 021, the target configuration file may be run by calling the configuration API, to implement the docking process.
In some embodiments, the server 203 may create a text document (a Dockerfile) that includes all the commands and descriptions used to create an image, package the configured target task flow and all the atomic components mounted on it into a service (Docker) image, and push the image to the image repository.
S507, the target configuration file is published and registered to the gateway.
In some embodiments, after the target configuration file is packaged into an image and pushed to the image repository, a runtime environment site can be selected, the deployment API of the preset cloud platform is called to deploy the image to the runtime site and run it, and the registration API of the gateway is called to register the access interface of the target configuration file with the gateway.
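For illustration only, a sketch of this packaging and registration step; the image name, runtime site and gateway registration endpoint are all hypothetical, and only the standard docker build/push commands are assumed:

```python
import subprocess
import requests

IMAGE = "registry.example.invalid/docking/target-profile:1.0"   # placeholder image name

# Package the configured target task flow and its atomic components as a
# Docker image and push it to the image repository.
subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
subprocess.run(["docker", "push", IMAGE], check=True)

# Register the access interface of the target configuration file with the gateway
# through a hypothetical registration API.
requests.post(
    "https://gateway.example.invalid/register",   # placeholder registration endpoint
    json={"route": "/api/v1/docking/broker-acord",
          "upstream": "https://runtime-site.example.invalid/docking"},
    timeout=30,
)
```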
Because the target configuration file is registered with the gateway, the input data of the target task flow can be automatically extracted as an API request, so that a user can conveniently submit the API request online and call the API corresponding to the target configuration file, such as the singular API, through the gateway.
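A sketch of how an access party might call the singular API through the gateway once the target configuration file is registered; the URL and the simplified XML payload are placeholders:

```python
import requests

ACORD_REQUEST = "<InsuranceSvcRq><Street>A</Street></InsuranceSvcRq>"  # simplified placeholder

response = requests.post(
    "https://gateway.example.invalid/api/v1/docking/broker-acord",  # singular API behind the gateway
    data=ACORD_REQUEST,
    headers={"Content-Type": "application/xml"},
    timeout=30,
)
print(response.status_code, response.text)
```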
In some embodiments, when the API corresponding to the target configuration file is called and each task node in the target task flow is executed (i.e., each docking step is executed), log information may be recorded, and the gateway may collect and aggregate the log information. In some implementations, the server 203 can provide the user with an online query page for querying the logged information.
It will be appreciated that, through steps S501 to S507 described above, a target profile may be constructed on the server 203 or the middle tier 030 provided by the server 203 by atomic components and profiles, and the target profile may be published and registered with the gateway. At this time, the middle layer 030 can realize the functions of on-line configuration, verification, packaging, release, verification after release, error checking or log checking and the like of the system docking interface, and forms a one-stop on-line docking platform without coding, thereby reducing the loss of engineering links and reducing the inefficiency brought by a hard coding docking mode.
In addition, the API exposed by the middle layer 030 can be registered on the gateway, so that the party to be docked can complete the docking process by only configuring the endpoint calling the API, and the system codes of the two parties to be docked are not involved, thereby effectively reducing the risk of the systems of the two parties to be docked.
In some embodiments, the middle layer 030 may adopt a cloud native language and its related framework, so as to effectively reduce resource utilization, and has fast running speed and strong capacity expansion capability. For example, a multi-tenant mechanism may be supported, and reusable atomic components may be shared among tenants, meeting the many-to-many needs between ecosystems in a digital connection.
The process of interfacing an application interface is described in detail below with respect to the associated figures.
Fig. 6 illustrates a flow diagram of a dual system docking workflow, according to an embodiment of the present application. It is to be understood that the execution subject of each step in the flowchart shown in fig. 6 may be the server 203 or the middle layer 030 provided by the server 203, and the description of the execution subject of a single step will not be repeated.
Referring to fig. 6, in accordance with some embodiments of the present application, a workflow is designed for a scenario where an insurance broker system docks with a premium calculation system; the workflow may include:
S-1, receiving the input data of the insurance brokerage system through the singular API, so that the access party transmits its request information by calling the API.
It will be appreciated that this singular API is a configured target profile access interface and may be identified as insurance brokerage system ACORD singular API (BrokerACORDSingularAPI).
S-2, transmitting the received request data to an upload verifier (inbound validator), checking through the upload verifier whether the incoming data is valid, and if the verification succeeds, continuing to execute step S-3.
Wherein the upload verifier may be identified as the insurance broker system ACORD validator (BrokerACORDValidator).
S-3, converting the data protocol and the data format which are input by the access party into the data protocol and the data format which are expected by the target computing engine by using an uploading converter (inbound transformer).
For example, the upload converter is used to convert the XML-protocol request structure under the ACORD model provided by the insurance broker system into the data protocol and data format expected by the target calculation engine. The upload converter may be identified as the insurance brokerage system ACORD converter (BrokerACORDConverter).
S-4, the routing (router) component transmits the received converted data to the API provided by the calculation engine of the premium calculation system and receives the return information of the target calculation engine API. The API provided by the calculation engine of the premium calculation system may be identified as "InsureMORatingAPI", and the routing component may accordingly be identified as "Route to InsureMORatingAPI".
S-5, the download verifier (outbound validator) verifies the return information of the computing engine API, and if the return information is valid, step S-6 continues. Wherein the download verifier may be identified as "InsureMORatingVerifier".
S-6, the download converter converts the information returned by the calculation engine API into the data protocol and format expected by the insurance broker system. Wherein the download converter may be identified as "InsureMORatingConverter".
It can be seen that the accessing insurance broker system keeps its own ACORD XML request structure, which it continues to use when calling the insurance broker system ACORD singular API (i.e., the access interface of the target configuration file generated from the task flow) exposed by the target task flow.
It will be appreciated that, through steps S-1 to S-6 described above, the insurance broker system ACORD singular API passes the ACORD XML request data to the insurance broker system ACORD verifier component, which checks the legitimacy of the incoming data; if the data is illegal, the target task flow stops. If it is legal, the ACORD XML request data is passed on to the insurance broker system ACORD converter component, which converts the ACORD XML request into a request format acceptable to the premium calculation engine InsureMO Rating API. The converted data is then passed to the routing component, which routes the request in that format to the InsureMORatingAPI of the target premium calculation engine. It should be appreciated that the access address of the InsureMORatingAPI may be specified in the task flow by the configuration file. The InsureMO Rating response returned by the InsureMORatingAPI is then obtained and passed to the next validation component for validation. If the validation result is legal, the response is passed to the next converter component, converted into an ACORD XML response structure, and returned to the insurance broker system.
It follows that the data structure the insurance broker system receives from the premium calculation engine is also the data structure the insurance broker system expects. The ACORD XML request and response structures of the insurance broker system are therefore fully reused, and the insurance broker system does not need to write new code to adapt to the data format of the premium calculation engine, nor to perform data verification and data conversion. The insurance broker system only needs to add trigger logic for data interaction with the target configuration file, for example, sending the ACORD XML request to the exposed insurance broker system ACORD singular API of the target configuration file's task flow and receiving the return in ACORD XML response format. Thus, docking between systems is realized through the target configuration file, and any influence on other processing in either party's system is effectively avoided. A minimal sketch of this task flow follows below.
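The sketch below shows how the task flow of steps S-1 to S-6 could be chained once its components are instantiated. The function names and the trivial validation and conversion bodies are assumptions for illustration only; in the described method these components are selected from the preset atomic component library and mounted through the configuration file rather than hand-written.

```python
# Illustrative chaining of the six task nodes of fig. 6. Only the ordering of
# the nodes follows the description above; the bodies are placeholders.

def broker_acord_verifier(acord_xml: str) -> str:
    # S-2: inbound validation of the ACORD XML request (placeholder check).
    if not acord_xml.strip().startswith("<"):
        raise ValueError("invalid ACORD XML request")
    return acord_xml

def broker_acord_converter(acord_xml: str) -> dict:
    # S-3: convert the ACORD XML request into the format expected by the
    # premium calculation engine (placeholder conversion).
    return {"ratingRequest": {"rawAcord": acord_xml}}

def route_to_rating_api(request: dict, target_address: str) -> dict:
    # S-4: forward the converted request to the rating API at the address
    # specified in the configuration file (stubbed response here).
    return {"premium": 123.45, "currency": "USD",
            "echo": {"request": request, "target": target_address}}

def rating_verifier(response: dict) -> dict:
    # S-5: outbound validation of the engine's response (placeholder check).
    if "premium" not in response:
        raise ValueError("invalid rating response")
    return response

def rating_converter(response: dict) -> str:
    # S-6: convert the response back into the ACORD XML structure that the
    # broker system expects (placeholder conversion).
    return f"<ACORD><Premium>{response['premium']}</Premium></ACORD>"

def broker_acord_singular_api(acord_xml: str, target_address: str) -> str:
    # S-1: the singular API receives the broker system's request and then
    # drives the remaining task nodes in order.
    validated = broker_acord_verifier(acord_xml)
    converted = broker_acord_converter(validated)
    engine_response = route_to_rating_api(converted, target_address)
    checked = rating_verifier(engine_response)
    return rating_converter(checked)

print(broker_acord_singular_api("<ACORD>...</ACORD>",
                                "https://rating-engine.example/api"))
```

In the described docking platform, the same ordering is expressed declaratively in the target configuration file, so neither docking party writes code of this kind.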
Fig. 7 illustrates a block diagram of a server 203, according to an embodiment of the present application. In some embodiments, the server 203 may include one or more processors 804, system control logic 808 coupled to at least one of the processors 804, system memory 812 coupled to the system control logic 808, non-volatile memory (NVM) 816 coupled to the system control logic 808, and a network interface 820 coupled to the system control logic 808.
In some embodiments, processor 804 may include one or more single-core or multi-core processors. In some embodiments, processor 804 may include any combination of general-purpose and special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In embodiments where the server 203 acts as an enhanced Node B (eNB) 101 or a radio access network (RAN) controller 102, the processor 804 may be configured to perform the corresponding embodiments described herein.
In some embodiments, the system control logic 808 may include any suitable interface controller to provide any suitable interface to at least one of the processors 804 and/or any suitable device or component in communication with the system control logic 808.
In some embodiments, the system control logic 808 may include one or more memory controllers to provide an interface to the system memory 812. The system memory 812 may be used for loading and storing data and/or instructions. In some embodiments, memory 812 of server 203 may include any suitable volatile memory, such as a suitable Dynamic Random Access Memory (DRAM).
NVM/memory 816 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. In some embodiments, NVM/memory 816 may include any suitable nonvolatile memory, such as flash memory, and/or any suitable nonvolatile storage device, such as at least one of a hard disk drive (HDD), a compact disc (CD) drive, or a digital versatile disc (DVD) drive.
NVM/memory 816 may include a portion of the storage resources on the device on which server 203 is installed, or it may be accessed by, but not necessarily part of, the device. For example, NVM/storage 816 may be accessed over a network via network interface 820.
In particular, system memory 812 and NVM/storage 816 may each include: a temporary copy and a permanent copy of instructions 824. The instructions 824 may include: instructions that when executed by at least one of the processors 804 cause the server 203 to implement the above-described construction method. In some embodiments, instructions 824, hardware, firmware, and/or software components thereof may additionally/alternatively be disposed in system control logic 808, network interface 820, and/or processor 804.
The network interface 820 may include a transceiver to provide a radio interface for the server 203 to communicate with any other suitable device (e.g., front end module, antenna, etc.) over one or more networks. In some embodiments, the network interface 820 may be integrated with other components of the server 203. For example, the network interface 820 may be integrated with at least one of the processor 804, the system memory 812, the NVM/storage 816, and a firmware device (not shown) having instructions that, when executed by at least one of the processors 804, cause the server 203 to implement the docking method described above.
The network interface 820 may further include any suitable hardware and/or firmware to provide a multiple-input multiple-output radio interface. For example, network interface 820 may be a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem.
In one embodiment, at least one of the processors 804 may be packaged together with logic for one or more controllers of the system control logic 808 to form a System In Package (SiP). In one embodiment, at least one of the processors 804 may be integrated on the same die with logic for one or more controllers of the system control logic 808 to form a system on a chip (SoC).
The server 203 may further include input/output (I/O) devices 832. The I/O devices 832 may include a user interface to enable a user to interact with the server 203, and a peripheral component interface designed to enable peripheral components to interact with the server 203 as well. In some embodiments, the server 203 further comprises sensors for determining at least one of environmental conditions and location information associated with the server 203.
In some embodiments, the user interface may include, but is not limited to, a display (e.g., a liquid crystal display, a touch screen display, etc.), a speaker, a microphone, one or more cameras (e.g., still image cameras and/or video cameras), a flashlight (e.g., light emitting diode flash), and a keyboard.
In some embodiments, the peripheral component interface may include, but is not limited to, a non-volatile memory port, an audio jack, and a power interface.
In some embodiments, the sensors may include, but are not limited to, gyroscopic sensors, accelerometers, proximity sensors, ambient light sensors, and positioning units. The positioning unit may also be part of the network interface 820 or interact with the network interface 820 to communicate with components of a positioning network, such as Global Positioning System (GPS) satellites.
Embodiments disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the present application may be implemented as a computer program or program code that is executed on a programmable system including at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a digital signal processor (DSP), a microcontroller, an application-specific integrated circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope to any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including but not limited to floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or a tangible machine-readable memory used to transmit information over the Internet in an electrical, optical, acoustical, or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some embodiments, these features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the embodiments of the present application, each unit/module is a logical unit/module. Physically, one logical unit/module may be one physical unit/module, a part of one physical unit/module, or a combination of multiple physical units/modules; the physical implementation of the logical unit/module is not itself essential, and it is the combination of functions implemented by the logical units/modules that is key to solving the technical problem posed by the present application. Furthermore, to highlight the innovative part of the present application, the above-described device embodiments do not introduce units/modules that are less closely related to solving the technical problem presented by the present application, which does not mean that the above-described device embodiments contain no other units/modules.
It should be noted that in the examples and descriptions of this patent, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises an element.
While the application has been shown and described with reference to certain embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the application.

Claims (10)

1. A docking method applied to a server, the method comprising:
acquiring a configuration file, and determining a first task flow according to the acquired configuration file;
determining an atomic component corresponding to the task content of the first task flow from a preset atomic component library, and mounting the atomic component into the first task flow to obtain a second task flow and an access address of the second task flow;
packaging the second task flow as a target configuration file;
and acquiring a call request of the first client or the second client for the target configuration file, and completing the docking of the first client and the second client.
2. The docking method according to claim 1, wherein the obtaining the configuration file, determining the first task flow according to the obtained configuration file, includes:
acquiring a configuration file, wherein the configuration file is used for representing the docking flow of the first client and the second client;
and determining a first task flow according to the acquired configuration file.
3. The docking method according to claim 1, wherein determining an atomic component corresponding to the task content of the first task flow from a preset atomic component library, and mounting the atomic component to the first task flow to obtain a second task flow includes:
selecting one or more first atomic components from the preset atomic component library according to the task content of each task node in the first task flow;
and mounting one or more first atomic components into each task node in the first task flow to obtain a second task flow.
4. The docking method according to claim 3, wherein the mounting one or more of the first atomic components into each task node in the first task flow to obtain a second task flow comprises:
detecting a drag operation of a user on the instantiated first atomic component, and modifying, according to the drag operation, attribute parameters of the first atomic component corresponding to the drag operation;
and determining the second task flow based on the modified one or more first atomic components.
5. The docking method according to claim 4, wherein the determining the access address of the second task flow includes:
configuring an access interface of the second task flow based on the first atomic component within the second task flow.
6. The docking method according to claim 1, wherein the mounting the atomic component into the first task flow to obtain a second task flow further comprises:
configuring a first access address of the first client and/or a second access address of the second client as access addresses of related routing tasks in the first task flow to obtain the second task flow.
7. The docking method according to claim 1, wherein the packaging the second task flow as a target configuration file comprises:
packaging the second task flow into a target configuration file in JSON format.
8. The docking method according to claim 1, wherein the atomic component comprises at least: a data rule checking component, a data routing and transmitting component, and a data format conversion component.
9. An electronic device, comprising: one or more processors; one or more memories; the one or more memories store one or more instructions that, when executed by the one or more processors, cause the electronic device to perform the docking method of any of claims 1-8.
10. A computer readable storage medium having stored thereon instructions which, when executed on a computer, cause the computer to perform the docking method of any one of claims 1 to 8.
CN202311436165.9A 2023-10-31 2023-10-31 Docking method, electronic device, and storage medium Pending CN117407041A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311436165.9A CN117407041A (en) 2023-10-31 2023-10-31 Docking method, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311436165.9A CN117407041A (en) 2023-10-31 2023-10-31 Docking method, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN117407041A true CN117407041A (en) 2024-01-16

Family

ID=89497767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311436165.9A Pending CN117407041A (en) 2023-10-31 2023-10-31 Docking method, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN117407041A (en)

Similar Documents

Publication Publication Date Title
US11150893B2 (en) Collaborative software development tool for resolving potential code-change conflicts in real time
US11237822B2 (en) Intelligent discovery and application of API changes for application migration
US8271609B2 (en) Dynamic service invocation and service adaptation in BPEL SOA process
US20200329114A1 (en) Differentiated smart sidecars in a service mesh
CA2557111C (en) System and method for building mixed mode execution environment for component applications
CN100545851C (en) The remote system administration of utility command row environment
US8539514B2 (en) Workflow integration and portal systems and methods
US20120233589A1 (en) Software development kit for blended services
US11481243B1 (en) Service access across Kubernetes clusters
JP2006512694A (en) System and method for building and running a platform neutral generic service client application
CN104317591A (en) OSGi (open service gateway initiative)-based web interface frame system and web business processing method thereof
EP2257887A1 (en) Method and system for rules based workflow of media services
CN110658794A (en) Manufacturing execution system
CN114205342B (en) Service debugging routing method, electronic equipment and medium
US11023558B1 (en) Executing functions on-demand on a server utilizing web browsers
US20190018867A1 (en) Rule based data processing
US20230216895A1 (en) Network-based media processing (nbmp) workflow management through 5g framework for live uplink streaming (flus) control
US20080229274A1 (en) Automating Construction of a Data-Source Interface For Component Applications
CN110750243A (en) Project code development method and system
CN114296933A (en) Implementation method of lightweight container under terminal edge cloud architecture and data processing system
CN112787999A (en) Cross-chain calling method, device, system and computer readable storage medium
US20220103555A1 (en) Service deployment method, device, system, and computer-readable storage medium
CN114489622A (en) Js application, electronic device, and storage medium
US11030015B2 (en) Hardware and software resource optimization
CN117407041A (en) Docking method, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination