CN116185581A - Task processing method and device, nonvolatile storage medium and electronic equipment - Google Patents



Publication number
CN116185581A
CN116185581A (application CN202211686106.2A)
Authority
CN
China
Prior art keywords
subtask
request
execution result
execution
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211686106.2A
Other languages
Chinese (zh)
Inventor
夏建明
杨戉
刘毅
王斌
颜凤辉
盛振明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202211686106.2A priority Critical patent/CN116185581A/en
Publication of CN116185581A publication Critical patent/CN116185581A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a task processing method and device, a nonvolatile storage medium, and an electronic device. The method includes: acquiring a main task request through a preset interface, wherein the main task request carries at least a first subtask and a second subtask, and the second subtask needs the first execution result of the first subtask; sending a first subtask request to a first target interface according to the first subtask; receiving the first execution result returned by the first target server after executing the first subtask request; sending a second subtask request to a second target interface according to the first execution result and the second subtask; and taking the first execution result and the second execution result of the second subtask request executed by the second target server as the main task execution result of the main task request. The invention solves the technical problem of the poor scalability of existing API gateways.

Description

Task processing method and device, nonvolatile storage medium and electronic equipment
Technical Field
The present invention relates to the field of computers, and in particular, to a task processing method and apparatus, a nonvolatile storage medium, and an electronic device.
Background
An API gateway provides a mechanism that can flexibly call and assemble back-end native APIs, so that the front end can customize the aggregate APIs required by a page according to business needs.
The API gateway needs to interface with a wide variety of external service systems, including legacy ones. The technology used by legacy systems is often outdated and their interfaces are non-standard, so many problems may be encountered during integration. If the gateway had to modify its code every time such a system was connected, maintenance would become unmanageable; it would also mean that the gateway architecture is not scalable and cannot accommodate varied market demands.
No effective solution has yet been proposed for the problem of the poor scalability of existing API gateways.
Disclosure of Invention
The embodiment of the invention provides a task processing method and device, a nonvolatile storage medium, and an electronic device, which at least solve the technical problem of the poor scalability of conventional API gateways.
According to an aspect of the embodiment of the present invention, there is provided a task processing method, including: acquiring a main task request through a preset interface, wherein the main task request carries at least a first subtask and a second subtask, and the second subtask needs the first execution result of the first subtask; sending a first subtask request to a first target interface according to the first subtask, wherein the first target interface is connected with a first target server for processing the first subtask request; receiving the first execution result returned by the first target server after executing the first subtask request; sending a second subtask request to a second target interface according to the first execution result and the second subtask, wherein the second target interface is connected with a second target server for processing the second subtask request; and taking the first execution result and the second execution result of the second subtask request executed by the second target server as the main task execution result of the main task request.
Optionally, sending the first subtask request to the first target interface according to the first subtask includes: identifying a first execution requirement of the first subtask; generating the first subtask request according to the first execution requirement; and sending the first subtask request to the first target interface.
Optionally, generating the first subtask request according to the first execution requirement includes: querying a preset cache library for a first requirement parameter indicated by the first execution requirement; and generating the first subtask request according to the first requirement parameter.
Optionally, sending a second subtask request to a second target interface according to the first execution result and the second subtask includes: identifying a second execution requirement of the second subtask; generating the second subtask request according to the first execution result and the second execution requirement; and sending the second subtask request to the second target interface.
Optionally, generating the second subtask request according to the first execution result and the second execution requirement includes: querying a preset cache library for a second requirement parameter indicated by the second execution requirement; acquiring a third requirement parameter indicated by the second execution requirement from the first execution result; and generating the second subtask request according to the second requirement parameter and the third requirement parameter.
Optionally, taking the first execution result and the second execution result of the second subtask request executed by the second target server as the main task execution result of the main task request includes: acquiring a first timestamp indicated by the first execution result; acquiring a second timestamp indicated by the second execution result; and splicing the first execution result and the second execution result according to the first timestamp and the second timestamp to generate the main task execution result.
Optionally, the task processing method is applied to a multi-interface gateway.
According to another aspect of the embodiment of the present invention, there is also provided a task processing device, including: an acquisition module, configured to acquire a main task request through a preset interface, wherein the main task request carries at least a first subtask and a second subtask, and the second subtask needs the first execution result of the first subtask; a first sending module, configured to send a first subtask request to a first target interface according to the first subtask, wherein the first target interface is connected with a first target server for processing the first subtask request; a receiving module, configured to receive the first execution result returned by the first target server after executing the first subtask request; a second sending module, configured to send a second subtask request to a second target interface according to the first execution result and the second subtask, wherein the second target interface is connected with a second target server for processing the second subtask request; and a determining module, configured to take the first execution result and the second execution result of the second subtask request executed by the second target server as the main task execution result of the main task request.
According to another aspect of the embodiment of the present invention, there is further provided a nonvolatile storage medium storing a program, wherein when the program runs, the device in which the nonvolatile storage medium is located is controlled to execute the task processing method described above.
According to another aspect of the embodiment of the present invention, there is also provided an electronic device, including a memory and a processor, the processor being configured to run a program stored in the memory, wherein the program, when run, performs the task processing method described above.
In the embodiment of the invention, a main task request is acquired through a preset interface, wherein the main task request carries at least a first subtask and a second subtask, and the second subtask needs the first execution result of the first subtask; a first subtask request is sent to a first target interface according to the first subtask, wherein the first target interface is connected with a first target server for processing the first subtask request; the first execution result returned by the first target server after executing the first subtask request is received; a second subtask request is sent to a second target interface according to the first execution result and the second subtask, wherein the second target interface is connected with a second target server for processing the second subtask request; and the first execution result and the second execution result of the second subtask request executed by the second target server are taken as the main task execution result of the main task request. The main task between the preset interface and the target interfaces is thus processed through the API gateway and split into a combination of multiple subtasks, so that the technical effect of extending the API gateway interface is achieved by adjusting the subtasks between the preset interface and the target interfaces, which solves the technical problem of the poor scalability of conventional API gateways.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a flow chart of a task processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a storageStore API definition according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a storageLoad API definition in accordance with an embodiment of the invention;
FIG. 4 is a schematic diagram of an API HUB financial statement framework according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a financial short message according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a task processing device according to an embodiment of the present invention;
fig. 7 is a block diagram of a computer terminal according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, a task processing method embodiment is provided, it being noted that the steps shown in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order other than that shown or described herein.
Fig. 1 is a flowchart of a task processing method according to an embodiment of the present invention, as shown in fig. 1, the method includes the steps of:
step S102, a main task request is obtained through a preset interface, wherein the main task request at least carries a first subtask and a second subtask, and the second subtask needs to use a first execution result of the first subtask;
step S104, a first subtask request is sent to a first target interface according to the first subtask, wherein the first target interface is connected with a first target server for processing the first subtask request;
step S106, a first execution result returned by the first target server for executing the first subtask request is received;
step S108, a second subtask request is sent to a second target interface according to the first execution result and the second subtask, wherein the second target interface is connected with a second target server for processing the second subtask request;
step S110, the first execution result and the second execution result of the second sub-task request executed by the second target server are used as the main task execution result of the main task request.
In the embodiment of the invention, a main task request is acquired through a preset interface, wherein the main task request carries at least a first subtask and a second subtask, and the second subtask needs the first execution result of the first subtask; a first subtask request is sent to a first target interface according to the first subtask, wherein the first target interface is connected with a first target server for processing the first subtask request; the first execution result returned by the first target server after executing the first subtask request is received; a second subtask request is sent to a second target interface according to the first execution result and the second subtask, wherein the second target interface is connected with a second target server for processing the second subtask request; and the first execution result and the second execution result of the second subtask request executed by the second target server are taken as the main task execution result of the main task request. The main task between the preset interface and the target interfaces is thus processed through the API gateway and split into a combination of multiple subtasks, so that the technical effect of extending the API gateway interface is achieved by adjusting the subtasks between the preset interface and the target interfaces, which solves the technical problem of the poor scalability of conventional API gateways.
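The flow of steps S102 to S110 can be sketched as a minimal gateway routine. The function and parameter names below (handle_main_task, the call_interface stub, the dictionary keys) are illustrative assumptions; the patent does not prescribe any concrete API.

```python
# Minimal sketch of steps S102-S110: chain two subtask requests,
# feeding the first result into the second, then combine both results.
# All names here are illustrative, not taken from the patent.

def handle_main_task(main_request, call_interface):
    """call_interface(target, payload) models sending a subtask request
    to a target interface and receiving the target server's result."""
    first_sub = main_request["first_subtask"]
    second_sub = main_request["second_subtask"]

    # S104/S106: send the first subtask request, receive its result
    first_result = call_interface(first_sub["target"], first_sub["params"])

    # S108: the second request is built from the first execution result
    second_payload = dict(second_sub["params"], prior_result=first_result)
    second_result = call_interface(second_sub["target"], second_payload)

    # S110: both results together form the main task execution result
    return {"first": first_result, "second": second_result}

# Usage with a stub backend standing in for the target servers:
def fake_backend(target, payload):
    return {"target": target, "echo": payload}

result = handle_main_task(
    {"first_subtask": {"target": "api-1", "params": {"q": "x"}},
     "second_subtask": {"target": "api-2", "params": {}}},
    fake_backend,
)
```

The essential point is only the data dependency: the second request is not constructed until the first result is available.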
As an alternative embodiment, the task processing method described above is applied to a multi-interface gateway API HUB.
It should be noted that the API HUB defines an orchestration language suited to microservice architectures for writing call flows, and programmatically combines calls to various back-end native APIs to implement all kinds of customized API calls. No matter how complex an API call request is, it can be completed in a single API HUB call by writing an orchestration file, which simplifies calling complexity for users and avoids having to modify code to support different API requests.
As an alternative example, the API HUB gateway orchestrator supports multiple orchestration approaches:
1) A single HTTP API call.
2) Multiple HTTP APIs orchestrated sequentially according to a flow.
3) Multiple flows or HTTP APIs execute concurrently.
As an alternative example, the API HUB needs to configure single or multiple HTTP API combined flows or other orchestrated files when executing external API requests. A single HTTP API may implement a single external API call; for complex external API requests, especially calls of multiple APIs, it is necessary to configure flow files to implement orchestration calls between multiple APIs.
As an alternative example, the API HUB gateway defines a custom orchestration file format based on a JSON structure.
Optionally, an HTTP API file defines the information of a specific API, including the API's URL, the HTTP request method, the required parameters, and so on.
Optionally, a flow file is a combined orchestration of multiple HTTP APIs. Such orchestration simplifies calling complexity for users: multiple API calls can be made by calling a single flow. Flow files are the core of API HUB gateway orchestration.
Optionally, a schedule file runs multiple HTTP APIs or flows concurrently, and can be used to implement timed function triggers, stress and performance tests, and the like.
As an alternative example, the HTTP API main data structure is defined as follows:
(The HTTP API main data structure definition is presented as an image in the original publication.)
where From indicates where the parameter value is obtained, supporting raw strings, HTTP query parameters, header parameters, key-file reads, message-body reads, template generation, internally defined function calls, and the like. Content represents a parameter name, function name, or template name. Args represents the input parameters of func when From is func. Json represents an input value in JSON. Url indicates the URL for accessing the specific external API. Method represents the specific HTTP access method, such as GET or POST. In takes one of query, header, body, or vars; the values of the first three can be put into the outgoing message and accessed through templates, while vars is for internal use only. Name represents the parameter name. Value represents the parameter value, in standard value format. Cache indicates whether this API call supports the caching function; if so, the cached data may be returned directly upon the next request.
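Based on the field glossary above, one plausible shape of an HTTP API definition can be sketched as follows. Since the original structure is shown only as an image, the exact key names, nesting, and the example function name "now" are assumptions.

```python
import json

# Hypothetical HTTP API definition file reconstructed from the field
# glossary; the patent's actual image-based definition may differ.
http_api_def = {
    "url": "https://example.invalid/query",  # address of the external API
    "method": "POST",                        # specific HTTP access method
    "params": [
        {
            "in": "header",       # query / header / body / vars
            "name": "X-Token",
            "from": "string",     # raw string source
            "content": "abc123",  # parameter value
        },
        {
            "in": "query",
            "name": "ts",
            "from": "func",       # value produced by an internal function
            "content": "now",     # hypothetical function name
            "args": [],           # input parameters when from == "func"
        },
    ],
    "cache": True,  # cached data may be returned on the next request
}

serialized = json.dumps(http_api_def)  # the file would be stored as JSON
```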
As an alternative example, the Flow main data structure is defined as follows:
(The Flow main data structure definition is presented as an image in the original publication.)
Optionally, ResultKey: the index name under which an execution result is saved, so that intermediate results can be obtained in subsequent steps; Step: the steps that call APIs in series. Each stage of the orchestration flow is called a step, and in the flow structure the steps that make API calls are composed together.
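The serial steps and the resultKey mechanism can be sketched as a small executor. The step format, the "use_result" reference field, and the executor itself are illustrative assumptions, since the real flow structure is image-only here.

```python
# Illustrative flow executor: steps run in series, each step's output is
# saved under its resultKey, and later steps can read intermediate results.

def run_flow(steps, call_api):
    results = {}  # intermediate results keyed by resultKey
    for step in steps:
        # a step may reference an earlier result by name (assumed field)
        ref = step.get("use_result")
        prior = results.get(ref) if ref else None
        results[step["resultKey"]] = call_api(step["api"], prior)
    return results

# Usage with a stub API caller that records what it was given:
out = run_flow(
    [{"resultKey": "r1", "api": "first"},
     {"resultKey": "r2", "api": "second", "use_result": "r1"}],
    lambda api, prior: f"{api}(prior={prior})",
)
```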
As an alternative example, the Schedule main data structure is defined as follows:
(The Schedule main data structure definition is presented as an image in the original publication.)
Optionally, ResultKey: the name corresponding to an API or flow execution result; ConcurrentNum: the maximum number of parallel executions allowed; ConcurrentLoopNum: the maximum number of parallel executions allowed within a loop; Step: the schedule task list.
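The concurrency cap described for the schedule structure can be sketched with a thread pool; mapping ConcurrentNum onto a worker count is an assumption about intent, not the patent's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of a schedule: run a task list with at most concurrent_num
# tasks executing in parallel. Names are illustrative.

def run_schedule(tasks, concurrent_num):
    with ThreadPoolExecutor(max_workers=concurrent_num) as pool:
        # pool.map preserves the order of the task list in its results
        return list(pool.map(lambda task: task(), tasks))

# Usage: five small tasks, at most two running at once
results = run_schedule([lambda i=i: i * i for i in range(5)],
                       concurrent_num=2)
```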
In the step S106, the first execution result may be stored in the cache space, and in a subsequent use process, the first execution result may be directly called from the cache space.
In the above step S108, the second subtask may be acquired through the second interface.
As an optional example, a first subtask request is obtained through a first preset interface, wherein the first subtask request carries a first subtask; sending a first subtask request to a first target interface according to the first subtask, wherein the first target interface is connected with a first target server for processing the first subtask request; receiving a first execution result returned by the first target server for executing the first subtask request; acquiring a second subtask request through a first preset interface, wherein the second subtask request carries a second subtask; and sending a second subtask request to a second target interface according to the first execution result and the second subtask, wherein the second target interface is connected with a second target server for processing the second subtask request, and the first execution result and the second execution result of the second subtask request executed by the second target server are used as target task execution results of the second subtask request.
Optionally, the first subtask indicates obtaining first data from a first time to a second time, and the second subtask indicates obtaining second data from the first time to a third time, where the second time lies between the first time and the third time. After the first subtask is executed, the second subtask can therefore be adjusted, on the basis that the first data from the first time to the second time has already been obtained, to obtain third data from the second time to the third time; the first data and the third data are then spliced to obtain the second data from the first time to the third time indicated by the second subtask.
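This incremental-fetch idea can be made concrete with a small sketch; representing timed data as (timestamp, value) tuples is an assumption for illustration.

```python
# Sketch of the incremental fetch: data for times [t1, t2) is already
# available from the first subtask, so the adjusted second subtask only
# fetches [t2, t3), and the two ranges are spliced together.

def splice_ranges(first_data, fetch, t2, t3):
    """first_data covers [t1, t2); fetch(start, end) gets the rest."""
    third_data = fetch(t2, t3)       # only the missing interval
    return first_data + third_data   # spliced second-subtask result

# Usage: times 0..1 are cached, times 2..3 are fetched fresh
cached = [(0, "a"), (1, "b")]
fetched = splice_ranges(cached,
                        lambda s, e: [(t, "x") for t in range(s, e)],
                        2, 4)
```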
It should be noted that many system API requests need to pass certain fixed parameters in the URL, such as timestamps or MD5 signatures, or to cache API operation results. Some parameters require real-time calculation, such as timestamps; some require persistence for the next use, such as the end time of the previous query or the start time of the next one.
As an alternative example, the API HUB defines internal APIs as the underlying methods of orchestration. Each API implements a specific function, such as: caching the current result, reading the last cached result, creating a reply message, acquiring the current time, or checking a returned field. With these internal functions, the API HUB can implement function calls by composing JSON-formatted files. Code therefore need not be modified, and the difficulty of interfacing with legacy systems is solved non-invasively.
An alternative example illustrates the implementation of internal functions using the MD5 function. A call entry is defined in the configuration JSON format:
(The MD5 call-entry configuration in JSON format is presented as an image in the original publication.)
where "header" indicates that this parameter should be placed in the HTTP request header; "func" represents a call to an internal function; "md5" indicates that the function is MD5; and "apikey X-CurTime X-Param" indicates that the MD5 function receives three parameters.
Optionally, the internal function is defined as:
(The internal function definition is presented as an image in the original publication.)
where params represents the parameter array; the calculated MD5 check code is returned and filled into the header of the HTTP request.
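An internal MD5 function of the kind described, receiving a parameter array and returning a check code destined for the request header, might look like the following. How the real API HUB joins the parameters before hashing is not shown in the text, so simple concatenation is an assumption.

```python
import hashlib

# Sketch of the internal md5 function: join the parameter array and
# return the hex check code, which the gateway would place into the
# HTTP request header.

def md5_func(params):
    joined = "".join(str(p) for p in params)  # assumed join order
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

# Usage, mirroring the three-parameter call entry from the example:
checksum = md5_func(["apikey", "X-CurTime", "X-Param"])
headers = {"X-CheckSum": checksum}  # hypothetical header name
```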
It should be noted that persisting the last result for use in the next request is a problem frequently encountered with legacy systems. Taking these two functions as examples, how the API HUB implements them through configuration files is described below.
As an alternative example, the API HUB provides two APIs that implement storing and reading information. These two APIs can store and read various intermediate values through a configuration file; the description below takes storing a UTC timestamp and reading the last stored UTC timestamp as an example:
FIG. 2 is a schematic diagram of the storageStore API definition according to an embodiment of the present invention. As shown in FIG. 2, storageStore indicates storing information in a storage medium such as memory, a database, or a cache space, where "command" represents the name of this API; "args" represents the parameters required to call this API; "key" is the index under which the result is stored, and later input parameters whose origin is key obtain the value by this name; "index" is the key information of the stored content (the UTC timestamp); "source" indicates the medium in which the information is stored, where "local" refers to memory, and it can also be configured to MongoDB and the like; "content" is the stored content, here UTC, whose acquisition method "func" represents the system time obtained by calling a func function.
FIG. 3 is a schematic diagram of the storageLoad API definition according to an embodiment of the present invention. As shown in FIG. 3, storageLoad indicates reading the cached result (the UTC timestamp) from the storage medium; its definition mirrors that of storageStore.
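The storageStore/storageLoad pair can be sketched as a simple keyed store. The in-memory "local" backend below stands in for the configurable media (memory, MongoDB, and so on); the function signatures are assumptions.

```python
import time

# Sketch of the two persistence APIs: storage_store saves a value (here
# a UTC timestamp) under an index, and storage_load reads it back on
# the next request. "local" is modeled as a plain dict.

_local_store = {}

def storage_store(key, content, source="local"):
    if source != "local":
        raise NotImplementedError("other media (e.g. MongoDB) omitted")
    _local_store[key] = content

def storage_load(key, source="local"):
    if source != "local":
        raise NotImplementedError("other media (e.g. MongoDB) omitted")
    return _local_store.get(key)

# Usage mirroring the UTC-timestamp example from FIGs. 2 and 3:
storage_store("utc", int(time.time()))
last_utc = storage_load("utc")
```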
As an alternative embodiment, sending the first subtask request to the first target interface in accordance with the first subtask includes: identifying a first execution requirement of a first subtask; generating a first subtask request according to a first execution requirement; a first subtask request is sent to a first target interface.
Optionally, the first execution requirement indicates data content that needs to be used in executing the first sub-task, e.g., a server identification of a first target server executing the first sub-task, an execution timestamp of the first sub-task, etc.
In the above embodiment of the present invention, in the process of generating the first subtask request according to the first subtask, parameters required by the first subtask request may be obtained according to the first execution requirement of the first subtask, so as to generate the first subtask request according to the obtained parameters, thereby realizing the generation of the first subtask request.
As an alternative embodiment, generating the first subtask request according to the first execution requirement includes: inquiring a first demand parameter indicated by the first execution demand in a preset cache; and generating a first subtask request according to the first demand parameter.
In the above embodiment of the present invention, in generating the first subtask request according to the first subtask, the first requirement parameter indicated by the first execution requirement of the first subtask may be obtained in the cache space of the multi-interface gateway API HUB, and then the first subtask request is generated based on the first requirement parameter, so as to implement the generation of the first subtask request.
As an alternative embodiment, sending the second subtask request to the second target interface according to the first execution result and the second subtask includes: identifying a second execution requirement of a second subtask; generating a second subtask request according to the first execution result and the second execution requirement; and sending a second subtask request to a second target interface.
In the above embodiment of the present invention, in the process of generating the second subtask request according to the second subtask, parameters required by the second subtask request may be obtained according to the second execution requirement of the second subtask, so that the second subtask request is generated according to the obtained parameters and the first execution result, thereby realizing the generation of the second subtask request.
As an alternative embodiment, generating the second subtask request according to the first execution result and the second execution requirement includes: inquiring a second demand parameter indicated by a second execution demand in a preset cache; acquiring a third requirement parameter indicated by the second execution requirement from the first execution result; and generating a second subtask request according to the second demand parameter and the third demand parameter.
In the above embodiment of the present invention, in generating the second subtask request according to the second subtask, the second requirement parameter indicated by the second execution requirement of the second subtask may be obtained in the cache space of the multi-interface gateway API HUB, and the third requirement parameter indicated by the second execution requirement of the second subtask may be obtained in the first execution result, and the second subtask request may be generated based on the second requirement parameter and the third requirement parameter, so as to implement the generation of the second subtask request.
As an alternative embodiment, taking the first execution result and the second execution result of the second subtask request executed by the second target server as the main task execution result of the main task request includes: acquiring a first timestamp indicated by the first execution result; acquiring a second timestamp indicated by the second execution result; and splicing the first execution result and the second execution result according to the first timestamp and the second timestamp to generate the main task execution result.
Optionally, the first timestamp may represent time information of the first execution result, the second timestamp represents time information of the second execution result, and the first execution result and the second execution result are spliced according to the first timestamp and the second timestamp, so that the first execution result and the second execution result can be spliced according to a time dimension.
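Splicing the two execution results along the time dimension can be sketched as follows; the result format with "ts" and "data" fields is an assumption for illustration.

```python
# Sketch of timestamp-based splicing: order the two partial results by
# their timestamps and concatenate their data into the main task result.

def splice_by_timestamp(first_result, second_result):
    parts = sorted([first_result, second_result], key=lambda r: r["ts"])
    return {"data": parts[0]["data"] + parts[1]["data"]}

# Usage: results arrive out of time order but are spliced chronologically
main_result = splice_by_timestamp(
    {"ts": 200, "data": ["later"]},
    {"ts": 100, "data": ["earlier"]},
)
```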
As an alternative embodiment, after obtaining the primary task request through the preset interface, the method further includes: identifying a first subtask and a second subtask carried in a main task request; a first target server for performing the first subtask is determined, and a second target server for performing the second subtask is determined.
According to the embodiment of the invention, the main task request can be split into a plurality of sub-tasks to be executed, each sub-task can be executed by the same target server or different target servers, and the multifunctional expansion and combined calling of the main task can be realized by adjusting the combination of different sub-tasks and the execution sequence of different sub-tasks.
Alternatively, the primary task request may be performed by scheduling a plurality of different target servers, where the primary task may be split into a first subtask performed by a first target server and a second subtask performed by a second target server.
For example, the first target server is a data query server, and the first subtask instructs the data query server to query target data from a target database; the second target server is a short message (SMS) server, and the second subtask instructs the SMS server to send the queried target data.
Alternatively, the primary task request may be performed by scheduling the same target server multiple times, where the primary task may be split into a first subtask and a second subtask that are both performed by the first target server.
For example, the first target server is a data query server, the first subtask instructs the data query server to query the target database for first target data, and the second subtask instructs the data query server to query the target database for second target data.
The present invention also provides a preferred embodiment that provides an API gateway product: an API HUB gateway orchestrator.
The design of the invention realizes an API gateway product, the API HUB gateway orchestrator. The API HUB gateway took the difficulty of calling the external services of legacy systems into account at the beginning of its design, and creatively adopts a low-code, non-invasive scheme to solve it. Specifically, a set of configurable formatting files is designed to configure all external service requests, so that external requests can be invoked without modifying any code logic. This approach has achieved good results in practice.
Optionally, the API gateway solves the various adaptation and access problems arising from interfacing with legacy systems in a low-code, non-invasive manner by formulating a set of rules capable of orchestrating the JSON format and by writing configuration files.
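A configurable formatting file of this kind might look as follows. The placeholder syntax (`${utc:now}`, `${storageLoad:...}`), the URL, and all field names are invented to illustrate the low-code idea; the patent does not specify the file format.

```python
import json

# Hypothetical orchestration entry for one external service request.
orchestration_config = {
    "name": "financial_incremental_query",
    "method": "POST",
    "url": "https://finance.example.com/bills",        # placeholder URL
    "params": {
        "start_time": "${storageLoad:last_end_time}",  # read back from the cache medium
        "end_time": "${utc:now}",                      # computed when the call is made
    },
}

# The gateway would parse such a JSON file and issue the external request
# without any change to application code logic.
config_text = json.dumps(orchestration_config, indent=2)
```

Adding or modifying an external call then amounts to editing this JSON file rather than redeploying code, which is the non-invasive property claimed above.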
As an alternative example, a financial reimbursement message reminding scenario is provided. Taking an API HUB docked with financial reimbursement message reminding as an example: the current financial reimbursement system supports a mail reminding function but has no short message (SMS) reminding function, and in this example it is hoped to realize SMS reminding of bills by calling the API HUB.
Optionally, the main requirements of the financial message reminding are as follows:
1) Acquiring fixed-type account bills in the financial reimbursement system by calling the API HUB, where the http request of the financial server requires the query start time and the query end time as parameters;
2) Acquiring the account bills includes acquiring full-amount information (all account bill information) and incremental information (account bill information newly generated after the last time point, etc.);
3) Sending a reminding and receipt short message to the applicant of each bill according to the information, such as the mobile phone number and bill number, on the acquired bill;
4) The full-amount and incremental acquisitions are two separate API HUB requests, and each incremental call needs to use the previous end time as its start time and the current time as its end time. The key difficulty of the problem is how to implement the writing and reading of intermediate information such as timestamps without modifying any code.
FIG. 4 is a schematic diagram of an API HUB financial statement architecture. As shown in FIG. 4, the architecture comprises: an API HUB gateway cluster, a user list, a financial server, a short message (SMS) server, and a list of mobile phone terminals. The API HUB cluster comprises several components: API HUB cluster services 1-N and a cache medium (database).
Optionally, as shown in fig. 4, two users, A and B, call different financial query interfaces of the API HUB one after the other: first user A invokes the full-volume query, and then user B invokes the incremental query.
Alternatively, the full-volume query may obtain a first execution result and the incremental query may obtain a second execution result.
As an alternative example, the full-volume query service flow includes:
step S1, a user A initiates an http request of a full-volume query to an API HUB cluster service;
step S2, the API HUB calculates the current time through an internal utc function to serve as the end time of the query;
step S3, the API HUB calls a storageStore interface to write time into a database;
step S4, the API HUB initiates a real http request to the financial server;
step S5, the financial server returns the full query result to the API HUB cluster;
step S6, according to the mobile phone numbers and bill numbers in the returned result list, the API HUB calls the short message server with an http request, and the short message server sends the short messages to the mobile phone terminals.
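The full-volume flow above can be sketched end to end as follows. The financial server and SMS server are replaced by stub functions; the interface names `storageStore` and `utc` follow the description above, while everything else (keys, payloads, stub returns) is invented for illustration.

```python
from datetime import datetime, timezone

storage = {}  # stands in for the API HUB cache medium (database)

def storage_store(key, value):         # storageStore interface (step S3)
    storage[key] = value

def query_financial_server(end_time):  # stub for the real http request (steps S4-S5)
    return [{"phone": "13800000000", "bill_no": "BXX-JXXXX210010001"}]

def send_sms(phone, bill_no):          # stub for the SMS server call (step S6)
    return f"reminder sent to {phone} for {bill_no}"

def full_query():
    end_time = datetime.now(timezone.utc).isoformat()  # step S2: internal utc function
    storage_store("last_end_time", end_time)           # step S3: persist for later incremental calls
    bills = query_financial_server(end_time)           # steps S4-S5
    return [send_sms(b["phone"], b["bill_no"]) for b in bills]  # step S6

receipts = full_query()
```

The important side effect is that `last_end_time` is written to the cache before the financial request is made, so a later incremental call can pick it up.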
As an alternative example, the incremental query service flow differs from the full-volume query in the following respects: in step S3, the storageLoad interface is called to read from the database the utc timestamp stored by the full-volume query, which is used as the start time of the incremental query; and a step S3' is added between steps S3 and S4, in which storageStore is called to store the current latest utc time in the database for other calls to fetch.
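The incremental difference amounts to a read-then-write handoff of the timestamp, sketched below; the `storage_load`/`storage_store` names mirror the storageLoad/storageStore interfaces mentioned above, and the seeded timestamp is a made-up example value.

```python
from datetime import datetime, timezone

# Cache medium, seeded with the timestamp left by an earlier full-volume query.
storage = {"last_end_time": "2022-12-27T09:00:00+00:00"}

def storage_load(key):
    return storage[key]

def storage_store(key, value):
    storage[key] = value

def incremental_window():
    start_time = storage_load("last_end_time")         # modified step S3: previous end time
    end_time = datetime.now(timezone.utc).isoformat()  # step S3': current utc time
    storage_store("last_end_time", end_time)           # saved for the next incremental call
    return start_time, end_time

start, end = incremental_window()
```

Each incremental call thus consumes the window left by its predecessor and leaves a fresh window behind, with no application code changes needed.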
As an optional example, according to the applicant's mobile phone number and the bill number in the queried financial information list, the API HUB gateway sends the reminding information to the applicant's mobile phone by calling the short message server.
Fig. 5 is a schematic diagram of a financial short message according to an embodiment of the present invention. As shown in fig. 5, "BXX-JXXXX210010001" is the reimbursement bill number.
According to the technical scheme provided by the invention, the API HUB gateway, by adopting a low-code, non-invasive orchestration-file approach, well solves the difficulty of docking with legacy systems, which not only reduces the workload of developers but also makes the product more convenient for users.
According to the embodiment of the present invention, there is also provided an embodiment of a task processing device, and it should be noted that the task processing device may be used to execute the task processing method in the embodiment of the present invention, and the task processing method in the embodiment of the present invention may be executed in the task processing device.
FIG. 6 is a schematic diagram of a task processing device according to an embodiment of the present invention. As shown in FIG. 6, the device may include: the obtaining module 61, configured to obtain a main task request through a preset interface, where the main task request at least carries a first subtask and a second subtask, and the second subtask needs to use a first execution result of the first subtask; a first sending module 63, configured to send a first subtask request to a first target interface according to the first subtask, where the first target interface is connected to a first target server for processing the first subtask request; a receiving module 65, configured to receive the first execution result returned by the first target server executing the first subtask request; a second sending module 67, configured to send a second subtask request to a second target interface according to the first execution result and the second subtask, where the second target interface is connected to a second target server for processing the second subtask request; and a determining module 69, configured to take the first execution result and the second execution result of the second subtask request executed by the second target server as the main task execution result of the main task request.
It should be noted that, the acquiring module 61 in this embodiment may be used to perform step S102 in the embodiment of the present application, the first transmitting module 63 in this embodiment may be used to perform step S104 in the embodiment of the present application, the receiving module 65 in this embodiment may be used to perform step S106 in the embodiment of the present application, the second transmitting module 67 in this embodiment may be used to perform step S108 in the embodiment of the present application, and the determining module 69 in this embodiment may be used to perform step S110 in the embodiment of the present application. The above modules are the same as examples and application scenarios implemented by the corresponding steps, but are not limited to what is disclosed in the above embodiments.
In the embodiment of the invention, a main task request is acquired through a preset interface, where the main task request at least carries a first subtask and a second subtask, and the second subtask needs to use a first execution result of the first subtask; a first subtask request is sent to a first target interface according to the first subtask, where the first target interface is connected with a first target server for processing the first subtask request; the first execution result returned by the first target server executing the first subtask request is received; a second subtask request is sent to a second target interface according to the first execution result and the second subtask, where the second target interface is connected with a second target server for processing the second subtask request; and the first execution result and the second execution result of the second subtask request executed by the second target server are taken as the main task execution result of the main task request. In this way, the main task between the preset interface and the target interfaces is processed through the API gateway and split into a combination of multiple subtasks, the technical effect of extending the API gateway interface is achieved by adjusting the subtasks between the preset interface and the target interfaces, and the technical problem of poor extensibility of conventional API gateways is solved.
As an alternative embodiment, the first transmitting module includes: the first identification unit is used for identifying a first execution requirement of the first subtask; the first generation unit is used for generating a first subtask request according to a first execution requirement; and the first sending unit is used for sending the first subtask request to the first target interface.
As an alternative embodiment, the first generating unit comprises: the first query subunit, configured to query a preset cache for a first requirement parameter indicated by the first execution requirement; and the first generation subunit, configured to generate the first subtask request according to the first requirement parameter.
As an alternative embodiment, the second transmitting module includes: the second identification unit, used for identifying a second execution requirement of the second subtask; the second generating unit, used for generating a second subtask request according to the first execution result and the second execution requirement; and the second sending unit, used for sending the second subtask request to the second target interface.
As an alternative embodiment, the second generating unit comprises: the second query subunit, configured to query a preset cache for a second requirement parameter indicated by the second execution requirement; the acquisition subunit, used for acquiring a third requirement parameter indicated by the second execution requirement from the first execution result; and the second generation subunit, used for generating the second subtask request according to the second requirement parameter and the third requirement parameter.
As an alternative embodiment, the determining module includes: the first acquisition sub-module is used for acquiring a first timestamp indicated by the first execution result; the second acquisition sub-module is used for acquiring a second timestamp indicated by a second execution result; the generation sub-module is used for splicing the first execution result and the second execution result according to the first timestamp and the second timestamp to generate a main task execution result.
As an alternative embodiment, the apparatus further comprises: the identification sub-module is used for identifying a first subtask and a second subtask carried in the main task request after the main task request is acquired through a preset interface; the determining sub-module is used for determining a first target server for executing a first sub-task and a second target server for executing a second sub-task.
As an alternative embodiment, the task processing device is a multi-interface gateway.
Embodiments of the present invention may provide a computer terminal, which may be any one of a group of computer terminals. Alternatively, in the present embodiment, the above-described computer terminal may be replaced with a terminal device such as a mobile terminal.
Alternatively, in this embodiment, the above-mentioned computer terminal may be located in at least one network device among a plurality of network devices of the computer network.
In this embodiment, the above-mentioned computer terminal may execute the program code of the following steps in the task processing method: acquiring a main task request through a preset interface, wherein the main task request at least carries a first subtask and a second subtask, and the second subtask needs to use a first execution result of the first subtask; sending a first subtask request to a first target interface according to the first subtask, wherein the first target interface is connected with a first target server for processing the first subtask request; receiving a first execution result returned by the first target server for executing the first subtask request; sending a second subtask request to a second target interface according to the first execution result and the second subtask, wherein the second target interface is connected with a second target server for processing the second subtask request; and taking the first execution result and the second execution result of the second sub-task request executed by the second target server as the main task execution result of the main task request.
Alternatively, fig. 7 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 7, the computer terminal 70 may include: one or more (only one is shown) processors 72, and memory 74.
The memory may be used to store software programs and modules, such as program instructions/modules corresponding to the task processing methods and apparatuses in the embodiments of the present invention, and the processor executes the software programs and modules stored in the memory, thereby executing various functional applications and data processing, that is, implementing the task processing methods described above. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located relative to the processor, which may be connected to the terminal 70 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: acquiring a main task request through a preset interface, wherein the main task request at least carries a first subtask and a second subtask, and the second subtask needs to use a first execution result of the first subtask; sending a first subtask request to a first target interface according to the first subtask, wherein the first target interface is connected with a first target server for processing the first subtask request; receiving a first execution result returned by the first target server for executing the first subtask request; sending a second subtask request to a second target interface according to the first execution result and the second subtask, wherein the second target interface is connected with a second target server for processing the second subtask request; and taking the first execution result and the second execution result of the second sub-task request executed by the second target server as the main task execution result of the main task request.
Optionally, the above processor may further execute program code for: identifying a first execution requirement of a first subtask; generating a first subtask request according to a first execution requirement; a first subtask request is sent to a first target interface.
Optionally, the above processor may further execute program code for: querying a preset cache for a first requirement parameter indicated by the first execution requirement; and generating the first subtask request according to the first requirement parameter.
Optionally, the above processor may further execute program code for: identifying a second execution requirement of a second subtask; generating a second subtask request according to the first execution result and the second execution requirement; and sending a second subtask request to a second target interface.
Optionally, the above processor may further execute program code for: querying a preset cache for a second requirement parameter indicated by the second execution requirement; acquiring a third requirement parameter indicated by the second execution requirement from the first execution result; and generating the second subtask request according to the second requirement parameter and the third requirement parameter.
Optionally, the above processor may further execute program code for: acquiring a first timestamp indicated by a first execution result; acquiring a second timestamp indicated by a second execution result; and splicing the first execution result and the second execution result according to the first timestamp and the second timestamp to generate a main task execution result.
Optionally, the above processor may further execute program code for: after a main task request is acquired through a preset interface, identifying a first subtask and a second subtask carried in the main task request; a first target server for performing the first subtask is determined, and a second target server for performing the second subtask is determined.
By adopting the embodiment of the invention, a task processing scheme is provided: a main task request is acquired through a preset interface, where the main task request at least carries a first subtask and a second subtask, and the second subtask needs to use a first execution result of the first subtask; a first subtask request is sent to a first target interface according to the first subtask, where the first target interface is connected with a first target server for processing the first subtask request; the first execution result returned by the first target server executing the first subtask request is received; a second subtask request is sent to a second target interface according to the first execution result and the second subtask, where the second target interface is connected with a second target server for processing the second subtask request; and the first execution result and the second execution result of the second subtask request executed by the second target server are taken as the main task execution result of the main task request. In this way, the main task between the preset interface and the target interfaces is processed through the API gateway and split into a combination of multiple subtasks, the technical effect of extending the API gateway interface is achieved by adjusting the subtasks between the preset interface and the target interfaces, and the technical problem of poor extensibility of conventional API gateways is solved.
It will be appreciated by those skilled in the art that the configuration shown in fig. 7 is only illustrative, and the computer terminal may be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc. Fig. 7 does not limit the structure of the electronic device. For example, the computer terminal 70 may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 7, or have a different configuration from that shown in fig. 7.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program for instructing a terminal device to execute in association with hardware, the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
Embodiments of the present invention also provide a nonvolatile storage medium. Alternatively, in the present embodiment, the above-described storage medium may be used to store the program code executed by the task processing method provided in the above-described embodiment.
Alternatively, in this embodiment, the above-mentioned nonvolatile storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Optionally, in the present embodiment, the non-volatile storage medium is arranged to store program code for performing the steps of: acquiring a main task request through a preset interface, wherein the main task request at least carries a first subtask and a second subtask, and the second subtask needs to use a first execution result of the first subtask; sending a first subtask request to a first target interface according to the first subtask, wherein the first target interface is connected with a first target server for processing the first subtask request; receiving a first execution result returned by the first target server for executing the first subtask request; sending a second subtask request to a second target interface according to the first execution result and the second subtask, wherein the second target interface is connected with a second target server for processing the second subtask request; and taking the first execution result and the second execution result of the second sub-task request executed by the second target server as the main task execution result of the main task request.
Optionally, in the present embodiment, the non-volatile storage medium is arranged to store program code for performing the steps of: identifying a first execution requirement of a first subtask; generating a first subtask request according to a first execution requirement; a first subtask request is sent to a first target interface.
Optionally, in the present embodiment, the non-volatile storage medium is arranged to store program code for performing the steps of: querying a preset cache for a first requirement parameter indicated by the first execution requirement; and generating the first subtask request according to the first requirement parameter.
Optionally, in the present embodiment, the non-volatile storage medium is arranged to store program code for performing the steps of: identifying a second execution requirement of a second subtask; generating a second subtask request according to the first execution result and the second execution requirement; and sending a second subtask request to a second target interface.
Optionally, in the present embodiment, the non-volatile storage medium is arranged to store program code for performing the steps of: querying a preset cache for a second requirement parameter indicated by the second execution requirement; acquiring a third requirement parameter indicated by the second execution requirement from the first execution result; and generating the second subtask request according to the second requirement parameter and the third requirement parameter.
Optionally, in the present embodiment, the non-volatile storage medium is arranged to store program code for performing the steps of: acquiring a first timestamp indicated by a first execution result; acquiring a second timestamp indicated by a second execution result; and splicing the first execution result and the second execution result according to the first timestamp and the second timestamp to generate a main task execution result.
Optionally, in the present embodiment, the non-volatile storage medium is arranged to store program code for performing the steps of: after a main task request is acquired through a preset interface, identifying a first subtask and a second subtask carried in the main task request; a first target server for performing the first subtask is determined, and a second target server for performing the second subtask is determined.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a preferred embodiment of the present invention and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present invention, which are intended to be comprehended within the scope of the present invention.

Claims (10)

1. A method of task processing, comprising:
acquiring a main task request through a preset interface, wherein the main task request at least carries a first subtask and a second subtask, and the second subtask needs to use a first execution result of the first subtask;
sending a first subtask request to a first target interface according to the first subtask, wherein the first target interface is connected with a first target server for processing the first subtask request;
receiving the first execution result returned by the first target server for executing the first subtask request;
sending a second subtask request to a second target interface according to the first execution result and the second subtask, wherein the second target interface is connected with a second target server for processing the second subtask request;
and taking the first execution result and the second execution result of the second sub-task request executed by the second target server as main task execution results of the main task request.
2. The method of claim 1, wherein sending a first subtask request to a first target interface in accordance with the first subtask comprises:
identifying a first execution requirement of the first subtask;
generating the first subtask request according to the first execution requirement;
and sending the first subtask request to the first target interface.
3. The method of claim 2, wherein generating the first subtask request in accordance with the first execution requirement comprises:
querying a preset cache library for a first requirement parameter indicated by the first execution requirement;
and generating the first subtask request according to the first demand parameter.
4. The method of claim 1, wherein sending a second subtask request to a second target interface in accordance with the first execution result and the second subtask comprises:
identifying a second execution requirement of the second subtask;
generating the second subtask request according to the first execution result and the second execution requirement;
and sending the second subtask request to the second target interface.
5. The method of claim 4, wherein generating the second subtask request based on the first execution result and the second execution requirement comprises:
querying a preset cache library for a second requirement parameter indicated by the second execution requirement;
acquiring a third requirement parameter indicated by the second execution requirement from the first execution result;
and generating the second subtask request according to the second demand parameter and the third demand parameter.
6. The method of claim 1, wherein the taking the first execution result and the second execution result of the second sub-task request by the second target server as the main task execution result of the main task request comprises:
acquiring a first timestamp indicated by the first execution result;
acquiring a second timestamp indicated by the second execution result;
and splicing the first execution result and the second execution result according to the first timestamp and the second timestamp to generate the main task execution result.
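Claim 6 splices the two partial results into the main task execution result, ordered by their timestamps. A minimal sketch, assuming each result carries `timestamp` and `payload` fields (both names are illustrative):

```python
def splice_results(first: dict, second: dict) -> dict:
    """Order the two execution results by timestamp and concatenate their payloads."""
    ordered = sorted([first, second], key=lambda r: r["timestamp"])
    return {"payloads": [r["payload"] for r in ordered]}

main_result = splice_results(
    {"timestamp": 1703650000, "payload": "profile"},
    {"timestamp": 1703649000, "payload": "score"},
)
# main_result -> {"payloads": ["score", "profile"]}
```

Sorting by timestamp rather than by arrival order means the spliced result reflects when each server actually produced its output, even if the responses reach the gateway out of order.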
7. The method according to any one of claims 1 to 6, wherein the task processing method is applied to a multi-interface gateway.
8. A task processing device, comprising:
the acquisition module is used for acquiring a main task request through a preset interface, wherein the main task request at least carries a first subtask and a second subtask, and the second subtask needs to use a first execution result of the first subtask;
the first sending module is used for sending a first subtask request to a first target interface according to the first subtask, wherein the first target interface is connected with a first target server for processing the first subtask request;
the receiving module is used for receiving the first execution result returned by the first target server for executing the first subtask request;
the second sending module is used for sending a second subtask request to a second target interface according to the first execution result and the second subtask, wherein the second target interface is connected with a second target server for processing the second subtask request;
and the determining module is used for taking the first execution result and the second execution result of the second subtask request executed by the second target server as the main task execution result of the main task request.
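The modules of claim 8 chain together into the flow of claim 1: acquire the main task, dispatch the first subtask, feed its result into the second subtask request, and combine both results. A hypothetical end-to-end sketch with the two target servers modeled as callables; all names here are illustrative, not from the patent:

```python
from typing import Callable

class TaskGateway:
    def __init__(self, first_server: Callable, second_server: Callable):
        self.first_server = first_server    # behind the first target interface
        self.second_server = second_server  # behind the second target interface

    def handle(self, main_task: dict) -> dict:
        # first sending module: dispatch the first subtask request
        first_result = self.first_server({"task": main_task["first_subtask"]})
        # second sending module: the second request uses the first result
        second_result = self.second_server(
            {"task": main_task["second_subtask"], "input": first_result}
        )
        # determining module: both results form the main task execution result
        return {"first": first_result, "second": second_result}

gw = TaskGateway(
    first_server=lambda req: {"user_id": 42},
    second_server=lambda req: {"score": req["input"]["user_id"] * 2},
)
out = gw.handle({"first_subtask": "lookup", "second_subtask": "score"})
# out -> {"first": {"user_id": 42}, "second": {"score": 84}}
```

The key design point in the claims is that the gateway, not the client, resolves the data dependency between the two subtasks, so a client issues one main task request instead of two sequential calls.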
9. A non-volatile storage medium, wherein a program is stored in the non-volatile storage medium, and wherein the program, when executed, controls a device in which the non-volatile storage medium is located to perform the task processing method according to any one of claims 1 to 7.
10. An electronic device, comprising: a memory and a processor, wherein the processor is used for running a program stored in the memory, and the program, when run, performs the task processing method according to any one of claims 1 to 7.
CN202211686106.2A 2022-12-27 2022-12-27 Task processing method and device, nonvolatile storage medium and electronic equipment Pending CN116185581A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211686106.2A CN116185581A (en) 2022-12-27 2022-12-27 Task processing method and device, nonvolatile storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN116185581A true CN116185581A (en) 2023-05-30

Family

ID=86433572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211686106.2A Pending CN116185581A (en) 2022-12-27 2022-12-27 Task processing method and device, nonvolatile storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116185581A (en)

Similar Documents

Publication Publication Date Title
CN109582303B (en) General component calling method, device, computer equipment and storage medium
US10282191B2 (en) Updating web resources
US9020949B2 (en) Method and system for centralized issue tracking
CN110838071B (en) Policy data processing method, device and server
CN110716783A (en) Front-end page generation and deployment method and device, storage medium and equipment
CN1650596B (en) A communication system, mobile device therefor and methods of storing pages on a mobile device
CN109871354B (en) File processing method and device
CN114862381A (en) Transfer method and device based on information sharing, electronic equipment and storage medium
CN104954363A (en) Method and device for generating interface document
CN112732547B (en) Service testing method and device, storage medium and electronic equipment
WO2019043462A1 (en) Systems and methods for creating automated interface transmission between heterogeneous systems in an enterprise ecosystem
US8280950B2 (en) Automatic client-server code generator
CN113645260A (en) Service retry method, device, storage medium and electronic equipment
CN110865973B (en) Data processing method and equipment and related device
CN116185581A (en) Task processing method and device, nonvolatile storage medium and electronic equipment
CN114301970B (en) Service calling method, device, electronic equipment and storage medium
CN113254819B (en) Page rendering method, system, equipment and storage medium
CN113779122B (en) Method and device for exporting data
CN113626392A (en) Method and device for updating document data, electronic equipment and storage medium
US20220129332A1 (en) Handling of Metadata for Microservices Processing
CN112688980B (en) Resource distribution method and device, and computer equipment
CN111506644B (en) Application data processing method and device and electronic equipment
CN107045549B (en) Method and device for acquiring page number of electronic book
CN112784195A (en) Page data publishing method and system
EP1720285B1 (en) Apparatus and method for processing messages in a network management system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination