CN117692518A - Service scheduling method, device, equipment and storage medium - Google Patents

Service scheduling method, device, equipment and storage medium

Info

Publication number
CN117692518A
CN117692518A (application CN202311796310.4A)
Authority
CN
China
Prior art keywords
service
result
external
processing
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311796310.4A
Other languages
Chinese (zh)
Inventor
黎升杰
高孟阳
郭政
梁志颖
黄佳欣
陈伟潮
邓灿奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Quyan Network Technology Co ltd
Original Assignee
Guangzhou Quyan Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Quyan Network Technology Co ltd filed Critical Guangzhou Quyan Network Technology Co ltd
Priority to CN202311796310.4A
Publication of CN117692518A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63: Routing a service request depending on the request content or context

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a service scheduling method, a device, equipment and a storage medium. The method comprises the following steps: when a service request for processing service data is received from a service end, querying a buffer area for a service result obtained by an external service processing that service data; if such a result exists, sending the service result in the buffer area to the service end; if not, calling the external service to process the service data according to the service request and waiting for the service result obtained by the external service processing the service data; if waiting for the service result times out, sending failure information to the service end, so that the service end triggers again a service request for processing the same service data, while waiting for the service result continues; and if the service result arrives after the timeout, storing the service result in the buffer area. The method and the device realize a slow (deferred) response: a single operation of scheduling the external service serves multiple service requests from the service end, which greatly improves resource utilization and lightens the burden on the system.

Description

Service scheduling method, device, equipment and storage medium
Technical Field
The present invention relates to the field of network communications technologies, and in particular, to a service scheduling method, apparatus, device, and storage medium.
Background
In a system cluster, external services provided to the cluster are often invoked to implement certain functions, such as identity authentication, binding bank cards, or cross-domain account login through other systems.
These external services may be unstable and may also introduce delays when they in turn invoke other systems, all of which can degrade the performance of the system cluster, particularly when the cluster issues a large number of concurrent requests.
To protect the performance of the system cluster, current approaches mostly rely on rate-limiting and load-balancing algorithms to clip request peaks; for an individual request, a timeout mechanism is usually configured, and the request is terminated when it times out.
However, terminating the request means that the work already done by the system cluster and the external service is discarded and resources are wasted; when the same request is reinitiated later, the system cluster and the external service repeat the same operations, which increases the load and lowers the response efficiency of the request.
Disclosure of Invention
The invention provides a service scheduling method, a device, equipment and a storage medium, which are used for solving the problem of how to improve the efficiency of calling external services.
According to an aspect of the present invention, there is provided a service scheduling method, including:
when a service request of a service end for processing service data is received, inquiring whether a service result obtained by processing the service data by an external service exists in a buffer area;
if yes, the service result in the buffer area is sent to the service end;
if not, calling external service to process the service data according to the service request;
waiting for a service result obtained by the external service processing the service data;
if waiting for the service result times out, sending failure information to the service end, so that the service end triggers again a service request for processing the same service data, while waiting for the service result continues;
and if the service result is obtained after the timeout, storing the service result in the buffer area.
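For orientation, a minimal Java sketch of this flow is given below. The class and method names (PeakClipper, handle), the ConcurrentHashMap buffer and the fixed-size thread pool are illustrative assumptions, not part of the claimed method.

import java.util.Map;
import java.util.concurrent.*;

// Sketch of the claimed flow: check the buffer first, otherwise call the external
// service and wait up to a timeout; a result that arrives late is cached for reuse.
public class PeakClipper {
    private final Map<String, Object> buffer = new ConcurrentHashMap<>();
    private final ExecutorService threadPool = Executors.newFixedThreadPool(8);

    public Object handle(String idValue, Callable<Object> externalCall, long waitMs)
            throws Exception {
        Object cached = buffer.get(idValue);                       // query the buffer area
        if (cached != null) {
            return cached;                                         // send cached result to the service end
        }
        Future<Object> pending = threadPool.submit(externalCall);  // call the external service
        try {
            Object result = pending.get(waitMs, TimeUnit.MILLISECONDS); // wait for the service result
            buffer.put(idValue, result);
            return result;
        } catch (TimeoutException timedOut) {
            // keep waiting asynchronously and cache the late result for the next request
            CompletableFuture.runAsync(() -> {
                try { buffer.put(idValue, pending.get()); } catch (Exception ignored) { }
            });
            throw new IllegalStateException("external service timed out; retry later"); // failure information
        }
    }
}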
According to another aspect of the present invention, there is provided a service scheduling apparatus comprising:
the buffer area inquiring module is used for inquiring whether a service result obtained by processing the service data by external service exists in the buffer area or not when a service request of the service end for processing the service data is received; if yes, executing the service result sending module, and if not, executing the service calling module;
the service result sending module is used for sending the service result in the buffer area to the service end;
the service calling module is used for calling external service to process the service data according to the service request;
the service result waiting module is used for waiting for a service result obtained by the external service processing the service data;
the failure processing module is used for sending failure information to the service end if waiting for the service result times out, so that the service end triggers again a service request for processing the same service data and waiting for the service result continues;
and the service result storage module is used for storing the service result in the buffer area if the service result is obtained after the timeout.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the service scheduling method of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing a computer program for causing a processor to implement the service scheduling method according to any one of the embodiments of the present invention when executed.
In this embodiment, when a service request for processing service data is received from a service end, the buffer area is queried for a service result obtained by an external service processing that service data; if such a result exists, it is sent to the service end; if not, the external service is called to process the service data according to the service request, and the service result obtained by the external service processing the service data is awaited. If waiting for the service result times out, failure information is sent to the service end, so that the service end triggers again a service request for processing the same service data, while waiting for the service result continues; and if the service result arrives after the timeout, it is stored in the buffer area. When the first request times out, the request is actively disconnected, which avoids waiting a long time for the remotely scheduled external service to respond, guarantees response efficiency, reduces the coupling between the service end and the external service, and weakens the influence of the external service on the service end. Meanwhile, the external service asynchronously continues processing the service data, and the service result it produces is stored in the buffer area, so that the result is obtained quickly at the next request. This realizes the slow (deferred) response: a single operation of scheduling the external service serves multiple service requests from the service end, which greatly improves resource utilization and lightens the burden on the system.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a service scheduling method according to a first embodiment of the present invention;
FIG. 2 is a schematic view of a peak clipper according to a first embodiment of the invention;
fig. 3 is a schematic structural diagram of a service scheduling apparatus according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein are capable of being practiced otherwise than as specifically illustrated and described. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a service scheduling method according to a first embodiment of the present invention. The method may be performed by a service scheduling apparatus, which may be implemented in hardware and/or software and may be configured in an electronic device, where an asynchronous timeout control mechanism (Asynchronous Timeout Control Mechanism) is used to handle the situation in which a service end schedules an external service. As shown in fig. 1, the method comprises the following steps:
step 101, when a service request of a service end for processing service data is received, inquiring whether a service result obtained by processing the service data by external service exists in a buffer area; if yes, go to step 102, if no, go to step 103.
As shown in fig. 2, a system cluster (especially a distributed system) contains service ends (also called service systems) and a peak clipper, while a plurality of services (i.e., external services) exist outside the system cluster; these external services may be exposed to the service ends in the form of APIs (Application Programming Interfaces).
The peak clipper is independent of both the service ends and the external services. It is a public service shared by some or all of the service ends, supports multiplexing, and provides a standard, unified public capability: it can schedule some or all of the external services on behalf of some or all of the service ends, thereby achieving peak clipping.
The peak clipper also shields the differences between the API interfaces of different external services at the underlying communication layer, achieves interoperation, and provides communication support for the service ends, which reduces their development workload and shortens their development time.
The peak clipper is provided with a scheduler, implemented as a thread or the like; because it plays the role of the main scheduler within the peak clipper, the scheduler is also called the main thread.
The scheduler can receive, from a service end, a service request for processing service data; the request asks the corresponding external service to process the service data and obtain a service result.
For example, for an identity authentication service, the service data is the identity information of the user (such as name, certificate type and certificate number); the user provides the identity information to the system cluster, the system cluster distributes it to the service end responsible for identity authentication, and that service end invokes the external service responsible for identity authentication to authenticate the user's identity information.
Further, some service ends may be deployed on a third-party cloud. Under the cloud's specification, when such service ends implement certain functions they must uniformly call the external services provided by the cloud, and those external services in turn call the systems, independent of the cloud, that actually provide the service. The scheduling chain is therefore longer and less predictable, and abnormal service quality, timeouts and similar conditions easily occur.
In this regard, the present embodiment provides an asynchronous timeout control mechanism. A buffer area is provided in the peak clipper; the buffer area is storage space reserved in memory for the peak clipper and is used to asynchronously cache service results obtained by external services processing service data.
The scheduler may receive a service request from a service end for processing service data; this may be the first request for that service data or a repeated request for the same service data. The scheduler does not distinguish between the two: in either case it queries the buffer area for a service result obtained by processing the service data.
In a specific implementation, when a service end generates a service request according to the specification of the peak clipper, an algorithm such as MD5 (Message Digest Algorithm) or SHA (Secure Hash Algorithm) is used to generate a unique identification value from the service data, and both the service data and the identification value are written into the service request.
In addition, the buffer area stores a two-column (key-value) collection that maps identification values to their service results (data payloads).
Then, the scheduler may read the service identifier value generated according to the service data at the designated position in the service request, and query whether there is a service result obtained by processing the service data by the external service in the buffer area with the service identifier value as an index, that is, traverse the service identifier stored in the buffer area, and determine whether the service identifier stored in the buffer area is the same as the service identifier of the current service request.
If a certain service identifier stored in the buffer area is the same as the service identifier of the current service request, it can be determined that a service result obtained by processing service data by an external service exists in the buffer area, that is, the service result corresponding to the service identifier value is the service result obtained by processing the service data by the external service.
If no service identifier stored in the buffer is the same as the service identifier of the current service request, it may be determined that there is no service result obtained by processing the service data by the external service in the buffer.
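As a concrete illustration of this lookup, the Java sketch below derives an MD5 identification value from the service data and uses it as the index into the buffer; the class and helper names (BufferIndex, idValue, lookup, store) are assumptions made for illustration, and SHA could be substituted by changing the digest name.

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: derive a unique identification value from the service data and use it as
// the index into the buffer; a null lookup means no cached service result exists yet.
public class BufferIndex {
    private final Map<String, Object> buffer = new ConcurrentHashMap<>();

    public static String idValue(String serviceData) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(serviceData.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, digest).toString(16);   // hex identification value
    }

    public Object lookup(String serviceData) throws Exception {
        return buffer.get(idValue(serviceData));
    }

    public void store(String serviceData, Object serviceResult) throws Exception {
        buffer.put(idValue(serviceData), serviceResult); // index the result by the identification value
    }
}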
Step 102, sending the service result in the buffer area to the service end.
As shown in fig. 2, if it is queried that there is a service result obtained by processing service data by an external service in the buffer, the scheduler may read the service result obtained by processing the service data by the external service from the buffer and send the service result to the service end.
Step 103, calling external service to process service data according to the service request.
As shown in fig. 2, if no service result obtained by the external service processing the service data is found in the buffer, the scheduler may, according to the service request, call the corresponding external service to process the service data submitted by the service end.
In a specific implementation, a thread pool is provided in the peak clipper; a thread pool is a pattern for using threads. The scheduler acts as a supervisor and issues tasks for invoking external services to the thread pool, while the thread pool maintains a number of threads waiting for the supervisor to allocate such tasks; relative to the scheduler (the main thread), the threads maintained by the thread pool may be called sub-threads.
Using the thread pool to schedule external services avoids the cost of creating and destroying threads for short-lived tasks, ensures full utilization of the processor cores, prevents over-scheduling, and supports a large number of concurrent service requests.
The scheduler may read the service identification information of the external service (e.g., an ID or API name) and the service data from the service request, and allocate a sub-thread from the thread pool for the current service data (according to the service identification information). Considering that different external services communicate using different protocols, e.g., HTTP (Hypertext Transfer Protocol), gRPC (a remote procedure call protocol) or others, the thread pool may maintain the relationship between each external service and its protocol.
Then, the sub-thread may be invoked to send the service data, using the protocol adapted to the external service according to the service identification information, to the external service (API interface) for processing.
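A minimal sketch of this dispatch step is shown below. The ProtocolAdapter interface, the adapter registry keyed by service identification information, and the pool size are assumptions made for illustration; real adapters would wrap an HTTP client, a gRPC stub, or another transport.

import java.util.Map;
import java.util.concurrent.*;

// Sketch: the scheduler (main thread) hands the call to a pooled sub-thread and picks a
// protocol adapter (HTTP, gRPC, ...) based on the service identification information.
interface ProtocolAdapter {
    String call(String serviceId, String serviceData) throws Exception;
}

public class Dispatcher {
    private final ExecutorService threadPool = Executors.newFixedThreadPool(8);
    private final Map<String, ProtocolAdapter> adapters;    // serviceId -> adapter

    public Dispatcher(Map<String, ProtocolAdapter> adapters) {
        this.adapters = adapters;
    }

    public Future<String> dispatch(String serviceId, String serviceData) {
        ProtocolAdapter adapter = adapters.get(serviceId);   // protocol adapted to this external service
        return threadPool.submit(() -> adapter.call(serviceId, serviceData)); // sub-thread performs the call
    }
}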
Step 104, waiting for a service result obtained by processing the service data by the external service.
As shown in fig. 2, the scheduler may wait for a service result obtained by processing the service data by the external service within a preset waiting time while invoking the external service to process the service data according to the service request.
In a specific implementation, a peak timer is arranged in the peak clipper; the peak timer can take over part of the scheduler's work, which is how the slow (deferred) response is realized.
Different external services differ in performance and stability, so a reasonable waiting time, used as the timeout watermark (Timeout Watermark), can be configured in advance for each external service according to its scheduling information.
The scheduler may then read service identification information of the external service in the service request, query a waiting time configured for the external service according to the service identification information, and send the waiting time to the peak timer for timing.
During the peak timer timing, the scheduler may wait for the service result obtained by the external service processing the service data, i.e. wait for the sub-thread to return the service result obtained by the external service processing the service data.
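One way to realize the per-service timeout watermark described here is sketched below. The configuration map, the default waiting time and the class name TimeoutWatermark are assumptions; the essential point is that the wait on the sub-thread's pending result is bounded by the configured time.

import java.util.Map;
import java.util.concurrent.*;

// Sketch: look up the waiting time configured for this external service and bound the
// wait on the sub-thread's result by that timeout watermark.
public class TimeoutWatermark {
    private final Map<String, Long> waitMsByService;         // serviceId -> configured waiting time

    public TimeoutWatermark(Map<String, Long> waitMsByService) {
        this.waitMsByService = waitMsByService;
    }

    public String await(String serviceId, Future<String> pending)
            throws InterruptedException, ExecutionException, TimeoutException {
        long waitMs = waitMsByService.getOrDefault(serviceId, 3000L);
        return pending.get(waitMs, TimeUnit.MILLISECONDS);   // throws TimeoutException at the watermark
    }
}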
Step 105, if waiting for the service result times out, failure information is sent to the service end, so that the service end triggers again a service request for processing the same service data, while waiting for the service result continues.
As shown in fig. 2, if the scheduler obtains the service result produced by the external service before the timeout, that is, the sub-thread returns the service result to the scheduler before the waiting time is reached, the scheduler may simply send the service result to the service end.
In the asynchronous timeout control mechanism provided in this embodiment, a timeout is introduced into the asynchronous operation. If the scheduler times out while waiting for the service result, that is, the sub-thread has not returned the service result by the time the waiting time is reached, the scheduler may interrupt the wait and send failure information (a status code indicating that the call to the external service failed) to the service end. This prevents the performance of the service end from degrading because a service request remains incomplete for a long time, and ensures that the response to the service end is not affected by the external service.
Further, interrupting the service request of the service end is only a logical operation: the sub-thread keeps scheduling the external service, and the peak clipper continues to wait for the service result obtained by the external service processing the service data.
When the service end receives the failure information, the service end can decide whether to trigger the service request for processing the same service data again according to the code logic of the service end.
When the number of times the service request for the same service data has been triggered reaches a threshold, when a service operation unrelated to the external service has failed, when the service has been interrupted, or in similar cases, the service end may decide not to trigger the service request again; at that point alarm information is generated to prompt operation and maintenance personnel to handle the exception.
When the number of triggers has not reached the threshold, the service operations unrelated to the external service are normal, the service itself is normal, and so on, the service end may determine, according to its own code logic, when to trigger the service request for the same service data again; when that time arrives, a new service request is generated from the same service data and sent to the peak clipper.
In a specific implementation, if the scheduler receives a message that the peak timer has finished timing, which indicates that the waiting time has expired, the scheduler may mount the task of waiting for the service result onto the peak timer: the scheduler informs the peak timer of the task it previously sent to the thread pool for scheduling the external service to process the service data, and notifies the thread pool to change the supervisor of that task to the peak timer. The peak timer then maintains the task and continues to wait for the service result, that is, it continues to wait for the sub-thread to return the service result obtained by the external service processing the service data (a sketch of this handover follows below).
In addition, the scheduler may send failure information to the service end, so that the service end ends the current call to the external service, continues performing its service operations, and triggers a service request for processing the same service data when some or all of those service operations are completed (which marks the arrival of the re-trigger opportunity).
In general, the services implemented by the various service ends involve many service operations, of which calling the external service is only one. When the service end receives the failure information, it can suspend the service operations that depend on the external service and continue executing those that do not, so that some prerequisite operations are completed while waiting for the external service to process the service data, which improves service processing efficiency.
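The sketch below illustrates the handover just described: when the watermark is hit, the scheduler returns failure information immediately, while a separate single-thread executor standing in for the peak timer keeps waiting for the pending sub-thread result. The class and method names and the use of an executor as the peak timer are assumptions for illustration only.

import java.util.concurrent.*;
import java.util.function.Consumer;

// Sketch: on timeout, report failure to the service end right away, but transfer
// supervision of the still-running call so a late service result is not lost.
public class TimeoutHandover {
    private final ExecutorService peakTimer = Executors.newSingleThreadExecutor();

    public String awaitOrHandOver(Future<String> pending, long waitMs,
                                  Consumer<String> lateResultSink) throws Exception {
        try {
            return pending.get(waitMs, TimeUnit.MILLISECONDS);       // result arrived in time
        } catch (TimeoutException timedOut) {
            // hand the pending task to the peak timer; it keeps waiting without a deadline
            peakTimer.submit(() -> {
                try { lateResultSink.accept(pending.get()); } catch (Exception ignored) { }
            });
            return "CALL_EXTERNAL_SERVICE_FAILED";                    // failure status for the service end
        }
    }
}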
Step 106, if the service result is obtained after the timeout, storing the service result in the buffer area.
As shown in fig. 2, if the service result obtained by the external service processing the service data arrives after the timeout, that is, the sub-thread eventually returns the service result, the result may be stored in the buffer; the next time the service end requests processing of the same service data, the corresponding service result is returned directly, realizing the slow (deferred) response.
In a specific implementation, when the peak timer receives the service result returned by the sub-thread, a hook function (Hook Function) configured for the buffer area can be triggered; executing the hook function stores the service result in the buffer area by way of an event callback (Event Callback).
The hook function refers to a predefined function that can be called when a specific event or state occurs, and is generally used to expand or customize the behavior of the peak clipper (e.g., store business results in a buffer).
Event callbacks refer to the notification of a related module or object (i.e., buffer) by triggering a predefined callback function to perform a specific logic or operation (i.e., store business results) after an asynchronous operation is completed.
When the service result is stored, the service identification value generated according to the service data in the service request can be queried, and the service result is stored in the buffer zone by taking the service identification value as an index, so that the relationship between the service data and the service result can be conveniently distinguished.
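The Java sketch below illustrates this event-callback style of storing the late result under its identification value; the hook registration (a BiConsumer standing in for the hook function) and the class name ResultBuffer are assumptions made for illustration.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

// Sketch: the buffer exposes a hook that is called back when a late service result
// arrives; the result is stored under the identification value of the service data.
public class ResultBuffer {
    private final Map<String, Object> store = new ConcurrentHashMap<>();
    private final BiConsumer<String, Object> hook = store::put;    // hook function: cache the result

    public void onLateResult(String idValue, Object serviceResult) {
        hook.accept(idValue, serviceResult);                       // event callback into the buffer
    }

    public Object get(String idValue) {
        return store.get(idValue);
    }
}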
Further, in this embodiment, the observer pattern (Observer Pattern) may be used to implement the asynchronous response-callback mechanism.
The observer pattern is a design pattern for defining one-to-many dependency relationships: when the state of one object changes, all objects that depend on it are notified and updated automatically.
In the observer pattern, a subject interface can be defined that includes methods for registering observers, removing observers, notifying observers, and initiating asynchronous requests.
In the observer pattern, an observer interface may be defined, which contains an update method for receiving notifications when the state of the subject changes.
In the observer pattern, a subject class may be defined that notifies the internally maintained observers when the external service responds successfully, and returns the dispatched thread to the thread pool.
In the observer pattern, an observer class may be defined that implements the observer interface and specifies how the observer updates itself when the subject's state changes.
In a specific implementation, the observer pattern may be set up in advance: the subject class is used to define the peak timer as the subject, the observer interface is used to define an observer in the buffer, and the subject interface is used to register the observer with the subject.
When the peak timer receives a service result (representing a change of the subject state) returned by the sub-thread and obtained by processing the service data by the external service, a publisher in the observer is called, and a message for receiving the service result is published to the observer.
When the message is received by the observer through the updating method, a hook function in the observer is triggered, so that the hook function is executed, and a service result is stored in a buffer area through an event callback mode.
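Below is a compact, generic observer-pattern sketch of the roles described here, with the peak timer as the subject and the buffer as the observer; the class names PeakTimerSubject and BufferObserver are illustrative assumptions rather than names used by the patent.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: the peak timer acts as the subject; the buffer registers an observer whose
// update method stores the service result when the subject publishes it.
interface ResultObserver {
    void update(String idValue, Object serviceResult);
}

class PeakTimerSubject {
    private final List<ResultObserver> observers = new ArrayList<>();

    void register(ResultObserver observer) { observers.add(observer); }

    void publish(String idValue, Object serviceResult) {     // called when the late result arrives
        for (ResultObserver observer : observers) {
            observer.update(idValue, serviceResult);
        }
    }
}

class BufferObserver implements ResultObserver {
    private final Map<String, Object> buffer = new ConcurrentHashMap<>();

    public void update(String idValue, Object serviceResult) {
        buffer.put(idValue, serviceResult);                   // hook: store the result in the buffer
    }

    public Object get(String idValue) { return buffer.get(idValue); }
}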
In this embodiment, when a service request for processing service data is received from a service end, the buffer area is queried for a service result obtained by an external service processing that service data; if such a result exists, it is sent to the service end; if not, the external service is called to process the service data according to the service request, and the service result obtained by the external service processing the service data is awaited. If waiting for the service result times out, failure information is sent to the service end, so that the service end triggers again a service request for processing the same service data, while waiting for the service result continues; and if the service result arrives after the timeout, it is stored in the buffer area. When the first request times out, the request is actively disconnected, which avoids waiting a long time for the remotely scheduled external service to respond, guarantees response efficiency, reduces the coupling between the service end and the external service, and weakens the influence of the external service on the service end. Meanwhile, the external service asynchronously continues processing the service data, and the service result it produces is stored in the buffer area, so that the result is obtained quickly at the next request. This realizes the slow (deferred) response: a single operation of scheduling the external service serves multiple service requests from the service end, which greatly improves resource utilization and lightens the burden on the system.
Example two
Fig. 3 is a schematic structural diagram of a service scheduling device according to a second embodiment of the present invention.
As shown in fig. 3, the apparatus includes:
the buffer area query module 301 is configured to query, when a service request for processing service data by a service end is received, whether a service result obtained by processing the service data by an external service exists in a buffer area; if yes, executing a service result sending module 302, and if not, executing a service calling module 303;
a service result sending module 302, configured to send the service result in the buffer to the service end;
a service calling module 303, configured to call an external service to process the service data according to the service request;
a service result waiting module 304, configured to wait for a service result obtained by the external service processing the service data;
a failure processing module 305, configured to send failure information to the service end if waiting for the service result times out, so that the service end triggers again a service request for processing the same service data and waiting for the service result continues;
and a service result storage module 306, configured to store the service result in the buffer if the service result is obtained after the timeout.
In one embodiment of the present invention, the buffer query module 301 includes:
a service identification value reading module for reading a service identification value generated according to the service data in the service request;
the index inquiry module is used for inquiring whether a service result obtained by processing the service data by an external service exists or not by taking the service identification value as an index in the buffer area;
the service result storage module 306 includes:
the service identification value query module is used for querying a service identification value generated according to the service data in the service request;
and the index storage module is used for storing the service result in the buffer zone by taking the service identification value as an index.
In one embodiment of the present invention, the service invocation module 303 includes:
the service identification information reading module is used for reading the service identification information of the external service in the service request;
the sub-thread allocation module is used for allocating a sub-thread for the service data from a thread pool;
and the sub-thread calling module is used for calling the sub-thread to send the service data to the external service for processing by using a protocol adapted to the external service according to the service identification information.
In one embodiment of the present invention, the service result waiting module 304 includes:
the service identification information reading module is used for reading the service identification information of the external service in the service request;
the waiting time inquiry module is used for inquiring the waiting time configured for the external service according to the service identification information;
the timing notification module is used for sending the waiting time to a peak timer for timing;
and the timing waiting module is used for waiting for the service result obtained by the service data processed by the external service during timing.
In one embodiment of the present invention, the failure processing module includes:
the task mounting module is used for mounting the task waiting for the service result to the peak timer if the message of the end of the timing of the peak timer is received;
and the failure information sending module is used for sending the failure information to the service end, so that the service end ends the current call to the external service and continues to execute the service operations, and triggers a service request for processing the same service data when some or all of the service operations are completed.
In one embodiment of the present invention, the service result storage module 306 includes:
the hook function triggering module is used for triggering a hook function configured for the buffer area when the peak timer receives the service result;
and the hook function executing module is used for executing the hook function and storing the service result in the buffer area.
In one embodiment of the present invention, the hook function triggering module includes:
an observer mode starting module for starting an observer mode to define the peak timer as a theme, define an observer in the buffer, and register the observer into the theme;
the message issuing module is used for calling an issuer in the observer when the peak timer receives the service result and issuing a message for receiving the service result to the observer;
and the observer triggering module is used for triggering a hook function in the observer when the message is received in the observer.
The service scheduling device provided by the embodiment of the invention can execute the service scheduling method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the service scheduling method.
Example III
Fig. 4 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the service scheduling method.
In some embodiments, the service scheduling method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the service scheduling method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the service scheduling method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service are overcome.
Example IV
Embodiments of the present invention also provide a computer program product comprising a computer program which, when executed by a processor, implements a service scheduling method as provided by any of the embodiments of the present invention.
In implementing the computer program product, the computer program code for carrying out operations of the present invention may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method of service scheduling, comprising:
when a service request of a service end for processing service data is received, inquiring whether a service result obtained by processing the service data by an external service exists in a buffer area;
if yes, the service result in the buffer area is sent to the service end;
if not, calling external service to process the service data according to the service request;
waiting for a service result obtained by the external service processing the service data;
if waiting for the service result times out, sending failure information to the service end, so that the service end triggers again a service request for processing the same service data, while waiting for the service result continues;
and if the service result is obtained after the timeout, storing the service result in the buffer area.
2. The method according to claim 1, wherein:
the step of inquiring whether the service result obtained by processing the service data by the external service exists in the buffer zone comprises the following steps:
reading a service identification value generated according to the service data in the service request;
inquiring whether a service result obtained by processing the service data by an external service exists or not by taking the service identification value as an index in a buffer zone;
said storing said business result in said buffer comprises:
inquiring a service identification value generated according to the service data in the service request;
and storing the service result in the buffer zone by taking the service identification value as an index.
3. The method of claim 1, wherein invoking an external service to process the business data in accordance with the business request comprises:
reading service identification information of external service in the service request;
distributing a sub-thread for the service data from a thread pool;
and calling the sub-thread to send the service data to the external service for processing by using a protocol adapted to the external service according to the service identification information.
4. A method according to any one of claims 1-3, wherein waiting for a service result from processing the service data by the external service comprises:
reading service identification information of external service in the service request;
inquiring waiting time configured for the external service according to the service identification information;
sending the waiting time to a peak timer for timing;
and waiting for a service result obtained by the external service for processing the service data during timing.
5. The method according to claim 4, wherein, if waiting for the service result times out, sending failure information to the service end so that the service end triggers a service request for processing the same service data, and continuing to wait for the service result, comprises:
if the message of ending the timing of the peak timer is received, the task waiting for the service result is mounted to the peak timer;
and sending failure information to the service end, so that the service end finishes calling the external service, continues to execute service operation, and triggers a service request for processing the same service data when part or all of the service operation is completed.
6. The method according to claim 5, wherein storing the service result in the buffer if the service result is obtained after the timeout comprises:
triggering a hook function configured for the buffer area when the peak timer receives the service result;
executing the hook function and storing the service result in the buffer.
7. The method according to claim 6, wherein triggering the hook function configured for the buffer when the peak timer receives the service result comprises:
starting an observer pattern to define the peak timer as a subject, define an observer in the buffer, and register the observer with the subject;
when the peak timer receives the service result, calling a publisher in the observer, and publishing a message for receiving the service result to the observer;
when the message is received in the observer, a hook function in the observer is triggered.
8. A service scheduling apparatus, comprising:
the buffer area inquiring module is used for inquiring whether a service result obtained by processing the service data by external service exists in the buffer area or not when a service request of the service end for processing the service data is received; if yes, executing the service result sending module, and if not, executing the service calling module;
the service result sending module is used for sending the service result in the buffer area to the service end;
the service calling module is used for calling external service to process the service data according to the service request;
the service result waiting module is used for waiting for a service result obtained by the external service processing the service data;
the failure processing module is used for sending failure information to the service end if waiting for the service result times out, so that the service end triggers again a service request for processing the same service data and waiting for the service result continues;
and the service result storage module is used for storing the service result in the buffer area if the service result is obtained after the timeout.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the service scheduling method of any one of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program for causing a processor to implement the service scheduling method of any one of claims 1-7 when executed.
CN202311796310.4A 2023-12-25 2023-12-25 Service scheduling method, device, equipment and storage medium Pending CN117692518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311796310.4A CN117692518A (en) 2023-12-25 2023-12-25 Service scheduling method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311796310.4A CN117692518A (en) 2023-12-25 2023-12-25 Service scheduling method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117692518A (en) 2024-03-12

Family

ID=90136989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311796310.4A Pending CN117692518A (en) 2023-12-25 2023-12-25 Service scheduling method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117692518A (en)

Similar Documents

Publication Publication Date Title
US10671458B2 (en) Epoll optimisations
US7634542B1 (en) System and method to manage service instances for request processing
US11907762B2 (en) Resource conservation for containerized systems
CN111427751B (en) Method and system for processing business based on asynchronous processing mechanism
Ouyang et al. Reducing late-timing failure at scale: Straggler root-cause analysis in cloud datacenters
CN115525411A (en) Method, device, electronic equipment and computer readable medium for processing service request
CN108628677B (en) Distributed task processing system, method and device
CN116661960A (en) Batch task processing method, device, equipment and storage medium
CN115964153A (en) Asynchronous task processing method, device, equipment and storage medium
WO2022095862A1 (en) Thread priority adjusting method, terminal, and computer readable storage medium
CN109388501B (en) Communication matching method, device, equipment and medium based on face recognition request
CN111198753A (en) Task scheduling method and device
CN111290842A (en) Task execution method and device
CN117573355A (en) Task processing method, device, electronic equipment and storage medium
CN117667144A (en) Annotating hot refreshing method, annotating hot refreshing device, annotating hot refreshing equipment and annotating hot refreshing medium
CN115686813A (en) Resource scheduling method and device, electronic equipment and storage medium
US7797473B2 (en) System for executing system management interrupts and methods thereof
CN111597056A (en) Distributed scheduling method, system, storage medium and device
CN117692518A (en) Service scheduling method, device, equipment and storage medium
CN111488373A (en) Method and system for processing request
CN112448977A (en) System, method, apparatus and computer readable medium for assigning tasks
CN114374657A (en) Data processing method and device
CN114466079B (en) Request processing method, device, proxy server and storage medium
CN114924806B (en) Dynamic synchronization method, device, equipment and medium for configuration information
CN115878290A (en) Job processing method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination