CN113778642A - Network service scheduling method and device, intelligent terminal and storage medium


Info

Publication number: CN113778642A (application CN202110921267.4A)
Authority: CN (China)
Prior art keywords: function, business, service, flow function, engine
Legal status: Granted; Active
Inventor: 于洋子
Current and original assignee: Guangzhou Huya Technology Co Ltd
Other languages: Chinese (zh)
Other versions: CN113778642B (granted publication)

Classifications

    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues (under G06F9/48, Program initiating; program switching)
    • G06F9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals (under G06F9/50, Allocation of resources)
    • G06F9/5088: Techniques for rebalancing the load in a distributed system involving task migration (under G06F9/5083, Techniques for rebalancing the load in a distributed system)

All three classifications fall under G (Physics); G06 (Computing; calculating or counting); G06F (Electric digital data processing); G06F9/00 (Arrangements for program control); G06F9/06 (using stored programs); G06F9/46 (Multiprogramming arrangements).


Abstract

The application discloses a network service scheduling method and apparatus, an intelligent terminal and a storage medium. The network service scheduling method comprises the following steps: starting at least one engine process and loading at least one business flow function; determining the business flow function as a parent business flow function, and loading at least one child business flow function of the parent business flow function into the engine process of that business flow function; determining the child business flow function as a new parent business flow function, and repeatedly executing the step of loading at least one child business flow function of the parent business flow function into the engine process until the single-service network service thus formed meets the set condition; and running the single-service network service. With this scheme, the network service scheduling method can effectively reduce the cost of network operation and maintenance, reduce the overall hardware performance wasted on network communication, and reduce the corresponding request response delay.

Description

Network service scheduling method and device, intelligent terminal and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method and an apparatus for scheduling a network service, an intelligent terminal, and a storage medium.
Background
In the early stage of internet development, background network architectures were almost all single services, that is, all business logic was written in one or a few background processes. A single service, however, makes multi-user collaboration difficult, cannot isolate faults, and is inconvenient to scale horizontally. As internet business has grown more and more complex, these defects have been gradually amplified, and the single service has gradually been replaced by the new micro-service architecture: a network architecture that splits the business into small blocks along two dimensions, horizontal and vertical, implements each small block of business as one service process, and completes the whole business flow through the cooperation of these processes over network communication.

With the development and popularity of the micro-service architecture in the internet technology field, replacing the single-service architecture with the micro-service architecture has gradually become mainstream, mainly because the micro-service architecture genuinely solves the problems that multi-user collaboration on a single service is difficult, faults cannot be isolated, and horizontal scaling is inconvenient.

However, the micro-service architecture also has defects: high operation and maintenance cost caused by an excessive number of service processes, waste of overall hardware performance caused by redundant network communication, and high request response delay.

With the failure of Moore's law, hardware performance improvement has entered a bottleneck, while the explosive growth of internet user bases means that large internet companies run more and more background servers; some companies have even reached the scale of hundreds of thousands of servers. At such a scale, the cost overhead of the hardware performance wasted by the micro-service architecture has become very large.

With the rapid advance of the internet industry, the fierce competitive environment keeps pushing business complexity up, so the number of background micro-service processes at large internet companies grows explosively, operation and maintenance work becomes more difficult, and the operation and maintenance cost grows higher and higher.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a network service scheduling method and apparatus, an intelligent terminal and a storage medium, so as to solve the problems in the prior art that a single service makes multi-user collaboration difficult, cannot isolate faults and is inconvenient to scale horizontally, and that the micro-service architecture suffers from high operation and maintenance cost caused by excessive service processes, waste of overall hardware performance on redundant network communication, and high request response delay.

In order to solve the above problem, a first aspect of the present application provides a network service scheduling method, comprising: starting at least one engine process and loading at least one business flow function; determining the business flow function as a parent business flow function, and loading at least one child business flow function of the parent business flow function into the engine process of that business flow function; determining the child business flow function as a new parent business flow function, and repeatedly executing the step of loading at least one child business flow function of the parent business flow function into the engine process until the single-service network service thus formed meets the set condition; and running the single-service network service.
Before starting at least one engine process and loading at least one business flow function, the method further comprises: acquiring initial code that meets the standard interface specification; and constructing the initial code into an executable business flow function.

Constructing the initial code into an executable business flow function comprises: performing source-code analysis on the initial code to obtain its function name and function signature, and correspondingly generating an initial function with a fixed signature; appending the initial function to the end of the source code of the initial code to obtain preprocessed source code; and compiling the preprocessed source code into a business flow function in executable bytecode form and storing it in a database.
Starting at least one engine process and loading at least one business flow function comprises: downloading at least one business flow function from the database, and loading it into at least one correspondingly started engine process.

Loading at least one child business flow function of the parent business flow function into the engine process of the business flow function comprises: according to the network address of each business flow function, loading at least one child business flow function on the called side of a stable calling relationship into the engine process corresponding to the parent business flow function on the calling side of that relationship.
When the number of business flow functions loaded into engine processes is at least two, then after starting at least one engine process and loading at least one business flow function and determining the business flow function as a parent business flow function, and before loading at least one child business flow function of the parent business flow function into the engine process of the business flow function, the method comprises: acquiring the calling-relation data with which each business flow function calls other business flow functions; and correspondingly forming at least two business flow functions into a calling relation tree based on the calling-relation data. The calling relation tree comprises at least one call chain; the most upstream business flow function of each call chain is a root node of the calling relation tree, and the downstream business flow functions of each call chain are child nodes of the calling relation tree. Loading at least one child business flow function of the parent business flow function into the engine process of the business flow function then comprises: traversing each business flow function in the calling relation tree, so as to load the child business flow function corresponding to at least one child node into the engine process of the parent business flow function corresponding to that child node's root node.

In the engine process, child nodes of the calling relation tree that are called more often are located on one side of the root node of the calling relation tree, and child nodes that are called less often are located on the opposite side.

Meeting the set condition comprises: the single-service network service thus formed reaching the load upper limit of a single process.
The network service scheduling method is applied to a processor; the processor is communicatively connected to each server in a server cluster, and at least one engine process runs on each server. Starting at least one engine process and loading at least one business flow function comprises: acquiring the resource occupancy rate of each server in the server cluster, where the resource occupancy rate is the usage rate of the server; and starting at least one engine process on a server in the cluster whose resource occupancy rate is lower than a first preset resource occupancy rate, and loading at least one business flow function.

The network service scheduling method further comprises: migrating some of the engine processes running on any server in the cluster whose resource occupancy rate exceeds a second preset resource occupancy rate to other servers, ensuring that the resource occupancy rate of each server in the cluster stays below the first preset resource occupancy rate; the second preset resource occupancy rate is greater than the first preset resource occupancy rate.

The network service scheduling method further comprises: when the resource occupancy rate of every server in the cluster is lower than a third preset resource occupancy rate, migrating all engine processes running on the server with the minimum resource occupancy rate to other servers.

After starting at least one engine process on a server in the cluster whose resource occupancy rate is lower than the first preset resource occupancy rate to correspondingly load each business flow function, the method further comprises: sorting the servers in the cluster by resource occupancy rate; and starting at least one engine process on any server ranked in the last 10% (that is, among the least occupied servers), and loading each newly acquired business flow function.
In order to solve the above problem, a second aspect of the present application provides a network service scheduling apparatus, comprising: a processing module, configured to start at least one engine process and load at least one business flow function; a scheduling module, configured to determine the business flow function as a parent business flow function, load at least one child business flow function of the parent business flow function into the engine process of that business flow function, determine the child business flow function as a new parent business flow function, and repeatedly execute the step of loading at least one child business flow function of the parent business flow function into the engine process until the single-service network service thus formed meets the set condition; and a running module, configured to run the single-service network service.
In order to solve the above problem, a third aspect of the present application provides an intelligent terminal, where the intelligent terminal includes a memory and a processor coupled to each other, and the processor is configured to execute program instructions stored in the memory to implement the scheduling method of the network service according to the first aspect.
In order to solve the above problem, a fourth aspect of the present application provides a computer-readable storage medium on which program instructions are stored, and the program instructions, when executed by a processor, implement the scheduling method of the network service of the first aspect.
The invention has the following beneficial effects. Different from the prior art, the network service scheduling method of the present application starts at least one engine process and loads at least one business flow function, determines the business flow function as a parent business flow function, and loads at least one child business flow function of the parent business flow function into the engine process of that business flow function; it then determines the child business flow function as a new parent business flow function and repeatedly executes the loading step until the single-service network service thus formed meets the set condition. In this way, child and parent business flow functions that belong to different engine processes but have a calling relationship can be pulled, in turn, into the same engine process, so that the network communication between them is converted into in-process communication, and the micro-service network service is correspondingly converted into a single-service network service that is then run. This effectively reduces the cost of network operation and maintenance, reduces the overall hardware performance wasted on network communication, and also reduces the corresponding request response delay.
Drawings
Fig. 1 is a schematic flowchart of a first embodiment of a scheduling method for network services according to the present application;
FIG. 2 is a block diagram of an embodiment of a network management platform;
FIG. 3 is a schematic flow chart of one embodiment of S11 of FIG. 1;
FIG. 4 is a flowchart illustrating a second embodiment of a scheduling method of network services according to the present application;
FIG. 5 is a schematic flow chart diagram illustrating one embodiment of S22 of FIG. 4;
FIG. 6 is a flowchart illustrating a third embodiment of a scheduling method of network services according to the present application;
FIG. 7 is a block diagram of an embodiment of a scheduling apparatus of network services;
FIG. 8 is a block diagram of an embodiment of a smart terminal according to the present application;
FIG. 9 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The inventor finds that, in the early stage of internet development, the background architecture was almost always a single service: all business logic was written in one or a few background engine processes.

The single service has the advantages of extremely high hardware utilization and extremely low request response delay, and the defects that multi-user collaboration is difficult, faults cannot be isolated, and horizontal scaling is inconvenient.

In those early years Moore's law still held and hardware performance kept doubling, while the complexity of internet business grew higher and higher; the advantages of the single service became less and less obvious and its defects were slowly amplified, until the micro-service architecture was popularized on a large scale: the business is split into small blocks along two dimensions, horizontal and vertical, each small block of business is implemented by one service process, and the processes cooperate through network communication to complete the whole business flow.
The micro-service architecture has now been popular in the industry for over ten years and has gradually become mainstream, replacing the single-service architecture, mainly because it actually solves several pain points of the single service.

The advantages of the micro-service architecture are extremely efficient multi-person collaboration, a controllable fault range and extremely easy horizontal scaling, but it also has some drawbacks: high operation and maintenance cost caused by an excessive number of service processes, waste of overall hardware performance caused by redundant network communication, and high request response delay.

However, as Moore's law fails, hardware performance improvement has entered a bottleneck, while the explosive growth of internet user bases means that large internet companies run more and more background servers; some companies have even reached the scale of hundreds of thousands of servers. Against such a massive background, the extra cost of the hardware performance wasted by the micro-service architecture has become significant.

In addition, the internet industry advances rapidly, and the fierce competitive environment keeps pushing business complexity up, so the number of background micro-service processes at large internet companies grows explosively, operation and maintenance work becomes more difficult, and the operation and maintenance cost grows higher and higher.
Around 2014, large internet companies proposed the concept of "service governance": one or more sets of extremely complex systems that help operations staff and developers comb the relationships between services, collect logs and call data, generate data reports, report faults automatically, analyze fault links, and so on, thereby reducing the burden of manual work. This usually takes tens or even hundreds of developers to build and maintain over the long term; it is very expensive, and the effect is not as good as expected.

Meanwhile, large internet companies have brought lambda (anonymous function) platforms online: Serverless development platforms that take the background service one step further down the road of "splitting", down to the function level. The NoOps (no operation and maintenance) strategy proposed by Serverless solves the problem of difficult operation and maintenance, but the implementation of lambda brings a more serious "composition" problem: functions can hardly interact with one another and cannot join forces to complete complex business logic. This is not because lambda's developers lack skill; rather, as a business project, lambda's initial positioning is "simple business" and "business glue", targeting the need of "non-professional back-end developers" to develop simple back-end business. So Serverless does not cure the pain points of micro-services and instead becomes part of the micro-service architecture, although it does let us see a glimmer of hope.
In order to solve the problems of high operation and maintenance cost caused by excessive service processes in the micro-service architecture, waste of overall hardware performance caused by redundant network communication, and high request response delay, the present application provides a network service scheduling method. The present application is described in further detail below with reference to the drawings and embodiments. It should be noted that the following embodiments only illustrate the present application and do not limit its scope. Likewise, the following embodiments are only some, not all, of the embodiments of the present application, and all other embodiments obtained by a person of ordinary skill in the art without inventive work fall within the scope of the present application.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a scheduling method of network services according to a first embodiment of the present application. Specifically, the method may include the steps of:
S11: starting at least one engine process to load at least one business flow function, and determining the business flow function as a parent business flow function.
With the rapid development of the micro-service architecture, network services based on it have gradually replaced the single-service architecture as the mainstream. For an internet company, on the premise of keeping the micro-service architecture's highly efficient multi-user collaboration, controllable fault range and easy horizontal scaling, how to solve the high operation and maintenance cost caused by excessive service processes, the waste of overall hardware performance on redundant network communication, and the high request response delay is a key factor in the company's operating cost and in the network service its users enjoy, and it greatly influences the development of an internet platform.
Understandably, an Engine is a core component of a program or system developed on an electronic platform. Using an engine, a developer can quickly build and lay out the functions required by a program, or use it to assist the program's operation.

A Process is a running activity of a program over a data set in a computer; it is the basic unit of resource allocation and scheduling in the system, and the foundation of the operating-system structure. In early process-oriented computer architectures, the process was the basic execution entity of a program; in contemporary thread-oriented architectures, a process is a container for threads. A program is a description of instructions, data and their organization; a process is the running entity of a program.

A business flow function is a program function that can complete a network service, that is, a Script (an executable file written in a specific descriptive language according to a certain format) that is automatically built into an executable after being submitted to the background server. When the function is called externally, the server starts one or more engine processes as required to load the corresponding function and provide the service.

In the present embodiment, the engine process specifically carries, on the server, the network services corresponding to the various micro-service architectures.
Specifically, at least one engine process is started on the server, and at least one business flow function corresponding to the network service currently to be provided is loaded, in order, into one or more different engine processes.

It should be noted that business flow functions are added to one or more engine processes by gradual accumulation, following the order of the corresponding network service requests. Based on the characteristics of the micro-service architecture, the network service is split into small blocks along two dimensions, horizontal and vertical; each small block of business is implemented by one service process, i.e. one engine process, and the processes cooperate through network communication to complete the whole business flow.

Therefore, for a large number of users, many network request tasks of the same type may be issued to the server in the same period, for example requests to purchase different commodities from the same online store. Among the resulting large number of business flow functions there may thus be many cases in which at least two business flow functions are in a stable calling relationship with each other but belong to different engine processes.

In order to effectively reduce the high operation and maintenance cost caused by starting too many engine processes in turn, the waste of overall hardware performance caused by redundant network communication, and the high request response delay, in this embodiment the business flow function that is first added to an engine process is determined as the parent business flow function.
S12: loading at least one child business flow function of the parent business flow function into the engine process of the business flow function.

Further, after a business flow function has been loaded in an engine process and determined as the parent business flow function, the child business flow functions located downstream of it can be loaded into the same engine process in turn, according to the possible calling relationships between the parent business flow function and subsequently added business flow functions, thereby avoiding starting another engine process to load the child business flow function.

It should be noted that a parent business flow function and its child business flow function can be understood as two business flow functions in a stable upstream-downstream calling relationship, the child business flow function being the called one.

In the process of loading child business flow functions into the engine process of the parent business flow function in turn, the micro-service arrangement that would otherwise load the parent and child business flow functions into separately started engine processes is converted into a single service: only one engine process is started, and the scheduling communication between different engine processes, which would otherwise require network communication, is converted into in-process communication. This effectively reduces the cost of network operation and maintenance, reduces the overall hardware performance wasted on network communication, and also reduces the corresponding request response delay.
S13: judging whether the single-service network service formed by the engine process meets the set condition.

Understandably, limited by the load of business flow functions that a single engine process can accommodate, after at least one child business flow function has been loaded in turn into the engine process of the corresponding parent business flow function to form a single service, it is necessary to judge whether the single-service network service meets a set condition, for example: whether it has reached the load upper limit of a single process; or whether the corresponding child business flow function is not found, or does not exist, within a set time; or whether the single-service network service thus formed has reached the accommodation upper limit of any one or more of the single process's CPU, memory and bandwidth, or any other reasonable scenario setting, which is not limited in this application.

If the single-service network service formed by the engine process does not meet the set condition, S14 is executed; if it meets the set condition, S15 is executed.
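For illustration only, a minimal sketch of such a check might look as follows; the structure and field names below are hypothetical (they are not the patent's code), and the set condition is assumed to be the single-process load upper limit over CPU, memory and bandwidth:

    // A minimal illustrative sketch; all names here are assumptions.
    struct ResourceDemand {
        double cpuCores;      // CPU cores required
        double memoryMb;      // memory required, in MB
        double bandwidthMbps; // network bandwidth required, in Mbit/s
    };

    struct EngineProcessLoad {
        ResourceDemand used;  // accumulated demand of the loaded business flow functions
        ResourceDemand limit; // the single-process load upper limit

        // True when loading one more business flow function would exceed some limit,
        // i.e. the assembled single-service network service meets the set condition.
        bool meetsSetCondition(const ResourceDemand& next) const {
            return used.cpuCores + next.cpuCores > limit.cpuCores ||
                   used.memoryMb + next.memoryMb > limit.memoryMb ||
                   used.bandwidthMbps + next.bandwidthMbps > limit.bandwidthMbps;
        }
    };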
S14: determining the child business flow function as a parent business flow function.

Specifically, after a child business flow function has been loaded into the engine process of its parent business flow function and the single-service network service formed by that engine process does not yet meet the set condition, the child business flow function can be determined as a new parent business flow function, so that subsequently called, i.e. more downstream, child business flow functions can be loaded into the engine process. After S14 is executed, S13 is executed again, and the loop terminates once the single-service network service meets the set condition.

S15: running the single-service network service.
Specifically, after the single-service network service thus formed meets the set condition, it can be run correspondingly, so as to provide the corresponding network service for the user.

Understandably, when a further business flow function needs to be loaded later, a new engine process can be started to load it.

In this way, developers can write a number of business flow functions (which may also be called scripts) that cooperate to complete the work, in an agreed standard format, and submit them separately to the background server system, which automatically builds them into executables. When a business flow function is called externally, the server starts one or more engine processes as required to load the corresponding business flow function and provide the service.

When the calling relationship between two business flow functions is relatively stable, the engine process upstream in the calling relationship can additionally load the downstream function into the same engine process, changing the communication between the two business flow functions from network communication into in-process communication. Repeating this step, all the functions on a whole call link can be loaded into the same engine process (or a small number of engine processes), forming one huge single service. Since such a single service is assembled intelligently by the scheduling system and no such service exists in the developers' code repository, it may be called a "virtual service" in this application, and the architecture composed of the above systems a "virtual service architecture".
In the development stage, the splitting granularity of code units is fine enough that the cost of multi-person collaboration is extremely low; meanwhile, no development framework needs to be learned, so the development cost is lower than with a micro-service framework. Thanks to the runtime assembly of code, multi-language collaboration also becomes very simple.

At runtime, the virtual service acts as a single service accommodating the whole call chain, with the same high performance and ultra-low response delay as the traditional single-service architecture.

In operation and maintenance, the intelligent scheduling system can automatically start and close engine processes as required, truly realizing NoOps (no operation and maintenance): developers need not concern themselves with deployment, their mental burden is reduced, and the operation and maintenance labor cost no longer grows with the number of services.

Therefore, the virtual service architecture can solve a series of problems of the currently popular micro-service architecture, such as high operation and maintenance cost, wasted hardware performance and request response delay, and is expected to replace the micro-service architecture as the new generation of internet background architecture.

In the above scheme, the business flow function is determined as a parent business flow function, and at least one child business flow function of the parent business flow function is loaded into the engine process of that business flow function; the child business flow function is then determined as a new parent business flow function, and the loading step is executed repeatedly until the single-service network service thus formed meets the set condition. Child and parent business flow functions that belong to different engine processes but have a calling relationship can thus be pulled, in turn, into the same engine process, so that the network communication between them is converted into in-process communication and the micro-service network service is correspondingly converted into a single-service network service that is then run. This effectively reduces the cost of network operation and maintenance, reduces the overall hardware performance wasted on network communication, and also reduces the corresponding request response delay.
In a specific embodiment, as shown in fig. 2, fig. 2 is a schematic diagram of a framework of an embodiment of a network management platform. The engine process is started and run through an execution engine. The execution engine is the carrier that loads and runs functions, and consists of four main modules: a network communication module, a virtual machine, a routing module and a management module.

The network communication module, the management module and the routing module can be built on a micro-service framework: the network communication module and the routing module call the upstream and downstream business flow functions through network communication, and the management module processes user requests sent to the execution engine, causing it to load a new business flow function or to unload one. The virtual machine provided by the open-source LLVM (Low Level Virtual Machine) project can meet the requirement of downloading business flow functions and loading and running their bytecode for the C++ language.

As shown in fig. 2, the network service scheduling method correspondingly implemented by the engine process may correspond to the constructor process in fig. 2, so that scheduling can further be carried out by the other functional units of the network management platform, for example the construction service, routing service, node management, load service and scheduling service, to perform the corresponding scheduling and processing and meet the user's network service requirements.
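Purely as an illustrative aid, the four-module composition described above might be sketched as follows; all type and member names are hypothetical and are not the patent's code:

    #include <memory>
    #include <string>

    struct NetworkModule    { /* network communication with up/downstream engines */ };
    struct RoutingModule    { /* resolves a function name to a local or remote address */ };
    struct ManagementModule { /* handles load/unload requests sent to the engine */ };
    struct VirtualMachine   { // e.g. an LLVM-based virtual machine
        void loadBytecode(const std::string& functionName) { /* download + load */ }
    };

    class ExecutionEngine {
    public:
        // Called by the management module when the scheduler asks this engine
        // process to take over one more business flow function.
        void loadFunction(const std::string& name) { vm.loadBytecode(name); }
    private:
        NetworkModule    network;
        RoutingModule    routing;
        ManagementModule management;
        VirtualMachine   vm;
    };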
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating an embodiment of S11 in fig. 1. In an embodiment, the scheduling method of the network service of the present application further includes some more specific steps in addition to the above-mentioned steps S11-S15. Specifically, the step S11 may further include the following steps:
S111: acquiring the resource occupancy rate of each server in the server cluster.
It can be understood that the network service scheduling method of the present application is applied to a processor, which can be understood as the control hub of a background scheduling system. The processor is communicatively connected to each server in a server cluster, and at least one engine process runs on each server.

It should be noted that when a user request arrives, the resource scheduling system needs to select a suitable location (server) from the existing machine pool (server cluster) on which to start an execution engine process to load and run the corresponding business flow function. How to select a suitable location is therefore one of the core problems the resource scheduling algorithm must solve.

"Selecting a location" is a seemingly simple but actually very complex problem. First, the conditions to be satisfied: the core-count upper limit of the single-machine Central Processing Unit (CPU) and the memory load upper limit cannot be exceeded, and on that basis various resources such as disk capacity, disk interface, CPU load and network-card bandwidth must also be considered; next, the concurrency of the location-selection operation itself must be considered; finally, a trade-off is required between "load balancing across machines" and "total cost of the machine pool".

In addition, "location selection" is a problem with a time dimension: the optimal solution selected now may no longer be optimal after a period of time, and may even become a poor one. It is therefore also necessary to consider how to readjust a slowly deteriorating solution into a better one.
Specifically, the resource occupancy rate of each server in a server cluster currently providing the network service is obtained.
The resource occupancy rate can be understood as the usage rate of the server, that is, how much of the computing resources for running engine processes is in use.
S112: starting at least one engine process on a server in the server cluster whose resource occupancy rate is lower than the first preset resource occupancy rate, and loading at least one business flow function.

It can be understood that, in order to ensure that a server can still run smoothly after starting the corresponding engine process, a single-machine load safety line can be set for each server in the cluster, and the safety line then determines on which server the current engine process should be started.

This load safety line needs to satisfy one stability requirement: when the single-machine load is below the safety line, the service can run stably. It also needs to satisfy one economic requirement: the load safety line generally determines the resource usage upper limit of the entire cluster.

Specifically, after the resource occupancy rate of each server in the cluster has been obtained, it can be determined that at least one engine process needs to be started on a server whose resource occupancy rate is lower than the first preset resource occupancy rate, and at least one business flow function is then loaded in that engine process.

The first preset resource occupancy rate is therefore the safety line for the single-machine load: when some load of a server exceeds the safety line, no new engine process is allocated to that server.

Optionally, the first preset resource occupancy rate is any reasonable number such as 0.8, 0.85 or 0.9, which is not limited in this application.
Further, in an embodiment, the above S112 may further include: migrating some of the engine processes running on any server in the cluster whose resource occupancy rate exceeds the second preset resource occupancy rate to other servers, ensuring that the resource occupancy rate of each server in the cluster stays below the first preset resource occupancy rate.

It can be understood that after a single-machine load safety line has been set for each server, a single-machine load migration line, i.e. the second preset resource occupancy rate, also needs to be set for each resource of each server, for example: CPU usage set at 0.95. When some load of a server exceeds the migration line, some engine processes are selected and migrated to other servers: new engine processes are started on the other servers and the same engine processes on this server are shut down, while the routing data is modified so that requests are routed to the new engine processes. The process migration stops once the load of the server is below the safety line, i.e. the first preset resource occupancy rate, ensuring that the resource occupancy rate of each server in the cluster is lower than the first preset resource occupancy rate.

The migration line must be higher than the safety line, that is, the second preset resource occupancy rate is greater than the first preset resource occupancy rate, and two stability requirements must also be satisfied: first, the buffer zone above the safety line and below the migration line leaves the service load some room to fluctuate, avoiding frequent and unnecessary migration; second, when the service load fluctuates upward, the buffer zone above the migration line reserves some scheduling time for the system.

Optionally, the amount by which the second preset resource occupancy rate exceeds the first preset resource occupancy rate may be any reasonable number such as 0.3, 0.5 or 0.6, which is not limited in the present application.

Optionally, the second preset resource occupancy rate is any reasonable number such as 0.93, 0.95 or 0.96, which is not limited in this application.
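A minimal sketch of this migration rule follows; the types and the concrete thresholds (safety line 0.8, migration line 0.95) are hypothetical, and all resources are collapsed into a single CPU-usage number for brevity:

    #include <vector>

    struct EngineProc { double cpuCost; };
    struct Server {
        double cpuUsage;                 // occupancy in [0, 1]
        std::vector<EngineProc> engines;
    };

    constexpr double kSafetyLine    = 0.80; // first preset resource occupancy rate
    constexpr double kMigrationLine = 0.95; // second preset resource occupancy rate

    // Once a server crosses the migration line, keep moving engine processes to
    // servers that stay under the safety line, until it is back under the line.
    void rebalance(Server& overloaded, std::vector<Server>& cluster) {
        if (overloaded.cpuUsage <= kMigrationLine) return;
        while (overloaded.cpuUsage > kSafetyLine && !overloaded.engines.empty()) {
            const EngineProc p = overloaded.engines.back();
            Server* target = nullptr;
            for (Server& s : cluster) {
                if (&s != &overloaded && s.cpuUsage + p.cpuCost < kSafetyLine) {
                    target = &s;
                    break;
                }
            }
            if (target == nullptr) break;  // nowhere to migrate for now
            overloaded.engines.pop_back();
            overloaded.cpuUsage -= p.cpuCost;
            target->engines.push_back(p);
            target->cpuUsage += p.cpuCost;
        }
    }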
Further, in an embodiment, the above S112 may further include: when the resource occupancy rate of every server in the cluster is lower than the third preset resource occupancy rate, migrating all engine processes running on the server with the minimum resource occupancy rate to other servers.

It can be understood that, in order to use each server in the cluster reasonably, a cluster capacity-reduction line, i.e. the third preset resource occupancy rate, may also be set for each resource, for example: CPU usage set at 0.4. When the total usage of the servers in the cluster is below the capacity-reduction line, the server with the lightest load is selected, all the engine processes on it are migrated to other servers, and the server is returned to the provider.

The capacity-reduction line determines the lower limit of the resource usage of the server cluster, and cluster capacity reduction can be triggered not only by such threshold lines but also in other, better ways, which is not limited in the present application.

Optionally, the capacity-reduction line, i.e. the third preset resource occupancy rate, is any reasonable number such as 0.35, 0.4 or 0.45, which is not limited in this application.
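Continuing the sketch above (same hypothetical Server type, contraction line 0.4 assumed), the scale-in trigger might be expressed as:

    constexpr double kContractionLine = 0.40; // third preset resource occupancy rate

    // True when every server is under the contraction line; the scheduler then
    // drains the least-loaded server (migrating all of its engine processes away)
    // and returns that server to the provider.
    bool shouldScaleIn(const std::vector<Server>& cluster) {
        for (const Server& s : cluster) {
            if (s.cpuUsage >= kContractionLine) return false;
        }
        return !cluster.empty();
    }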
Further, in an embodiment, the above S112 may further include: sorting each server in the server cluster according to its resource occupancy rate; and starting at least one engine process on any server ranked in the last 10% (that is, among the least occupied servers), and loading each newly acquired business flow function.

Understandably, in order to avoid possible server concurrency problems, when a user request arrives, the servers in the cluster can be sorted by resource occupancy rate, and at least one engine process is started on any server ranked in the last 10% to load each newly acquired business flow function; alternatively, the servers whose load is below the safety line are sorted by remaining resources, and one server is randomly selected from the top 10% of that ranking to start the engine process and load each newly acquired business flow function.
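An illustrative sketch of this placement rule follows; the types are hypothetical and the safety line of 0.8 is an assumption:

    #include <algorithm>
    #include <cstddef>
    #include <random>
    #include <vector>

    struct ServerInfo { int id; double cpuUsage; }; // occupancy in [0, 1]

    // Keep only servers under the safety line, rank them by remaining resources,
    // and pick one at random from the top 10% of that ranking.
    int pickServer(std::vector<ServerInfo> servers, double safetyLine = 0.80) {
        servers.erase(std::remove_if(servers.begin(), servers.end(),
                          [&](const ServerInfo& s) { return s.cpuUsage >= safetyLine; }),
                      servers.end());
        if (servers.empty()) return -1;   // no safe server: the cluster must scale out
        std::sort(servers.begin(), servers.end(),
                  [](const ServerInfo& a, const ServerInfo& b) {
                      return a.cpuUsage < b.cpuUsage; // most remaining resources first
                  });
        const std::size_t pool = std::max<std::size_t>(1, servers.size() / 10);
        static std::mt19937 rng{std::random_device{}()};
        std::uniform_int_distribution<std::size_t> pick(0, pool - 1);
        return servers[pick(rng)].id;     // start the engine process on this server
    }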
Referring to fig. 4, fig. 4 is a flowchart illustrating a second embodiment of the network service scheduling method of the present application. The scheduling method in this embodiment is a detailed embodiment of the scheduling method of fig. 1 and includes the following steps:
S21: acquiring initial code that meets the standard interface specification.

Understandably, in the development stage of business flow functions, the network service provided by the scheduling method of the present application has extremely low multi-person collaboration cost because the granularity of code-unit splitting is fine enough; meanwhile, no development framework needs to be learned, so the development cost is lower than with a micro-service framework. And because code is assembled at runtime, multi-language collaboration becomes very simple.

However, because development scenarios with multi-language collaboration must be considered, the initial code used to construct the corresponding business flow function needs to be normalized, for example by issuing a standardized interface specification to every developer, ensuring that the initial code subsequently obtained from a developer satisfies the corresponding standard interface specification.

It can be understood that the standard interface specification specifically means that the function signature, parameter types, return-value type and compatibility upgrades of the business flow function corresponding to the initial code must all comply with the JCE protocol specification, a cross-language communication protocol (other cross-language communication protocols can also be selected), which is the cornerstone of concise communication between functions.
For example:
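The listing appears only as an image in the original publication; from the surrounding description, a plausible reconstruction (an assumption, not the verbatim original) is:

    // Reconstructed from the description: the simplest specification-compliant
    // business flow function, performing the "multiply by 2" function.
    int multi2(int x) {
        return x * 2;
    }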
This is the simplest specification-compliant C++ function. It performs the "multiply by 2" function and can be submitted directly to the build system for construction, yielding a function named multi2 that can be called by other functions or externally.

At this point another function can be written that calls it:
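Again the listing is an image in the original; a plausible reconstruction based on the description is:

    // Reconstructed from the description: a function that calls multi2 twice to
    // multiply by 4, so that multi4 -> multi2 forms a call chain.
    int multi2(int x); // resolved by the build system / routing layer

    int multi4(int x) {
        return multi2(multi2(x));
    }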
This function completes the "multiply by 4" function; after it is submitted to the build system for construction, a function named multi4 is derived. The two functions multi4 and multi2 form a call chain and cooperate to complete the user request.
S22: constructing the initial code into an executable business flow function.

Further, when the obtained initial code meets the standard interface specification, the initial code can be constructed into an executable business flow function.
S23: and starting at least one engine process to load at least one business flow function, and determining the business flow function as a parent business flow function.
S24: and loading at least one sub business process function of the parent business process function into an engine process of the business process function.
S25: and judging whether the network service of the single service formed by the engine process meets the set conditions.
S26: and determining the child business process function as a parent business process function.
S27: and running the network service of the single service.
S23, S24, S25, S26 and S27 are the same as S11, S12, S13, S14 and S15 in fig. 1, and please refer to S11, S12, S13, S14 and S15 and their related text descriptions, which are not repeated herein.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating an embodiment of S22 in fig. 4. Specifically, the step S22 may further include the following steps:
S221: performing source-code analysis on the initial code to obtain its function name and function signature, and correspondingly generating the initial function with the fixed signature.

It can be understood that, in order to construct the initial code provided by the developer into the corresponding executable, the initial code needs to be processed further: for example, the source code of the initial code is parsed to obtain its function name and function signature, and an initial function with a fixed signature is correspondingly generated based on the initial code.

For ease of understanding, in one embodiment, after the multi2 function code is submitted, the source code may first be parsed according to the "C++ syntax standard" to generate an AST, and the exported function name and function signature are found, generating a fixed-signature function named export_multi2:
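The wrapper listing is likewise an image in the original. A hypothetical reconstruction, assuming a uniform bytes-in/bytes-out signature for all generated wrappers, might be:

    // Hypothetical reconstruction: the build system appends a function with one
    // fixed signature so every business flow function can be invoked through a
    // uniform entry point.
    #include <cstring>
    #include <string>

    int multi2(int x); // the developer's original function

    std::string export_multi2(const std::string& request) {
        int arg = 0;
        std::memcpy(&arg, request.data(), sizeof(arg)); // decode the argument
        const int result = multi2(arg);                 // call the real function
        return std::string(reinterpret_cast<const char*>(&result), sizeof(result));
    }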
S222: appending the initial function to the end of the source code of the initial code to obtain the preprocessed source code.

Further, after an initial function with a fixed signature has been obtained, the initial function is appended to the end of the source code of the initial code to obtain the preprocessed source code, so that it can be offered to external callers.

S223: compiling the preprocessed source code into a business flow function in executable bytecode form, and storing the business flow function in a database.

Still further, in order for the preprocessed source code to be loaded and run by the server's virtual machine, it needs to be compiled into a business flow function in executable bytecode form and stored in a database on the server, where it waits to be called and loaded.
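The whole build pipeline of S221-S223 might be sketched as follows; every helper below is hypothetical (declared but not implemented here) rather than a real library API:

    #include <string>

    struct FunctionInfo { std::string name; std::string signature; };

    FunctionInfo parseSource(const std::string& source);            // S221: AST analysis
    std::string  makeFixedSignatureWrapper(const FunctionInfo& fn); // S221: e.g. export_multi2
    std::string  compileToBytecode(const std::string& source);      // S223: e.g. via LLVM
    void storeInDatabase(const std::string& name, const std::string& bytecode);

    void buildBusinessFlowFunction(const std::string& initialCode) {
        const FunctionInfo fn = parseSource(initialCode);
        // S222: append the wrapper to the end of the source to get the
        // preprocessed source code.
        const std::string preprocessed =
            initialCode + "\n" + makeFixedSignatureWrapper(fn);
        storeInDatabase(fn.name, compileToBytecode(preprocessed));  // S223
    }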
Further, in an embodiment, the above S23 may specifically include: downloading at least one business flow function from the database, and loading it into at least one correspondingly started engine process.

Understandably, after the business flow functions have been stored in the database, when the server receives the corresponding user request, it can download the corresponding at least one business flow function from the database and load it into at least one correspondingly started engine process.
Further, in an embodiment, the above S24 may specifically include: according to the network address of each business flow function, loading at least one child business flow function on the called side of a stable calling relationship into the engine process corresponding to the parent business flow function on the calling side of that relationship.

It can be understood that, after several business flow functions with calling relationships have been loaded in turn into several different engine processes, the different engine processes communicate with each other over the network and therefore have different network addresses. The server can thus, according to the network address of each business flow function, load at least one child business flow function on the called side of a stable calling relationship into the engine process corresponding to the parent business flow function on the calling side, converting the child business flow function's network communication into in-process communication and effectively reducing the delay of subsequent request responses.
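A sketch of this routing idea (hypothetical structures, not the patent's code) might be: once the child function is loaded into the caller's engine process, its route entry switches from a network address to a local entry point, turning an RPC into an in-process call.

    #include <functional>
    #include <string>
    #include <unordered_map>

    std::string sendRpc(const std::string& address, const std::string& request); // network module

    struct Route {
        std::string networkAddress;                            // used while remote
        std::function<std::string(const std::string&)> local;  // set once loaded locally
    };

    std::string callFunction(const std::unordered_map<std::string, Route>& table,
                             const std::string& fn, const std::string& request) {
        const Route& r = table.at(fn);
        if (r.local) return r.local(request);      // in-process call, no network hop
        return sendRpc(r.networkAddress, request); // fall back to network communication
    }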
Referring to fig. 6, fig. 6 is a flowchart illustrating a third embodiment of the network service scheduling method of the present application. The scheduling method in this embodiment is a detailed embodiment of the scheduling method of fig. 1 and includes the following steps:
S31: starting at least one engine process to load at least two business flow functions.

Specifically, after at least one engine process has been started on a server, at least two business flow functions that currently need to provide network services are loaded, in order, into one or more different engine processes that have been started.

S32: acquiring the calling-relation data with which each business flow function calls other business flow functions.

Further, after at least two business flow functions have been loaded into engine processes, in order to reduce the number of engine processes that are started, the calling-relation data with which each business flow function calls other business flow functions can be obtained.

It can be understood that the calling-relation data is the information about the upstream and downstream positions of each business flow function relative to the other business flow functions in the call chain, that is, the data describing the calling or called relationships.
S33: correspondingly forming the at least two business flow functions into a calling relationship tree based on the calling relationship data.
Still further, after the calling relationship data is obtained, in order to present the calling relationship of each business flow function more clearly, the at least two business flow functions can be formed into a calling relationship tree according to the calling relationship data; that is, the at least two business flow functions are assembled into at least one tree structure according to the upstream and downstream relationships of the call chain.
The calling relationship tree comprises at least one call chain; the most upstream business flow function of each call chain is the root node of the calling relationship tree, and the downstream business flow functions of each call chain are child nodes of the calling relationship tree.
That is, each call chain consists of at least two business flow functions connected in sequence according to their calling relationships, with the most upstream function of each chain serving as the root node of the calling relationship tree and each downstream function serving as a child node of its upstream function.
It is understood that the root node of the calling relationship tree corresponds to a parent business flow function, and the child nodes under it correspond to child business flow functions.
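By way of illustration, a sketch of assembling such trees from caller-to-callee edges (the calling relationship data); Node and the other identifiers are illustrative choices rather than terms from the source:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class CallRelationTree {

        static class Node {
            final String functionName;
            final List<Node> children = new ArrayList<>();
            Node(String functionName) { this.functionName = functionName; }
        }

        /** Builds trees from caller->callee edges and returns the roots. */
        static List<Node> build(List<String[]> callerCalleeEdges) {
            Map<String, Node> nodes = new HashMap<>();
            Map<String, Boolean> isCallee = new HashMap<>();
            for (String[] edge : callerCalleeEdges) {
                Node caller = nodes.computeIfAbsent(edge[0], Node::new);
                Node callee = nodes.computeIfAbsent(edge[1], Node::new);
                caller.children.add(callee);    // downstream child of its caller
                isCallee.put(edge[1], true);
                isCallee.putIfAbsent(edge[0], false);
            }
            List<Node> roots = new ArrayList<>();
            for (Node n : nodes.values()) {
                // A function never called by another is most upstream: a root.
                if (!isCallee.getOrDefault(n.functionName, false)) {
                    roots.add(n);
                }
            }
            return roots;
        }
    }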
S34: traversing each business flow function in the calling relationship tree, so as to load the child business flow function corresponding to at least one child node into the engine process of the parent business flow function corresponding to that child node's root node.
Understandably, after the calling relationship tree is formed from the calling relationship data, each business flow function in the tree can be traversed in order to reduce the total number of engine processes actually started in the server, so that the child business flow function corresponding to each child node is loaded in sequence into the engine process of the parent business flow function corresponding to that child node's root node.
Here, traversal refers to visiting each node in a tree (or graph) exactly once along some search route. The operation performed at each visited node depends on the specific application; it may be reading the node's value, updating it, and so on. Different traversal methods visit the nodes in different orders.
It should be noted that each calling relationship tree is traversed breadth-first from its root node, accumulating the required resources (CPU, memory, bandwidth, and the like) of each business flow function until the load upper limit of a single engine process is reached; all traversed business flow functions are loaded into the engine process where the parent business flow function corresponding to the root node resides. Any node not yet traversed is then taken as a new root, and the above steps are repeated until the whole calling relationship tree has been traversed.
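A sketch of this packing traversal, under the simplifying assumptions that each function's CPU, memory, and bandwidth needs collapse into a single cost figure, that every individual function fits within one engine process, and that processCapacity stands in for the per-process load upper limit:

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    public class TreePacker {

        static class Node {
            final String functionName;
            final long cost;   // aggregate stand-in for CPU, memory, bandwidth
            final List<Node> children = new ArrayList<>();
            Node(String functionName, long cost) { this.functionName = functionName; this.cost = cost; }
        }

        /**
         * Breadth-first traversal from a root: functions are packed into the
         * root's engine process until the per-process limit is reached; each
         * node left untraversed becomes the root of a new pack. Returns one
         * list of function names per engine process.
         */
        static List<List<String>> pack(Node root, long processCapacity) {
            List<List<String>> processes = new ArrayList<>();
            Deque<Node> pendingRoots = new ArrayDeque<>();
            pendingRoots.add(root);
            while (!pendingRoots.isEmpty()) {
                Node r = pendingRoots.poll();
                List<String> process = new ArrayList<>();
                long used = 0;
                Deque<Node> queue = new ArrayDeque<>();
                queue.add(r);
                while (!queue.isEmpty()) {
                    Node n = queue.poll();
                    if (n.cost > processCapacity) {
                        throw new IllegalArgumentException(
                                n.functionName + " exceeds a single process's capacity");
                    }
                    if (used + n.cost > processCapacity) {
                        pendingRoots.add(n);     // not traversed: becomes a new root
                        continue;
                    }
                    used += n.cost;
                    process.add(n.functionName); // colocated with this pack's root
                    queue.addAll(n.children);
                }
                processes.add(process);
            }
            return processes;
        }
    }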
S35: judging whether the network service of the single service formed by the engine process meets the set condition.
S36: determining the child business flow function as the parent business flow function.
S37: running the network service of the single service.
S35, S36, and S37 are the same as S13, S14, and S15 in fig. 1; please refer to those steps and their related descriptions, which are not repeated here.
Further, in an embodiment, the above S33 may further include: within the engine process, positioning the child nodes of the calling relationship tree that are called more frequently on one side of the root node of the calling relationship tree, and the child nodes that are called less frequently on the opposite side of the root node.
It can be understood that, in order to distinguish business flow functions with different call counts and thereby determine the priority with which they are loaded into the engine process, the more frequently called child nodes can be placed on one side of the root node and the less frequently called child nodes on the opposite side while the calling relationship tree is being formed. For example, the more frequently called child node is placed on the left and the less frequently called one on the right, and so on down to the tail end of the call chain, forming the corresponding calling relationship tree.
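Continuing the same sketch, the ordering rule might be expressed by sorting each node's child list by observed call count, reusing the Node class from the tree-building sketch above; CallCounter is a hypothetical source of per-function call counts:

    import java.util.Comparator;
    import java.util.List;

    public final class ChildOrdering {

        /** Hypothetical source of how many times each function has been called. */
        interface CallCounter {
            long callsTo(String functionName);
        }

        static void sortByCallCount(List<CallRelationTree.Node> children, CallCounter counter) {
            // Descending call count: the hottest child sits first ("left"),
            // the coldest last ("right"), so breadth-first packing reaches
            // frequently called functions earlier.
            children.sort(Comparator.comparingLong(
                    (CallRelationTree.Node n) -> counter.callsTo(n.functionName)).reversed());
        }
    }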
Referring to fig. 7, fig. 7 is a schematic framework diagram of an embodiment of a network service scheduling apparatus according to the present application. The network service scheduling apparatus 41 includes: a processing module 411, configured to start at least one engine process and load at least one business flow function; a scheduling module 412, configured to determine the business flow function as a parent business flow function, load at least one child business flow function of the parent business flow function into the engine process of the business flow function, further determine the child business flow function as the parent business flow function, and repeatedly perform the step of loading at least one child business flow function of the parent business flow function into the engine process of the business flow function until the network service of the formed single service does not meet the set condition; and an operation module 413, configured to run the network service of the single service.
In this scheme, the business flow function is determined as a parent business flow function, at least one child business flow function of the parent business flow function is loaded into the engine process of the business flow function, the child business flow function is in turn determined as the parent business flow function, and the loading step is repeated until the network service of the formed single service does not meet the set condition. In this way, child and parent business flow functions that belong to different engine processes but have a calling relationship can be brought into the same engine process in sequence, converting the network communication between them into in-process communication. The network service of the microservices can thus be correspondingly converted into, and run as, the network service of a single service, which effectively reduces the cost of operating and maintaining the network service, reduces the waste of overall hardware performance on network communication, and also reduces the corresponding request response latency.
In some embodiments, the scheduling module 412 may be further specifically configured to: acquire an initial code that meets a standard interface specification; and construct the initial code into an executable business flow function.
In some embodiments, the scheduling module 412 may be further specifically configured to: perform source code analysis on the initial code to obtain the function name and function signature of the initial code, and correspondingly generate an initial function with a fixed signature; append the initial function to the end of the source code of the initial code to obtain the preprocessed source code; and compile the preprocessed source code into a business flow function in the form of executable bytecode and store it in a database.
In some embodiments, the scheduling module 412 may be further specifically configured to: download at least one business flow function from the database, so as to load it into at least one correspondingly started engine process.
In some embodiments, the scheduling module 412 may be further specifically configured to: load, according to the network address of each business flow function, at least one child business flow function on the called side of a stable calling relationship into the engine process corresponding to the parent business flow function on the calling side of that relationship.
In some embodiments, the scheduling module 412 may be further specifically configured to: acquire the calling relationship data of each business flow function calling the other business flow functions; correspondingly form the at least two business flow functions into a calling relationship tree based on the calling relationship data, wherein the calling relationship tree comprises at least one call chain, the most upstream business flow function of each call chain is the root node of the calling relationship tree, and the downstream business flow functions of each call chain are child nodes of the calling relationship tree; and traverse each business flow function in the calling relationship tree, so as to load the child business flow function corresponding to at least one child node into the engine process of the parent business flow function corresponding to that child node's root node.
In some embodiments, the scheduling module 412 may be further specifically configured to: within the engine process, position the child nodes of the calling relationship tree that are called more frequently on one side of the root node of the calling relationship tree, and the child nodes that are called less frequently on the opposite side of the root node.
In some embodiments, the scheduling module 412 may be further specifically configured to: determine the child business flow function as the parent business flow function, and repeatedly perform the step of loading at least one child business flow function of the parent business flow function into the engine process of the business flow function until the network service of the formed single service reaches the load upper limit of a single process.
In some embodiments, the scheduling module 412 may be further specifically configured to: acquire the resource occupancy rate of each server in the server cluster, wherein the resource occupancy rate is the utilization rate of the server; and start at least one engine process on a server in the server cluster whose resource occupancy rate is lower than a first preset resource occupancy rate, and load at least one business flow function.
In some embodiments, the scheduling module 412 may be further specifically configured to: migrate some of the engine processes running on a server in the server cluster whose resource occupancy rate exceeds a second preset resource occupancy rate to other servers, so as to ensure that the resource occupancy rate of each server in the server cluster stays below the first preset resource occupancy rate, the second preset resource occupancy rate being greater than the first preset resource occupancy rate.
In some embodiments, the scheduling module 412 may be further specifically configured to: when the resource occupancy rate of each server in the server cluster is lower than a third preset resource occupancy rate, migrate all engine processes running on the server with the lowest resource occupancy rate to other servers.
In some embodiments, the scheduling module 412 may be further specifically configured to: sort the servers in the server cluster according to the resource occupancy rate of each server; and start at least one engine process on any server ranked in the last 10% of the sorted order, and load each newly acquired business flow function.
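A sketch of these occupancy-based placement rules; Server and its fields are hypothetical names, and reading "the last 10%" as the least-occupied tail of the sort is an assumption made for consistency with the first-threshold rule above:

    import java.util.Comparator;
    import java.util.List;

    public class PlacementPolicy {

        static class Server {
            final String id;
            final double occupancy;   // utilization in [0, 1]
            Server(String id, double occupancy) { this.id = id; this.occupancy = occupancy; }
        }

        /** Servers eligible to host a new engine process (below the first threshold). */
        static List<Server> eligible(List<Server> cluster, double firstThreshold) {
            return cluster.stream()
                    .filter(s -> s.occupancy < firstThreshold)
                    .toList();
        }

        /** The least-loaded 10% of the cluster, any of which may host the new process. */
        static List<Server> leastLoadedTenPercent(List<Server> cluster) {
            List<Server> sorted = cluster.stream()
                    .sorted(Comparator.comparingDouble((Server s) -> s.occupancy))
                    .toList();
            int count = Math.max(1, cluster.size() / 10);
            return sorted.subList(0, count);
        }
    }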
Referring to fig. 8, fig. 8 is a schematic diagram of a framework of an embodiment of an intelligent terminal according to the present application. The intelligent terminal 51 comprises a memory 511 and a processor 512 coupled to each other, and the processor 512 is configured to execute the program instructions stored in the memory 511 to implement the steps of any of the above-described embodiments of the network service scheduling method.
In a specific implementation scenario, the intelligent terminal 51 may include, but is not limited to: any suitable intelligent terminal such as a computer, a tablet computer, a smartphone, or a smartwatch, which is not limited in the present application.
In particular, the processor 512 is configured to control itself and the memory 511 to implement the steps of any of the above embodiments of the network service scheduling method. The processor 512 may also be referred to as a CPU (Central Processing Unit). The processor 512 may be an integrated circuit chip having signal processing capabilities. The processor 512 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 512 may be implemented jointly by a plurality of integrated circuit chips.
Referring to fig. 9, fig. 9 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application. The computer readable storage medium 61 stores program instructions 611 executable by the processor, the program instructions 611 being for implementing the steps of any of the above described embodiments of the method for scheduling of network services.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (15)

1. A method for scheduling a network service, the method comprising:
starting at least one engine process and loading at least one business process function;
determining the business process function as a parent business process function, and loading at least one child business process function of the parent business process function into the engine process of the business process function;
determining the child business process function as the parent business process function, and repeatedly executing the step of loading at least one child business process function of the parent business process function into the engine process of the business process function until the network service of the formed single service does not meet the set condition;
and running the network service of the single service.
2. The method for scheduling web services according to claim 1, wherein before the starting at least one engine process and loading at least one business process function, the method further comprises:
acquiring an initial code meeting a standard interface specification;
and constructing the initial code into the executable business process function.
3. The method of claim 2, wherein the constructing the initial code as the executable business process function comprises:
performing source code analysis on the initial code to obtain a function name and a function signature of the initial code, and correspondingly generating an initial function with a fixed signature;
appending the initial function to the end of the source code of the initial code to obtain a preprocessed source code;
and compiling the preprocessed source code into the business process function in the form of executable bytecode, and storing the business process function in a database.
4. The method for scheduling web services according to claim 3, wherein said starting at least one engine process and loading at least one business process function comprises:
and downloading at least one business process function from the database so as to load the business process function into at least one correspondingly started engine process.
5. The method according to claim 3, wherein the loading at least one child business process function of the parent business process function into the engine process of the business process function comprises:
loading, according to the network address of each business process function, at least one child business process function on the called side of a stable calling relationship into the engine process corresponding to the parent business process function on the calling side of that relationship.
6. The method according to claim 1, wherein the number of the business process functions loaded into the engine process is at least two, and after the at least one engine process is started and at least one business process function is loaded, before the business process function is determined as a parent business process function and at least one child business process function of the parent business process function is loaded into the engine process of the business process function, the method comprises:
acquiring calling relationship data of each business process function calling other business process functions;
correspondingly forming at least two business process functions into a calling relationship tree based on the calling relationship data; wherein the calling relationship tree comprises at least one call chain, the most upstream business process function of each call chain is a root node of the calling relationship tree, and the downstream business process functions of each call chain are child nodes of the calling relationship tree;
wherein the loading at least one child business process function of the parent business process function into the engine process of the business process function comprises:
traversing each business process function in the calling relationship tree, so as to load the child business process function corresponding to at least one child node into the engine process of the parent business process function corresponding to the root node corresponding to the child node.
7. The method of scheduling of network services according to claim 6,
in the engine process, a child node of the calling relationship tree that is called more frequently is located on one side of the root node of the calling relationship tree, and a child node that is called less frequently is located on the opposite side of the root node of the calling relationship tree.
8. The method according to claim 1, wherein the network service of the formed single service not meeting the set condition comprises:
the network service of the formed single service reaching the load upper limit of a single process.
9. The method for scheduling a web service according to claim 1, wherein the method for scheduling a web service is applied to a processor, the processor is communicatively connected to each server in a server cluster, at least one engine process runs on the server, and the starting at least one engine process and loading at least one business process function includes:
acquiring the resource occupancy rate of each server in the server cluster; wherein the resource occupancy rate is the utilization rate of the server;
and starting at least one engine process on a server in the server cluster whose resource occupancy rate is lower than a first preset resource occupancy rate, and loading at least one business process function.
10. The method for scheduling network services according to claim 9, further comprising:
migrating part of the engine processes running on the servers with the resource occupancy rate exceeding a second preset resource occupancy rate in the server cluster to other servers, and ensuring that the resource occupancy rate of each server in the server cluster is lower than the first preset resource occupancy rate; and the second preset resource occupancy rate is greater than the first preset resource occupancy rate.
11. The method for scheduling network services according to claim 9, further comprising:
migrating, when the resource occupancy rate of each server in the server cluster is lower than a third preset resource occupancy rate, all engine processes running on the server with the lowest resource occupancy rate to other servers.
12. The method for scheduling a network service according to claim 9, wherein the starting at least one engine process on the server in the server cluster whose resource occupancy rate is lower than the first preset resource occupancy rate, and correspondingly loading each business process function, further comprises:
sorting each of the servers in the server cluster according to the resource occupancy of each of the servers;
and starting at least one engine process on any server ranked in the last 10% of the sorted order, and loading each newly acquired business process function.
13. A scheduling apparatus of a network service, the scheduling apparatus of the network service comprising:
the processing module is used for starting at least one engine process and loading at least one business process function;
the scheduling module is used for determining the business process function as a parent business process function, loading at least one child business process function of the parent business process function into the engine process of the business process function, further determining the child business process function as the parent business process function, and repeatedly executing the step of loading at least one child business process function of the parent business process function into the engine process of the business process function until the network service of the formed single service does not meet the set condition;
and the operation module is used for operating the network service of the single service.
14. An intelligent terminal, characterized in that the intelligent terminal comprises a memory and a processor coupled with each other, the processor is used for executing program instructions stored in the memory to realize the scheduling method of network service according to any one of claims 1-12.
15. A computer-readable storage medium having stored thereon program instructions, which when executed by a processor, implement the scheduling method of a network service according to any one of claims 1-12.
CN202110921267.4A 2021-08-11 2021-08-11 Scheduling method and device for network service, intelligent terminal and storage medium Active CN113778642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110921267.4A CN113778642B (en) 2021-08-11 2021-08-11 Scheduling method and device for network service, intelligent terminal and storage medium


Publications (2)

Publication Number Publication Date
CN113778642A true CN113778642A (en) 2021-12-10
CN113778642B CN113778642B (en) 2024-04-19

Family

ID=78837525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110921267.4A Active CN113778642B (en) 2021-08-11 2021-08-11 Scheduling method and device for network service, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113778642B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110007957A (en) * 2018-12-17 2019-07-12 阿里巴巴集团控股有限公司 Call link construction method, device and equipment
CN111078325A (en) * 2019-12-17 2020-04-28 北京小米移动软件有限公司 Application program running method and device, electronic equipment and storage medium
US20200195528A1 (en) * 2018-12-17 2020-06-18 Cisco Technology, Inc. Time sensitive networking in a microservice environment
CN112532523A (en) * 2020-11-23 2021-03-19 福建顶点软件股份有限公司 In-process scheduling method based on sub-service routing and storage device


Also Published As

Publication number Publication date
CN113778642B (en) 2024-04-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant