CN112035228B - Resource scheduling method and device - Google Patents

Resource scheduling method and device

Info

Publication number
CN112035228B
Authority
CN
China
Prior art keywords
functions
function
interface
runtime
service
Prior art date
Legal status
Active
Application number
CN202010889940.6A
Other languages
Chinese (zh)
Other versions
CN112035228A (en)
Inventor
亢占雷
姚鹏程
周晓英
黄佐伟
刘尔凯
丁永建
李璠
Current Assignee
Everbright Technology Co ltd
Original Assignee
Everbright Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Everbright Technology Co ltd filed Critical Everbright Technology Co ltd
Priority to CN202010889940.6A
Publication of CN112035228A
Application granted
Publication of CN112035228B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The invention provides a resource scheduling method and device. The method includes: packaging each interface of a microservice as a function to obtain a plurality of packaged functions, where each function is an independently runnable service that completes one business function; deploying the interfaces of the microservice through the functions; and scheduling resources according to the functions. This addresses the problems in the related art that the microservice development mode leaves server resources under-utilized and development results unshareable. By packaging microservices as functions, managing the functions corresponding to the microservices uniformly, and scheduling resources uniformly through the functions, server resource utilization is improved and development results can be shared.

Description

Resource scheduling method and device
Technical Field
The present invention relates to the field of data processing, and in particular, to a method and apparatus for scheduling resources.
Background
Microservice development modes currently on the market fall roughly into the Spring Cloud system and the service-mesh system. The capabilities they provide mainly include:
A registry, which provides service registration and discovery, health checks, and failover.
Circuit breaking and degradation: a microservice architecture usually involves calls across several service layers, and a failure in a basic service may cause cascading failures that make the whole system unavailable.
A configuration center: as microservices multiply, each server has its own configuration files, which become too numerous to manage easily, so configuration is extracted and managed as an independent service.
A service gateway: a lightweight gateway that simplifies the calling logic of the front end and implements authentication, authorization, routing, rate limiting, and similar capabilities for services.
Link tracing: as services increase, the call relationships between services, the call chains of particular requests, the time consumed between calls, and so on all need to be monitored in order to better discover problems in the system.
Other capabilities such as load balancing and gray (canary) release.
Under the Spring Cloud system, these capabilities must be embedded intrusively into each system in a hard-coded manner, and systems not developed in Java cannot use them.
Because every system would otherwise have to develop the functions above, the service-mesh solution emerged. Service-mesh runs independently of the original application system: a set of agent programs is deployed on the host of the original system in sidecar mode and takes over the service's access traffic, and the agents interconnect to form a communication network. All agents are managed by a central control plane, which distributes rules for rate limiting, routing, authentication, logging, and so on; the agents forward the original system's traffic by interpreting these rules. This makes the solution non-intrusive to the original system while supporting multiple programming languages. With service-mesh, developers of a service function no longer need to consider service governance and can focus solely on business-logic development, which greatly reduces development complexity.
In the microservice development mode, server resource utilization is low. At present, each application applies for its own virtual or physical machine, yet CPU utilization is only 6%-12%, memory utilization is generally below 50%, and utilization of the remote disaster-recovery machine room is also low. As new systems keep arriving, the machine room can hardly accommodate more machines, forcing enterprises to build machine rooms elsewhere and continuously raising costs. Development results also cannot be shared: every system is developed independently, and many functions are developed repeatedly, such as user management, permission management, encryption rules, and mail services. Each system develops its own copy, which from a global perspective is redundant construction; when an upgrade is required, the broad scope of coverage brings risk to the system upgrade.
No solution has yet been proposed for the problems in the related art that the microservice development mode leaves server resources under-utilized and development results unshareable.
Disclosure of Invention
The embodiments of the invention provide a resource scheduling method and a resource scheduling device that at least solve the problems in the related art that the microservice development mode leaves server resources under-utilized and development results unshareable.
According to one embodiment of the present invention, a resource scheduling method applied to a platform is provided, including:
packaging each interface of a microservice as a function to obtain a plurality of packaged functions, where each function is an independently runnable service that completes one business function;
deploying the interfaces of the microservice through the plurality of functions;
and scheduling resources according to the plurality of functions.
Optionally, deploying the interface of the micro-service through the plurality of functions includes:
receiving the function codes of the plurality of functions uploaded after editing is completed, and acquiring the runtime framework selected when each function was uploaded;
generating the plurality of functions according to the function codes and the corresponding runtime frameworks, and storing the plurality of functions;
and configuring the runtime attributes of the plurality of functions respectively to complete their deployment.
Optionally, configuring the runtime properties of the plurality of functions, respectively, and completing the deployment of the plurality of functions includes:
abstracting the runtime attribute of each of the plurality of functions into a function object;
and monitoring the function object.
Optionally, after configuring the runtime properties of the plurality of functions, respectively, and completing the deployment of the plurality of functions, the method further includes:
receiving a notification message triggered when a resource update is determined to exist by monitoring the function object;
and updating the functions according to the notification message.
Optionally, after configuring the runtime properties of the plurality of functions, respectively, and completing the deployment of the plurality of functions, the method further includes:
setting an initialization interface for the plurality of functions, executing the function codes of the plurality of functions through the initialization interface, and generating function instances of the plurality of functions;
setting a calling interface for the plurality of functions, and calling the function instances of the plurality of functions through the calling interface;
and setting a destruction interface for the plurality of functions, and, if an objective function among the plurality of functions is not called within a preset time period, destroying through the destruction interface the function instance that was created when the objective function was initialized.
Optionally, after deploying the interface of the micro service through the plurality of functions, the method further comprises:
and controlling, through a function orchestration capability, some of the plurality of functions to cooperatively complete one or more pieces of business logic.
Optionally, the method further comprises:
receiving alarm information initiated when the runtime resources of an objective function are detected to meet a preset rule, where the alarm information carries an alarm type;
when the alarm type is a scale-out alarm, judging whether the number of running function instances of the objective function is greater than or equal to a preset threshold;
if not, judging whether the occurrence frequency of the current alarm information is greater than a preset frequency threshold;
if not, performing scale-out processing on the objective function;
when the alarm type is a scale-in alarm, judging whether the objective function has been accessed within a preset time;
and if not, performing scale-in processing on the objective function.
Optionally, after deploying the interface of the micro service through the plurality of functions, the method further comprises:
receiving a trigger instruction for adjusting the number of function instances of the plurality of functions in advance;
analyzing the usage of the plurality of functions according to the trigger instruction;
and adjusting the number of function instances of the plurality of functions a preset time before the traffic peak occurs.
Optionally, after adjusting the number of function instances of the plurality of functions a preset time before the traffic peak occurs, the method further comprises:
receiving a call request for calling the functions;
the call request is routed into a function instance of the plurality of functions based on load balancing.
Optionally, the method further comprises:
generating a new function instance when detecting that an objective function instance is down or unavailable;
and rerouting traffic for the objective function instance to the new function instance.
According to another embodiment of the present invention, there is also provided a resource scheduling apparatus, applied to a platform, including:
the packaging module is used for packaging each interface of the micro service into a function to obtain a plurality of packaged functions, wherein each function is a service capable of independently running and is used for completing one business function;
the deployment module is used for deploying the interfaces of the micro service through the functions;
and the resource scheduling module is used for scheduling the resources according to the functions.
Optionally, the deployment module includes:
the acquisition sub-module is used for receiving the function codes of the plurality of functions uploaded after editing is completed, and acquiring the runtime framework selected when each function was uploaded;
A generating sub-module, configured to generate the plurality of functions according to the function codes and the corresponding runtime frameworks, and store the plurality of functions;
and the configuration submodule is used for respectively configuring the runtime attributes of the functions and completing the deployment of the functions.
Optionally, the configuration submodule includes:
an abstract unit, configured to abstract a runtime attribute of each of the plurality of functions into a function object;
and the monitoring unit is used for monitoring the function object.
Optionally, the apparatus further comprises:
the receiving sub-module is used for receiving a notification message triggered when a resource update is determined to exist by monitoring the function object;
and the updating sub-module is used for updating the functions according to the notification message.
Optionally, the apparatus further comprises:
an execution sub-module, configured to set an initialization interface for the plurality of functions, execute function codes of the plurality of functions through the initialization interface, and generate function instances of the plurality of functions;
a calling sub-module, configured to set a calling interface for the plurality of functions, and call function instances of the plurality of functions through the calling interface;
And the destruction sub-module is used for setting a destruction interface for the plurality of functions and, if an objective function among the plurality of functions is not called within a preset time period, destroying through the destruction interface the function instance that was created when the objective function was initialized.
Optionally, the apparatus further comprises:
and the control module is used for controlling, through a function orchestration capability, some of the plurality of functions to cooperatively complete one or more pieces of business logic.
Optionally, the apparatus further comprises:
the first receiving module is used for receiving alarm information initiated when the runtime resources of an objective function are detected to meet a preset rule, where the alarm information carries an alarm type;
the first judging module is used for judging, when the alarm type is a scale-out alarm, whether the number of running function instances of the objective function is greater than or equal to a preset threshold;
the second judging module is used for judging, if not, whether the occurrence frequency of the current alarm information is greater than a preset frequency threshold;
the scale-out module is used for performing scale-out processing on the objective function if not;
the third judging module is used for judging, when the alarm type is a scale-in alarm, whether the objective function has been accessed within a preset time;
and the scale-in module is used for performing scale-in processing on the objective function if not.
Optionally, the apparatus further comprises:
the second receiving module is used for receiving a trigger instruction for adjusting the number of function instances of the plurality of functions in advance;
the analysis module is used for analyzing the usage of the plurality of functions according to the trigger instruction;
and the adjusting module is used for adjusting the number of function instances of the plurality of functions a preset time before the traffic peak occurs.
Optionally, the apparatus further comprises:
a third receiving module, configured to receive a call request for calling the plurality of functions;
and the routing module is used for routing the call request to the function instances of the functions based on load balancing.
Optionally, the apparatus further comprises:
the generation module is used for generating a new function instance when detecting that the objective function instance is down or unavailable;
and the rerouting module is used for rerouting traffic for the objective function instance to the new function instance.
According to a further embodiment of the invention, there is also provided a computer-readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the invention, each interface of a microservice is packaged as a function to obtain a plurality of packaged functions, where each function is an independently runnable service that completes one business function; the interfaces of the microservice are deployed through the functions; and resources are scheduled according to the functions. This solves the problems in the related art that the microservice development mode leaves server resources under-utilized and development results unshareable: microservices are packaged as functions, the functions corresponding to the microservices are managed uniformly, and resources are scheduled uniformly through the functions, improving server resource utilization and making development results shareable.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
Fig. 1 is a block diagram of the hardware structure of a mobile terminal running a resource scheduling method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a resource scheduling method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a resource scheduling platform according to an embodiment of the invention;
FIG. 4 is a schematic diagram of a functional base runtime according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a function runtime property deployment in accordance with an embodiment of the present invention;
FIG. 6 is a schematic diagram I of functional intelligent routing according to an embodiment of the present invention;
FIG. 7 is a schematic diagram II of functional intelligent routing according to an embodiment of the present invention;
FIG. 8 is a schematic diagram III of functional intelligent routing according to an embodiment of the present invention;
FIG. 9 is a schematic diagram IV of functional intelligent routing according to an embodiment of the present invention;
FIG. 10 is a schematic diagram I of an operation and maintenance node function instance rerouting in accordance with an embodiment of the present invention;
FIG. 11 is a schematic diagram II of an operation and maintenance node function instance rerouting in accordance with an embodiment of the present invention;
fig. 12 is a block diagram of a resource scheduling apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the drawings in conjunction with embodiments. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
Example 1
The method embodiment provided in the first embodiment of the present application may be executed in a mobile terminal, a computer terminal or a similar computing device. Taking a mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of a mobile terminal according to an embodiment of the present invention, where, as shown in fig. 1, the mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processors 102 may include, but are not limited to, a microprocessor MCU or a programmable logic device FPGA, etc.) and a memory 104 for storing data, and optionally, a transmission device 106 for a communication function and an input/output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a resource scheduling method in an embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, implement the above-mentioned method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In this embodiment, a resource scheduling method running on the mobile terminal or the network architecture is provided, fig. 2 is a flowchart of a resource scheduling method according to an embodiment of the present invention, as shown in fig. 2, applied to a platform, where the flowchart includes the following steps:
step S202, packaging each interface of the microservice as a function to obtain a plurality of packaged functions, where each function is an independently runnable service that completes one business function;
step S204, deploying the interfaces of the microservice through the plurality of functions;
and step S206, scheduling resources according to the plurality of functions.
The functions are independent of each other, so they can be developed and tested rapidly. Meanwhile, each function manages its own data and the platform manages the functions, so the platform can integrate all data, eliminate data islands, and better help enterprises achieve digital transformation.
Through steps S202 to S206, each interface of the microservice is packaged as a function to obtain a plurality of packaged functions, where each function is an independently runnable service that completes one business function; the interfaces of the microservice are deployed through the functions; and resources are scheduled according to the functions. This solves the problems in the related art that the microservice development mode leaves server resources under-utilized and development results unshareable: microservices are packaged as functions, the functions corresponding to the microservices are managed uniformly, and resources are scheduled uniformly through the functions, improving server resource utilization and making development results shareable.
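As a rough illustration only (the names Platform, wrap_interface, and so on are invented for this sketch and are not the patent's implementation), the three steps can be sketched as:

```python
class Platform:
    def __init__(self):
        self.functions = {}   # function name -> callable (one business function each)
        self.deployed = set()

    def wrap_interface(self, name, handler):
        """S202: package one microservice interface as an independently runnable function."""
        self.functions[name] = handler

    def deploy(self):
        """S204: deploy every wrapped function."""
        self.deployed = set(self.functions)

    def schedule(self, name, *args):
        """S206: route a request to a deployed function."""
        if name not in self.deployed:
            raise KeyError(f"function {name!r} is not deployed")
        return self.functions[name](*args)


platform = Platform()
platform.wrap_interface("upload_image", lambda img: f"stored:{img}")
platform.deploy()
result = platform.schedule("upload_image", "cat.png")  # "stored:cat.png"
```

The point of the sketch is only the separation of concerns: wrapping, deployment, and scheduling are distinct platform responsibilities.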
In the embodiment of the present invention, the step S202 may specifically include: receiving the function codes of the plurality of functions uploaded after editing is completed, and acquiring the runtime framework selected when each function was uploaded; generating the plurality of functions according to the function codes and the corresponding runtime frameworks, and storing the plurality of functions; configuring the runtime attributes of the plurality of functions respectively to complete their deployment; further abstracting the runtime attributes of each function into a function object; and monitoring the function object.
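A minimal sketch of this deployment flow, under assumptions of this illustration (the runtime names jdk/spring/jetty are taken from the runtimes the description mentions; FunctionObject and deploy_function are invented names):

```python
from dataclasses import dataclass, field

# Illustrative set of basic runtime frameworks offered at upload time.
SUPPORTED_RUNTIMES = {"jdk", "spring", "jetty"}

@dataclass
class FunctionObject:
    """The runtime attributes of one function, abstracted into an object
    that the platform can monitor."""
    name: str
    code: str
    runtime: str
    attributes: dict = field(default_factory=dict)

def deploy_function(name, code, runtime, **runtime_attrs):
    """Receive uploaded function code plus the runtime framework chosen at
    upload time, generate the function, and configure its runtime attributes
    to complete deployment."""
    if runtime not in SUPPORTED_RUNTIMES:
        raise ValueError(f"unsupported runtime: {runtime}")
    return FunctionObject(name, code, runtime, dict(runtime_attrs))

fn = deploy_function("hello", "return 'hi'", "jdk", memory_mb=128)
```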
In an alternative embodiment, after the runtime attributes of the plurality of functions are configured and their deployment is completed, a notification message is received that is triggered when a resource update is determined to exist by monitoring the function object; the plurality of functions are then updated according to the notification message, where updating may specifically include adding, deleting, modifying, and the like.
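The monitor-and-notify pattern described here can be sketched as a tiny observer (all names invented for illustration): each add/delete/modify of a function object both updates the store and emits a notification.

```python
store = {}        # function name -> function code
events = []       # notifications delivered to monitors of the function objects

def update_function(action, name, code=None):
    """Apply a resource update (add / modify / delete) and notify monitors."""
    if action in ("add", "modify"):
        store[name] = code
    elif action == "delete":
        store.pop(name, None)
    events.append((action, name))   # the triggered notification message

update_function("add", "f1", "v1")
update_function("modify", "f1", "v2")
update_function("delete", "f1")
```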
In another optional embodiment, after the runtime attributes of the plurality of functions are configured and their deployment is completed, an initialization interface is set for the plurality of functions; the function codes are executed through the initialization interface, generating function instances of the plurality of functions. A calling interface is set for the plurality of functions, through which the function instances are called. A destruction interface is also set for the plurality of functions: if an objective function among the plurality of functions is not called within a preset time period, the function instance that was created when the objective function was initialized is destroyed through the destruction interface.
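The three lifecycle interfaces can be sketched as follows (a hypothetical holder class; in this toy version "executing the function code" is just evaluating a lambda expression, which is an assumption of the sketch, not the patent's mechanism):

```python
import time

class FunctionLifecycle:
    """init() builds the function instance from its code, invoke() calls it,
    and destroy_if_idle() reclaims the instance created at initialization
    when the function has not been called within idle_timeout seconds."""

    def __init__(self, code, idle_timeout=300.0):
        self.code = code
        self.idle_timeout = idle_timeout
        self.instance = None
        self.last_called = 0.0

    def init(self):
        # Toy stand-in for the initialization interface executing the code.
        self.instance = eval(self.code)
        self.last_called = time.monotonic()

    def invoke(self, *args):
        # Calling interface: initialize on demand, then call the instance.
        if self.instance is None:
            self.init()
        self.last_called = time.monotonic()
        return self.instance(*args)

    def destroy_if_idle(self, now=None):
        # Destruction interface: drop the instance after the idle timeout.
        now = time.monotonic() if now is None else now
        if self.instance is not None and now - self.last_called > self.idle_timeout:
            self.instance = None
            return True
        return False

fn = FunctionLifecycle("lambda x: x * 2", idle_timeout=1.0)
doubled = fn.invoke(21)   # 42
```

A destroyed function is transparently re-initialized the next time it is invoked, which is what makes idle reclamation safe.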
In another optional embodiment, after the interfaces of the microservice are deployed through the plurality of functions, a function orchestration capability controls some of the functions to cooperatively complete one or more pieces of business logic. Providing function orchestration on top of the functions lets them cooperate to complete various business logic, making it easy to quickly build products that adapt to market changes. For example, if the microservice corresponding to function 1 uploads an image and the microservice corresponding to function 2 prints an image, orchestrating functions 1 and 2 into a new composite function lets the system finish uploading and printing the image.
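The upload-then-print example can be sketched as simple function composition (illustrative names; the real platform's orchestration is of course richer than a linear pipeline):

```python
def upload_image(img):
    # Function 1: the "upload image" interface (illustrative)
    return {"stored": img}

def print_image(record):
    # Function 2: the "print image" interface (illustrative)
    return f"printed {record['stored']}"

def orchestrate(*steps):
    """Arrange independent functions into one composite function:
    each function's output feeds the next."""
    def composed(value):
        for step in steps:
            value = step(value)
        return value
    return composed

upload_and_print = orchestrate(upload_image, print_image)
outcome = upload_and_print("cat.png")   # "printed cat.png"
```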
In the embodiment of the invention, an intelligent scheduling system gives functions the capability of dynamic scaling: computing resources are applied for dynamically at the traffic peak and reclaimed automatically when the peak passes, maximizing server resource utilization. Specifically, alarm information is received that is initiated when the runtime resources of an objective function are detected to meet a preset rule, where the alarm information carries an alarm type. When the alarm type is a scale-out alarm, it is judged whether the number of running function instances of the objective function is greater than or equal to a preset threshold; if not, it is judged whether the occurrence frequency of the current alarm information is greater than a preset frequency threshold; if not, scale-out processing is performed on the objective function. When the alarm type is a scale-in alarm, it is judged whether the objective function has been accessed within a preset time; if not, scale-in processing is performed on the objective function. That is, scaling is performed according to the alarm information.
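The decision sequence just described can be written down directly (parameter names are invented; the reading that a too-frequent alarm suppresses scale-out follows the order of checks in the text and is an assumption of this sketch):

```python
def handle_alarm(alarm_type, *, instances, max_instances,
                 alarm_freq, freq_threshold, accessed_recently):
    """Return "scale_out", "scale_in", or "none" per the described checks."""
    if alarm_type == "scale_out":
        if instances >= max_instances:
            return "none"         # instance count already at the preset ceiling
        if alarm_freq > freq_threshold:
            return "none"         # alarm firing more often than the frequency threshold
        return "scale_out"
    if alarm_type == "scale_in":
        # Shrink only if the function was not accessed within the preset time.
        return "none" if accessed_recently else "scale_in"
    return "none"
```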
In an alternative embodiment, on top of the automatic scaling capability, the platform maintains a set of early-warning capabilities: by analyzing the usage of each function together with empirical values from external activities, the platform dynamically adjusts the number of function instances through the interface a preset time (for example, 10 minutes) before the traffic peak arrives, so as to satisfy a large number of concurrent requests. This part of the operation is initiated manually by operation and maintenance personnel. Specifically, after the interfaces of the microservice are deployed through the plurality of functions, a trigger instruction for adjusting the number of function instances in advance is received; the usage of the functions is analyzed according to the trigger instruction; and the number of function instances is adjusted a preset time before the traffic peak occurs.
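One simple way to size such a pre-warming adjustment from usage history (per_instance_capacity and headroom are invented tuning knobs of this sketch, not values from the patent):

```python
def prewarm_target(hourly_requests, per_instance_capacity=100, headroom=1.5):
    """Given observed requests per hour, return the instance count to
    pre-warm shortly before the predicted traffic peak."""
    peak = max(hourly_requests)
    needed = -(-peak // per_instance_capacity)     # ceiling division
    return max(1, int(needed * headroom))

# With a peak of 950 req/h and 100 req/h per instance:
# ceil(950 / 100) * 1.5 -> 15 instances pre-warmed before the peak.
target = prewarm_target([120, 950, 300])
```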
In another alternative embodiment, after adjusting the number of function instances of the plurality of functions a preset time before the traffic peak occurs, a call request to call the plurality of functions is received, the call request being routed to the function instances of the plurality of functions based on load balancing.
In another alternative embodiment, when it is detected that an objective function instance is down or unavailable, a new function instance is generated, specifically through kubernetes, and requests for the objective function instance are rerouted to the new function instance.
The embodiment of the invention constructs a comprehensive research and development platform integrating development, testing, operation and maintenance, which refines micro-services to the interface level: the interface of each micro-service is deployed in the platform as an independently runnable service (hereinafter referred to as a function). The platform bottom layer adopts the open-source kubernetes scheduling system, and all services run in kubernetes Pods. FIG. 3 is a schematic diagram of a resource scheduling platform according to an embodiment of the present invention. As shown in FIG. 3, each function is an independently runnable entity that does not depend on other functions and performs one business function; functions are registered and incorporated into the platform for management, and are accessed through a unified access portal and custom trigger portals. Because only the service is exposed externally, a developer need only implement the business function itself, independent of programming language: a function can be realized in java, nodejs or python, and a developer need only follow the development document for the chosen language to produce a runnable function. As the platform matures, more programming languages will be supported.
To simplify function development, the platform is built with some basic function runtimes. For example, jdk is needed to run java functions, some functions need the Spring framework, and some need the jetty framework; different functions have different requirements, and the platform cannot provide all-encompassing Runtime support for every function, so it provides a set of relatively basic Runtime frameworks. As platform capabilities grow, runtimes for more programming languages will be covered. The currently provided runtimes are:
Java-with-jetty provides a jetty runtime framework; developers can directly use jetty's capabilities for input and output programming.
Java-with-Spring provides a Spring runtime framework; developers can directly use Spring's capabilities for input and output programming.
Nodejs-with-express provides an express runtime framework.
Python-with-flask provides a Flask runtime framework.
When writing a function, a developer only needs to download the appropriate runtime base SDK package and then write the function. If the function depends on other class libraries, these can be packaged together with the function into a zip archive, which is finally uploaded to the platform for storage. Note that the underlying runtime need not be packaged: the platform automatically pulls the base runtime image at run time. The zip package contains the function code developed by the user together with the class libraries, resources and other information the function depends on at run time.
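The packaging step can be sketched as below — a minimal illustration using the standard library, with illustrative file names; the platform's real package layout is not specified in the text:

```python
import io
import zipfile

def package_function(sources: dict) -> bytes:
    """Bundle user function code and dependent libraries into a zip archive.

    The base runtime image is NOT included; the platform pulls it at run time.
    `sources` maps archive paths to file contents (names here are illustrative).
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in sources.items():
            zf.writestr(name, data)
    return buf.getvalue()
```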
On this basis, the basic runtime externally provides a unified Hypertext Transfer Protocol (HTTP) communication service, listens on a fixed port and receives external access requests. The platform's basic runtime exposes three interfaces:
Function initialization interface (init). It is used to dynamically load user-written function code at run time and reduce the cold-start time of functions. After the basic runtime starts, it listens on the HTTP port; when a function initialization request is received, it looks up the user's function code under a designated directory and loads the function into memory through a dynamic loading mechanism, generating an instance of the function or a callable object. The dynamic loading mechanism differs between runtimes: java loads through a custom ClassLoader, c++ through a dynamic link library, golang through the plugin mechanism, and nodejs can load directly via require.
Function call interface (invoke). After initialization the function instance is loaded; when an invoke request is received, the function code is executed, input and output are processed, and the result is returned to the caller synchronously or asynchronously.
Function destruction interface (destroy). When a function is not called within a certain period, the platform automatically destroys the instance created at initialization. A callback URL is invoked before destruction, and a developer can implement this interface to perform custom resource-release operations.
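The three interfaces can be sketched as a single class, with the HTTP transport omitted. This is an illustrative shape only, not the platform's actual runtime code:

```python
# Minimal sketch of the three runtime interfaces (init / invoke / destroy).
# A real base runtime listens on a fixed HTTP port; transport is omitted here.
class BaseRuntime:
    def __init__(self):
        self.handler = None  # callable loaded on init, reused across calls

    def init(self, loader):
        """Dynamically load user code and keep the instance in memory."""
        self.handler = loader()

    def invoke(self, event):
        """Execute the loaded function; the same instance serves every call."""
        if self.handler is None:
            raise RuntimeError("function not initialized")
        return self.handler(event)

    def destroy(self, on_destroy=None):
        """Release the instance; an optional callback does custom cleanup."""
        if on_destroy:
            on_destroy()
        self.handler = None
```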
FIG. 4 is a schematic diagram of the functional base runtime according to an embodiment of the present invention. As shown in FIG. 4, the base runtime provides a runtime mechanism instead of an exec mechanism. In exec mode, each call generates a function instance, executes the function, and destroys the instance, so every call incurs the time overhead of initialization and destruction. With the runtime mechanism, initialization and destruction each happen only once: however many calls are made, the same function instance is shared, avoiding repeated unnecessary initialization and destruction operations. Because a function instance is shared by multiple threads, special attention must be paid to concurrency issues during development.
The platform provides a multi-language software development kit (SDK). A developer first downloads the SDK or writes function information according to the development document. Note that each function should implement only one service function: if an application needs multiple interfaces, multiple functions should be developed, rather than one function implementing everything.
The developer writes the function content in whichever programming technology he or she is skilled in. If other libraries are needed during development, the library files must be downloaded in advance, and the library files and function code are placed into the compressed package at packaging time, ensuring that all required resources can be found in the package.
The base runtime provides some context information for the function to use. The context information currently provided is:
acquiring a unique identifier ID of a current function call request;
acquiring a log object used by a function;
acquiring authorization information used by the function;
obtaining meta information such as version number, method name, author and the like of the function;
acquiring request information of a client, such as client IP, header information and the like;
acquiring environment variables used by the function;
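The context information listed above could be modeled as follows. The field names here are illustrative assumptions, not the platform's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical shape of the per-invocation context described above.
@dataclass
class FunctionContext:
    request_id: str                             # unique ID of the current call
    function_name: str                          # function meta information
    version: str
    client_ip: str                              # client request information
    headers: dict = field(default_factory=dict)
    env: dict = field(default_factory=dict)     # environment variables
    auth: dict = field(default_factory=dict)    # authorization information

    def get_env(self, key, default=None):
        return self.env.get(key, default)
```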
the function signatures in different languages are different.
In the java language, the function needs to implement the RequestHandler interface or the RequestStreamHandler interface and provide the function body.
In nodejs programming, code can be written using module.exports. Specific implementations require reference to the development documents.
In the construction stage, the developer completes the function code and executes a build command. Different languages provide different build methods, such as one-click maven builds in java and integrated development environment (IDE) builds in other languages. Finally the function information, dependent library resources, pictures and other static resources are compressed into a zip package, completing the development work.
From the development process it can be seen that the basic runtime information of a function is provided by platform developers, relatively few steps are needed to develop a function, and only the function content needs to be filled in according to the specification.
For enterprises, a large number of utility functions can thus be developed easily, and lead time is significantly shortened.
After the function is written, the prepared zip package can be uploaded to the platform through the web console, the command line or the IDE integration tool, and the runtime attributes of the function are then configured, completing the function deployment.
The runtime attributes configured for a function are:
the name of the function, identifying a unique function name under the current user;
the version number of the function, monotonically increasing, describing the version information of the current function and used for gray release and version rollback;
the basic runtime environment of the function, providing runtime support for the function;
the labels of the function, for convenient searching;
the description information of the function, so a user can understand its purpose;
the memory resource limit of the function, identifying the maximum memory allowed; if it is exceeded, the function is killed;
the environment variables of the function, attributes customized by the user for the function to access;
the runtime roles of the function, defining which roles may access the function, for rights control;
the entry of the function, the name of the entry function inside the specified zip package, which the platform uses as the access entry at run time;
the maximum instance number of the function, specifying the upper bound during dynamic capacity expansion; a minimum need not be specified, the default being 0;
the maximum concurrency of the function, specifying the maximum concurrency one function instance can support simultaneously;
the maximum execution time of the function; if the function has not completed within this time, it is killed by the platform;
the function code package, the unique identification code returned after the user uploads the package to cloud storage;
the execution plan of the function, mainly indicating whether the function is sensitive to cold-start time;
the load-balancing strategy of the function, used for load balancing among function instances.
The above attributes are abstracted in the platform into a Function object, which describes all the meta-information of the function and is implemented as a kubernetes custom resource type. Finally it is serialized into json and saved into etcd, the distributed storage component of kubernetes.
When the platform starts, the customized CRD resource objects are obtained from etcd, and the Function objects are monitored through the List-and-Watch mechanism of kubernetes. Once a resource is updated in etcd, the platform is notified, and the three methods addFunc, deleteFunc and updateFunc correspondingly handle Function addition, deletion and modification.
When the platform discovers a newly added Function object, it extracts the object information of the Function, constructs the configuration information into a Deployment object in kubernetes, and then submits the Deployment object to kubernetes for scheduling. The specific conversion rules are:
the author, user name and version number of the function are bound together to form the name attribute of the Deployment object.
The execution plan, environment variables, function author, namespace, labels, description information, maximum instance number, roles, entry, execution time and other attributes of the function are bound to the annotation attributes of the Deployment object as description information.
The Deployment object constructs two containers: one for downloading the user-defined function code and the other for pulling up an instance of the user function at run time.
1. The downloading container is a container built into the platform. It shares the same pod, and therefore the same data volume, with the user function container, and serves as a precondition of the user container. It first starts an HTTP port and receives an external download request, then reads the request parameters and, according to the function information in the parameters, downloads the function code package developed by the user from cloud storage to a local volume and decompresses it. If for some reason the download does not succeed, it is retried 3 times; if still unsuccessful, the function construction is declared failed.
2. The basic image information of the function is bound to the image attribute of the user function container, and the image pull policy is IfNotPresent.
3. The memory attribute of the function is bound to the Resource attribute of the Deployment.
Finally, a Deployment object is constructed according to the Function attributes and submitted to kubernetes; the object automatically pulls up the pod of the user function according to the Function information, so that the function actually runs. The pod is the carrier of function operation: it is a container with all the resources required for the function to run, and is the minimum unit scheduled in kubernetes. The function is ultimately reported as a pod to the kubernetes platform.
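The conversion rules above can be sketched as a Function-to-Deployment mapping. The dictionary mirrors the Kubernetes Deployment schema, but field values and the input format are illustrative assumptions:

```python
# Sketch of the conversion rules above: a Function object becomes a Kubernetes
# Deployment with a download (init) container and a runtime container.
def function_to_deployment(fn: dict) -> dict:
    # rule 1: author, name and version are bound into the Deployment name
    name = f'{fn["author"]}-{fn["name"]}-{fn["version"]}'
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        # rule: remaining attributes go into annotations as description info
        "metadata": {"name": name, "annotations": fn.get("meta", {})},
        "spec": {"template": {"spec": {
            "initContainers": [{              # downloads the user's zip package
                "name": "code-downloader",
                "image": "platform/downloader",   # illustrative image name
            }],
            "containers": [{                  # runs the user function
                "name": "runtime",
                "image": fn["runtime_image"],
                "imagePullPolicy": "IfNotPresent",   # rule 2
                "resources": {"limits": {"memory": f'{fn["memory_mb"]}Mi'}},  # rule 3
            }],
        }}},
    }
```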
FIG. 5 is a schematic diagram of attribute deployment during function operation according to an embodiment of the present invention. As shown in FIG. 5, while creating the Deployment, the platform also creates a service object and a horizontal auto-scaling object for the Deployment. The service object exposes the access address of the function externally while providing simple load-balancing capability. The horizontal auto-scaling object monitors the resources used by the function and, if a certain resource limit is exceeded, automatically scales out horizontally; these capabilities depend on kubernetes. What the platform does is convert the Function object into a data structure that a controller in Kubernetes can recognize.
After deployment, the functions can be run in kubernetes, and the platform gives the functions the capability of dynamic scaling and intelligent routing.
The platform constructs a reliable monitoring and early-warning system for function instances, measuring the runtime resources of function container instances, including but not limited to central processing unit (CPU) utilization, memory utilization, error access rate, timeout access rate, error access count, timeout access count and total time consumed. The monitoring data are captured and analyzed in real time through the open-source software Prometheus. When the monitored data reach a preset threshold or meet a specified rule, the AlertManager is automatically triggered for alarm processing. In the alarm strategy, the platform takes corresponding expansion or contraction actions according to the alarm type. Trigger rules are written in PromQL, which Prometheus provides.
After the platform receives a Prometheus capacity-expansion alarm, it makes the following judgments:
If the number of instances of the alarming function has already reached the maximum at run time, the capacity-expansion request is rejected.
It is judged whether the frequency of the current alarm information is within the expected range; if it exceeds the expected threshold (configurable), manual processing is required to determine whether capacity expansion is really needed or whether this is an illegal expansion caused by an abnormality.
The target replica count is calculated; a reasonable replica count is computed from the monitoring information and the actual operation requirements.
The replica count of the Deployment corresponding to the function in kubernetes is modified, thereby achieving capacity expansion.
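The text does not give the exact replica formula; one plausible choice is the proportional rule used by the Kubernetes horizontal pod autoscaler, sketched here under that assumption:

```python
import math

# Assumed HPA-style proportional rule for the target replica count;
# the patent only says it is computed from monitoring info and requirements.
def target_replicas(current, metric_value, metric_target, max_replicas):
    desired = math.ceil(current * metric_value / metric_target)
    # clamp: at least one replica when scaling up, never above the cap
    return max(1, min(desired, max_replicas))
```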
The platform starts a background thread to monitor all function instances. When a function instance has not been accessed within 2 minutes, the kubernetes API is called to directly modify the replica count of the function's Deployment to 0, releasing all computing resources.
When the number of function instances is 0 and the next request arrives, the instance count is first adjusted to 1 so that a function instance is pulled up to serve it.
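The idle reaper and scale-from-zero behaviour can be sketched as below. This is an illustrative model; the class and method names are assumptions:

```python
import time

# Sketch of the idle reaper (scale to zero after 2 idle minutes) and
# scale-from-zero (pull one instance back up on the next request).
class InstanceManager:
    IDLE_SECONDS = 120  # "not accessed within 2 minutes"

    def __init__(self):
        self.replicas = 1
        self.last_access = time.monotonic()

    def reap_if_idle(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_access >= self.IDLE_SECONDS:
            self.replicas = 0   # release all computing resources

    def on_request(self, now=None):
        self.last_access = time.monotonic() if now is None else now
        if self.replicas == 0:
            self.replicas = 1   # cold start: pull one instance back up
```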
On the basis of the automatic expansion and contraction capability, the platform maintains an early-warning sensing capability: by analyzing the usage of each function together with empirical values from external activities, the number of function instances is dynamically adjusted through the interface 10 minutes before a traffic peak arrives, so as to handle a large number of concurrent requests. This operation is initiated manually by operation and maintenance personnel.
The platform manages the instance replicas of all functions. When an external call arrives, it first reaches the intelligent routing node, which through a series of rule calculations selects the function instance most suitable to serve the request and forwards to it, achieving load balancing.
The routing policies supported by the platform are: random forwarding, sequential polling, connection pool loading, nearby routing, etc.
FIG. 6 is a first schematic diagram of intelligent routing of functions according to an embodiment of the present invention. As shown in FIG. 6, in random forwarding, call requests are sent randomly to a function instance.
FIG. 7 is a second schematic diagram of intelligent routing of functions according to an embodiment of the present invention. As shown in FIG. 7, sequential polling selects a function instance for each call request in an incremental-remainder (round-robin) manner.
FIG. 8 is a third schematic diagram of intelligent routing of functions according to an embodiment of the present invention. As shown in FIG. 8, connection pool loading works on the same principle as a database connection pool: one of the not-yet-in-use function instances is first selected at random and its use state is marked "in use"; when the next request arrives, instances that are in use are skipped, so a not-yet-in-use instance is chosen. If all function instances are in the "in use" state, the request enters a wait state until a function's use state changes, whereupon it wakes up and is routed to an appropriate function instance.
FIG. 9 is a fourth schematic diagram of intelligent routing of functions according to an embodiment of the present invention. As shown in FIG. 9, in nearby routing the function instance closest to the request issuer is selected.
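Two of the routing strategies above can be sketched minimally. The names and shapes below are illustrative assumptions:

```python
# Sequential polling: pick an instance by incremental remainder (round robin).
def round_robin(instances, counter):
    return instances[counter % len(instances)]

# Connection pool loading: route only to instances not currently "in use".
class ConnectionPoolRouter:
    def __init__(self, instances):
        self.free = set(instances)

    def acquire(self):
        if not self.free:
            return None          # all "in use": caller waits for a release
        return self.free.pop()   # pop marks the instance as "in use"

    def release(self, inst):
        self.free.add(inst)      # back to the not-in-use set
```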
A load strategy is specified when the function is created. At run time the platform judges the maximum concurrency supported by the current function instance; if the current concurrency exceeds the threshold, the request is added to an event queue to await dequeuing. Once the waiting queue is also full, degradation processing is performed directly.
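The concurrency limit, wait queue and degradation path just described can be sketched as follows; the class name and return values are illustrative:

```python
from collections import deque

# Sketch of the concurrency gate: over-threshold requests queue up, and once
# the waiting queue is also full, the request is degraded (rejected).
class ConcurrencyGate:
    def __init__(self, max_concurrency, max_queue):
        self.max_concurrency = max_concurrency
        self.max_queue = max_queue
        self.active = 0
        self.queue = deque()

    def submit(self, request):
        if self.active < self.max_concurrency:
            self.active += 1
            return "running"
        if len(self.queue) < self.max_queue:
            self.queue.append(request)
            return "queued"
        return "degraded"        # queue full -> degradation processing

    def finish(self):
        self.active -= 1
        if self.queue:           # wake the next queued request
            self.queue.popleft()
            self.active += 1
```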
After the function instance is configured, execution can be scheduled on the platform. The platform supports setting different triggers for a function, so the function can be called in different ways. The triggers provided by the current platform are:
triggering through an application programming interface (API), i.e., requesting the function's url via HTTP or HTTPS;
triggering through timed tasks, configurable with cron expressions following the same rules as linux;
triggering through a message queue, where a function instance is called after a message is obtained from the queue.
For the API trigger mode, the platform maintains the correspondence between each Function and each route, implemented via a custom kubernetes CRD (Custom Resource Definition) named HttpTrigger, on the same principle as the Function object. The platform monitors changes to HttpTrigger objects; when an addition, deletion or modification is found, the route bindings are rebuilt and stored in the routing table. When a request arrives, the path of the function instance corresponding to the request path is obtained from the routing table, completing the data forwarding.
For the timed-task trigger mode, the platform provides cron expressions supporting range triggering, periodic triggering, timed triggering and so on, implemented via a custom kubernetes CRD named TimeTrigger. Changes to TimeTrigger objects are monitored; once there is a change, the binding between the cron expression and the function is rebuilt, the old cron schedule is stopped and the new one started.
For the message-queue trigger mode, the platform subscribes to the designated message queue. When a message is received, the information of the called function is parsed from the message, the platform constructs a standard API request, and the subsequent calling process is the same as the API trigger mode.
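The HttpTrigger routing table amounts to a path-to-function mapping that is rebuilt on trigger changes; a minimal sketch, with assumed names:

```python
# Sketch of the routing table rebuilt from HttpTrigger changes:
# request path -> function address, consulted on each incoming request.
class RouteTable:
    def __init__(self):
        self.routes = {}

    def bind(self, path, function_name):   # on HttpTrigger add/modify
        self.routes[path] = function_name

    def unbind(self, path):                # on HttpTrigger delete
        self.routes.pop(path, None)

    def resolve(self, path):               # on each incoming request
        return self.routes.get(path)
```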
In the operation and maintenance stage, since the platform is built on kubernetes and managed by Deployment, when a function instance goes down or becomes unavailable, kubernetes automatically pulls up a new function instance.
Gray (canary) release can be performed by configuring the traffic ratio on the trigger. FIG. 10 is a first schematic diagram of rerouting an operation and maintenance node function instance according to an embodiment of the present invention. As shown in FIG. 10, the platform configures the trigger attributes and adds a traffic-ratio rule, for example 10% to version v2 and 90% to version v1.
FIG. 11 is a second schematic diagram of rerouting an operation and maintenance node function instance according to an embodiment of the present invention. As shown in FIG. 11, all function access request traffic is sent to the router node of the platform; the router node distributes requests to the different function versions according to the trigger-configured ratio through a weighted random algorithm, and load balancing is then performed through the intelligent routing algorithm among the function instances.
In actual enterprise production, a single function often cannot complete a business function on its own; more commonly, multiple functions cooperate to complete one business function through flow orchestration. The platform supports this through visual editing of a flow file, hereinafter referred to as a function workflow.
The function workflow is responsible for orchestrating existing functions, accessing them through the API trigger mode. The platform currently uses an xml-format description file to describe the calling relations of the functions.
The platform currently provides 11 types of nodes; more types may be provided as the platform evolves:
1. Start and End nodes, representing the start and end of the flow.
2. Succeed and Fail nodes, representing successful termination and abnormal termination of the flow.
3. Task node, representing a call to a function in the platform, triggered in API mode.
4. Wait node, representing that the current node must wait for a period of time before continuing.
5. Choice node, representing that the current node performs an operation and, after obtaining the result, decides which branch to flow to.
6. Parallel node, representing nodes executed concurrently; multiple tasks can be called simultaneously, and their cooperation mode can be All, Any, or a custom callback function, indicating respectively that the Parallel node continues downward when all subtasks finish, when any subtask finishes, or in a specific custom state.
7. Catch node, representing the rollback policy of the flow; it captures a specific exception when encountered and selects different branch nodes for processing according to the exception.
8. Retry node, representing a retry strategy the node may adopt if an exception occurs.
9. Map node, representing that the current task undergoes MapReduce-style task segmentation; the segmentation rule function must be specified.
10. Sub-flow node, representing that the current task is a sub-flow, itself a complete workflow task.
11. Human node, an artificial node; the flow continues only after manual intervention.
Context information is passed between nodes as a json data structure. When the function workflow executes, a complete json input request is received and passed to the first flow node; the output value of each node becomes the input value of the next node, flowing in turn until the flow ends, and the output value of the last node is the output value of the whole flow.
Each node can add, modify and delete input and output data; the platform provides a jsonpath description language for this processing.
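The data flow just described can be sketched in a few lines — each node is a function of the previous node's JSON output. The node implementations below are illustrative stand-ins:

```python
# Sketch of workflow data flow: each node receives the previous node's JSON
# output as its input, and the last node's output is the whole flow's result.
def run_workflow(nodes, payload):
    for node in nodes:
        payload = node(payload)   # output of node N is input of node N+1
    return payload

# Two illustrative nodes that add/modify fields, as the text describes.
double = lambda d: {**d, "value": d["value"] * 2}
label = lambda d: {**d, "label": f'value is {d["value"]}'}
```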
The orchestration of distributed transactions between nodes is supported, and each node is invoked in a completely asynchronous manner. The execution steps of each flow are incorporated into platform management: the platform records the runtime meta-information of every step, for example the start time of the workflow, node run times, and the input parameters, output parameters, time consumption and exception conditions of each node, and stores them in the platform database for convenient monitoring and querying. The platform provides online visual tracking of the execution process: each time a node is executed, the web console receives a flow message from the platform, so the workflow's node states can be updated in real time and a progress prompt is given to the user.
In the embodiment of the invention, because the platform provides the dynamic expansion and contraction mechanism and the intelligent routing mechanism, system boundaries become blurred: instead of applying for a server for each system, computing resources are applied for dynamically while functions run. The platform can thus utilize the computing resources of the whole enterprise, greatly reducing cost. Because functions are managed centrally, functions written by different users can be accessed and called by other users, achieving sharing at the enterprise organization level. Functions are developed and run independently, with no dependence on other projects or services; they can be debugged and tested independently, which greatly facilitates rapid development and testing. The platform manages functions and functions manage data, so the functions effectively represent the enterprise's data, and flow orchestration makes that data easy to leverage, providing support for digital transformation. Releasing a new service through flow orchestration requires only visual editing, making full use of existing functions and combining different service modes, which suits rapid market changes. Intelligent routing provides dynamic routing among function instances, load balancing, connection-pool policies and so on. On the basis of automatic expansion and contraction, intelligent scaling uses big-data analysis and human experience values to expand capacity in advance, greatly reducing the cold-start time of functions.
Function workflow orchestration provides a fully asynchronous high-performance orchestration engine; extended jsonpath expressions manipulate the data, and business logic is developed through visual editing.
Example 2
According to another embodiment of the present invention, there is further provided a resource scheduling apparatus applied to a platform, and fig. 12 is a block diagram of the resource scheduling apparatus according to an embodiment of the present invention, as shown in fig. 12, including:
the encapsulation module 122 is configured to encapsulate each interface of the micro service into a function, so as to obtain a plurality of encapsulated functions, where each function is a service capable of running independently, and is used to complete a service function;
a deployment module 124, configured to deploy the interface of the micro service through the plurality of functions;
the resource scheduling module 126 is configured to perform resource scheduling according to the multiple functions.
Optionally, the deployment module 124 includes:
the acquisition sub-module is used for receiving the function codes of the functions uploaded after editing is completed and acquiring a runtime frame selected when the functions are uploaded;
a generating sub-module, configured to generate the plurality of functions according to the function codes and the corresponding runtime frameworks, and store the plurality of functions;
and the configuration submodule is used for respectively configuring the runtime attributes of the functions and completing the deployment of the functions.
Optionally, the configuration submodule includes:
An abstract unit, configured to abstract a runtime attribute of each of the plurality of functions into a function object;
and the monitoring unit is used for monitoring the functional object.
Optionally, the apparatus further comprises:
the receiving sub-module is used for receiving a notification message triggered when it is determined, by monitoring the functional object, that the resource has been updated;
and the updating sub-module is used for updating the functions according to the notification message.
Optionally, the apparatus further comprises:
the execution sub-module, configured to set an initialization interface for the plurality of functions, execute the function codes of the plurality of functions through the initialization interface, and generate function instances of the plurality of functions;
the calling sub-module, configured to set a calling interface for the plurality of functions and call the function instances of the plurality of functions through the calling interface;
and the destruction sub-module, configured to set a destruction interface for the plurality of functions and, if a target function among the plurality of functions is not called within a preset time period, destroy through the destruction interface the function instance generated when the target function was initialized.
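The three lifecycle interfaces (initialize, call, destroy-when-idle) can be sketched as below. The clock is injected as a parameter so the idle check is deterministic; the idle limit, class names, and use of `eval` to stand in for "executing the function code" are all assumptions for illustration.

```python
# Sketch of the function lifecycle: init, invoke, destroy after idle timeout.
class FunctionInstance:
    def __init__(self, code, now):
        self.handler = eval(code)          # stands in for executing function code
        self.last_called = now

    def call(self, payload, now):
        self.last_called = now
        return self.handler(payload)

class Lifecycle:
    IDLE_LIMIT = 300  # seconds without a call before destruction (assumed)

    def __init__(self):
        self.instances = {}

    def init(self, name, code, now):       # initialization interface
        self.instances[name] = FunctionInstance(code, now)

    def invoke(self, name, payload, now):  # calling interface
        return self.instances[name].call(payload, now)

    def destroy_idle(self, now):           # destruction interface
        for name in [n for n, i in self.instances.items()
                     if now - i.last_called > self.IDLE_LIMIT]:
            del self.instances[name]

lc = Lifecycle()
lc.init("double", "lambda x: x * 2", now=0)
result = lc.invoke("double", 21, now=10)
lc.destroy_idle(now=400)   # 390 s idle exceeds the 300 s limit
```

Destroying idle instances is what lets the platform reclaim resources for functions that are deployed but not currently in use.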
Optionally, the apparatus further comprises:
and the control module, configured to control some of the plurality of functions to cooperatively complete one or more pieces of business logic through function orchestration.
Optionally, the apparatus further comprises:
the first receiving module, configured to receive alarm information initiated when it is detected that a runtime resource of a target function meets a preset rule, where the alarm information carries an alarm type;
the first judging module, configured to judge, in a case where the alarm type is a capacity expansion alarm, whether the number of running function instances of the target function is greater than or equal to a preset threshold;
the second judging module, configured to judge, in a case where the judgment result is negative, whether the occurrence frequency of the current alarm information is greater than a preset frequency threshold;
the capacity expansion module, configured to perform capacity expansion processing on the target function in a case where the judgment result is negative;
the third judging module, configured to judge, in a case where the alarm type is a capacity reduction alarm, whether the target function has been accessed within a preset time;
and the capacity reduction module, configured to perform capacity reduction processing on the target function in a case where the judgment result is negative.
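The alarm-driven decision sequence above can be sketched as a small decision function. The thresholds are illustrative, and the branch order follows the text as written: expand only when the instance count is below its cap and the alarm frequency does not exceed the frequency threshold; shrink only when the function was not accessed within the window.

```python
# Sketch: alarm-driven scale-out / scale-in decision.
MAX_INSTANCES = 10        # preset instance-count threshold (assumed)
FREQ_THRESHOLD = 5        # preset alarm-frequency threshold (assumed)

def handle_alarm(alarm, state):
    """Return 'expand', 'shrink', or 'ignore' for a target function's alarm."""
    if alarm["type"] == "expand":
        if state["instances"] >= MAX_INSTANCES:
            return "ignore"                 # already at the instance cap
        if state["alarm_count"] > FREQ_THRESHOLD:
            return "ignore"                 # alarm fires too frequently
        return "expand"
    if alarm["type"] == "shrink":
        # Shrink only if the function was not accessed in the preset time.
        return "shrink" if not state["accessed_recently"] else "ignore"
    return "ignore"

a1 = handle_alarm({"type": "expand"}, {"instances": 3, "alarm_count": 1})
a2 = handle_alarm({"type": "shrink"}, {"accessed_recently": False})
```

Gating on the instance cap prevents unbounded expansion, while the access check prevents shrinking a function that is still serving traffic.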
Optionally, the apparatus further comprises:
the second receiving module, configured to receive a trigger instruction for adjusting the number of function instances of the plurality of functions in advance;
the analysis module, configured to analyze the usage of the plurality of functions according to the trigger instruction;
and the adjusting module, configured to adjust the number of function instances of the plurality of functions at a preset time before a traffic peak occurs.
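As a hedged sketch of this pre-warming idea: given historical per-hour call counts, pick the peak hour and schedule an instance-count adjustment a preset lead time before it. The sizing rule (one instance per 100 calls) and the data shape are purely illustrative assumptions.

```python
# Sketch: plan a pre-peak instance adjustment from historical usage.
def plan_prewarm(hourly_calls, lead_hours=1, calls_per_instance=100):
    peak_hour = max(hourly_calls, key=hourly_calls.get)
    target = -(-hourly_calls[peak_hour] // calls_per_instance)  # ceiling division
    return {"warm_at_hour": (peak_hour - lead_hours) % 24, "instances": target}

history = {9: 120, 12: 950, 18: 400}   # calls observed per hour of day
plan = plan_prewarm(history)
```

Warming instances before the peak avoids cold-start latency precisely when traffic is highest.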
Optionally, the apparatus further comprises:
a third receiving module, configured to receive a call request for calling the plurality of functions;
and the routing module, configured to route the call request to the function instances of the plurality of functions based on load balancing.
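A minimal sketch of load-balanced routing follows; round-robin stands in here for whatever load-balancing policy the platform actually uses, and the `Router` class is an illustrative assumption.

```python
# Sketch: distribute call requests across function instances round-robin.
import itertools

class Router:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        instance = next(self._cycle)   # pick the next instance in rotation
        return instance, request

router = Router(["inst-a", "inst-b"])
picked = [router.route({"n": i})[0] for i in range(4)]
```

More sophisticated policies (least-connections, weighted) plug into the same `route` seam.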
Optionally, the apparatus further comprises:
the generation module, configured to generate a new function instance upon detecting that a target function instance is down or unavailable;
and the rerouting module, configured to reroute requests for the target function instance to the new function instance.
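The failover behavior can be sketched as below: when an instance is detected as down, generate a fresh instance and point the routing table at it. Health state as a plain dict and the naming scheme are illustrative assumptions.

```python
# Sketch: replace down instances and reroute traffic to the replacements.
import itertools

_counter = itertools.count(1)

def ensure_healthy(routing, health):
    """Replace any down instance with a newly generated one."""
    for func, instance in list(routing.items()):
        if not health.get(instance, False):
            new_instance = f"{func}-inst-{next(_counter)}"
            health[new_instance] = True      # new instance comes up healthy
            routing[func] = new_instance     # reroute to the new instance
    return routing

routing = {"pay": "pay-inst-0"}
health = {"pay-inst-0": False}               # instance detected down
ensure_healthy(routing, health)
```

Because callers address the function name rather than a specific instance, the reroute is transparent to them.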
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Example 3
Embodiments of the present invention also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store a computer program for performing the steps of:
S1, encapsulating each interface of a micro service into a function to obtain a plurality of encapsulated functions, where each function is a service capable of running independently and is used to complete one business function;
S2, deploying the interface of the micro service through the plurality of functions;
S3, performing resource scheduling according to the plurality of functions.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing a computer program.
Example 4
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, encapsulating each interface of a micro service into a function to obtain a plurality of encapsulated functions, where each function is a service capable of running independently and is used to complete one business function;
S2, deploying the interface of the micro service through the plurality of functions;
S3, performing resource scheduling according to the plurality of functions.
Alternatively, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations; details are not repeated here.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of computing devices. Optionally, they may be implemented in program code executable by computing devices, so that they may be stored in a storage device and executed by the computing devices; in some cases, the steps shown or described may be performed in an order different from that described here. Alternatively, they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description covers only the preferred embodiments of the present invention and is not intended to limit it; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present invention shall be included in its protection scope.

Claims (12)

1. A resource scheduling method, applied to a platform, characterized by comprising the following steps:
encapsulating each interface of a micro service into a function to obtain a plurality of encapsulated functions, wherein each function is an independently operable service used to complete one business function;
deploying the interface of the micro service through the plurality of functions, which comprises: receiving function codes of the plurality of functions uploaded after editing is completed, and acquiring the runtime framework selected when each function is uploaded; generating the plurality of functions according to the function codes and the corresponding runtime frameworks, and storing the plurality of functions; and respectively configuring the runtime attributes of the plurality of functions to complete the deployment of the plurality of functions;
and carrying out resource scheduling according to the functions.
2. The method of claim 1, wherein respectively configuring the runtime attributes of the plurality of functions and completing the deployment of the plurality of functions comprises:
abstracting the runtime attributes of each of the plurality of functions into a function object;
and monitoring the function object.
3. The method of claim 2, wherein after respectively configuring the runtime attributes of the plurality of functions and completing the deployment of the plurality of functions, the method further comprises:
receiving a notification message triggered when monitoring of the function object determines that a resource has been updated;
and updating the functions according to the notification message.
4. The method of claim 1, wherein after respectively configuring the runtime attributes of the plurality of functions and completing the deployment of the plurality of functions, the method further comprises:
setting an initialization interface for the functions, executing function codes of the functions through the initialization interface, and generating function instances of the functions;
setting a calling interface for the functions, and calling function instances of the functions through the calling interface;
setting a destruction interface for the plurality of functions, and, if a target function among the plurality of functions is not called within a preset time period, destroying, through the destruction interface, the function instance generated when the target function was initialized.
5. The method of claim 1, wherein after deploying the interface of the micro service through the plurality of functions, the method further comprises:
and controlling some of the plurality of functions to cooperatively complete one or more pieces of business logic through function orchestration.
6. The method according to claim 1, wherein the method further comprises:
receiving alarm information initiated when it is detected that a runtime resource of a target function meets a preset rule, wherein the alarm information carries an alarm type;
in a case where the alarm type is a capacity expansion alarm, judging whether the number of running function instances of the target function is greater than or equal to a preset threshold;
in a case where the judgment result is negative, judging whether the occurrence frequency of the current alarm information is greater than a preset frequency threshold;
in a case where the judgment result is negative, performing capacity expansion processing on the target function;
in a case where the alarm type is a capacity reduction alarm, judging whether the target function has been accessed within a preset time;
and, in a case where the judgment result is negative, performing capacity reduction processing on the target function.
7. The method of claim 1, wherein after deploying the interface of the micro service through the plurality of functions, the method further comprises:
receiving a trigger instruction for adjusting the number of function instances of the plurality of functions in advance;
analyzing the usage of the plurality of functions according to the trigger instruction;
and adjusting the number of function instances of the plurality of functions at a preset time before a traffic peak occurs.
8. The method of claim 7, wherein after adjusting the number of function instances of the plurality of functions at the preset time before the traffic peak occurs, the method further comprises:
receiving a call request for calling the functions;
routing the call request to a function instance of the plurality of functions based on load balancing.
9. The method according to any one of claims 1 to 8, further comprising:
generating a new function instance upon detecting that a target function instance is down or unavailable;
and rerouting requests for the target function instance to the new function instance.
10. A resource scheduling apparatus, applied to a platform, comprising:
the encapsulation module, configured to encapsulate each interface of a micro service into a function, so as to obtain a plurality of encapsulated functions, wherein each function is a service capable of running independently and is used to complete one business function;
The deployment module is used for deploying the interfaces of the micro service through the functions;
the resource scheduling module is used for scheduling resources according to the functions;
the deployment module comprises:
the acquisition sub-module, configured to receive the function codes of the plurality of functions uploaded after editing is completed, and to acquire the runtime framework selected when each function is uploaded;
a generating sub-module, configured to generate the plurality of functions according to the function codes and the corresponding runtime frameworks, and store the plurality of functions;
and the configuration submodule is used for respectively configuring the runtime attributes of the functions and completing the deployment of the functions.
11. A computer-readable storage medium, characterized in that a computer program is stored in the storage medium, wherein the computer program is arranged to perform the method of any one of claims 1 to 9 when run.
12. An electronic device comprising a memory and a processor, characterized in that a computer program is stored in the memory, and the processor is arranged to run the computer program to perform the method of any one of claims 1 to 9.
CN202010889940.6A 2020-08-28 2020-08-28 Resource scheduling method and device Active CN112035228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010889940.6A CN112035228B (en) 2020-08-28 2020-08-28 Resource scheduling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010889940.6A CN112035228B (en) 2020-08-28 2020-08-28 Resource scheduling method and device

Publications (2)

Publication Number Publication Date
CN112035228A CN112035228A (en) 2020-12-04
CN112035228B true CN112035228B (en) 2024-04-12

Family

ID=73587061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010889940.6A Active CN112035228B (en) 2020-08-28 2020-08-28 Resource scheduling method and device

Country Status (1)

Country Link
CN (1) CN112035228B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565093A (en) * 2020-12-11 2021-03-26 山东大学 Method and storage medium for realizing micro-service route dynamic change based on memory database
CN114697395A (en) * 2020-12-11 2022-07-01 北京神州泰岳软件股份有限公司 Service resource calling execution method, device, service gateway and readable storage medium
CN112596720A (en) * 2020-12-25 2021-04-02 第四范式(北京)技术有限公司 Service operation method and device, electronic equipment and computer storage medium
CN112631804B (en) * 2020-12-25 2024-05-24 杭州涂鸦信息技术有限公司 Service call request processing method based on isolation environment and computer equipment
CN112291104B (en) * 2020-12-30 2021-04-06 望海康信(北京)科技股份公司 Micro-service automatic scaling system, method and corresponding equipment and storage medium
CN113157251B (en) * 2021-02-24 2022-05-31 复旦大学 Resource servitization and customization method for man-machine-object fusion application
CN113821336B (en) * 2021-03-08 2024-04-05 北京京东乾石科技有限公司 Resource allocation method and device, storage medium and electronic equipment
CN113051004B (en) * 2021-03-30 2022-04-15 北京字节跳动网络技术有限公司 Processing method, device and equipment of dependence function and storage medium
CN113778511A (en) * 2021-09-10 2021-12-10 豆盟(北京)科技股份有限公司 Resource allocation method, device, equipment and storage medium
CN114125055B (en) * 2021-11-30 2023-12-12 神州数码系统集成服务有限公司 Multi-protocol automatic adaptation cloud native gateway system control method, system, equipment and application
CN114064062B (en) * 2022-01-17 2022-05-13 北京快成科技有限公司 Kubernetes platform and load balancing component-based default gray level issuing method and device
CN114706622B (en) * 2022-03-10 2023-08-18 北京百度网讯科技有限公司 Method, device, equipment, medium and product for starting model service

Citations (6)

Publication number Priority date Publication date Assignee Title
WO2016101638A1 (en) * 2014-12-23 2016-06-30 国家电网公司 Operation management method for electric power system cloud simulation platform
CN108462656A (en) * 2016-12-09 2018-08-28 中国移动通信有限公司研究院 The resource regulating method and device of integrated services deployment based on container
US10291462B1 (en) * 2017-01-03 2019-05-14 Juniper Networks, Inc. Annotations for intelligent data replication and call routing in a hierarchical distributed system
US10511690B1 (en) * 2018-02-20 2019-12-17 Intuit, Inc. Method and apparatus for predicting experience degradation events in microservice-based applications
CN111158855A (en) * 2019-12-19 2020-05-15 中国科学院计算技术研究所 Lightweight virtual clipping method based on micro-container and cloud function
WO2020124459A1 (en) * 2018-12-19 2020-06-25 深圳晶泰科技有限公司 Method employing hybrid-cloud computing platform to provide serverless function as a service

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10860390B2 (en) * 2017-06-28 2020-12-08 Intel Corporation Microservices architecture
US11748178B2 (en) * 2019-04-02 2023-09-05 Intel Corporation Scalable and accelerated function as a service calling architecture

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
WO2016101638A1 (en) * 2014-12-23 2016-06-30 国家电网公司 Operation management method for electric power system cloud simulation platform
CN108462656A (en) * 2016-12-09 2018-08-28 中国移动通信有限公司研究院 The resource regulating method and device of integrated services deployment based on container
US10291462B1 (en) * 2017-01-03 2019-05-14 Juniper Networks, Inc. Annotations for intelligent data replication and call routing in a hierarchical distributed system
US10511690B1 (en) * 2018-02-20 2019-12-17 Intuit, Inc. Method and apparatus for predicting experience degradation events in microservice-based applications
WO2020124459A1 (en) * 2018-12-19 2020-06-25 深圳晶泰科技有限公司 Method employing hybrid-cloud computing platform to provide serverless function as a service
CN111158855A (en) * 2019-12-19 2020-05-15 中国科学院计算技术研究所 Lightweight virtual clipping method based on micro-container and cloud function

Non-Patent Citations (1)

Title
Runtime deployment optimization for microservice systems; Xu Chenjie; Zhou Xiang; Peng Xin; Zhao Wenyun; Computer Applications and Software (Issue 10); full text *

Also Published As

Publication number Publication date
CN112035228A (en) 2020-12-04

Similar Documents

Publication Publication Date Title
CN112035228B (en) Resource scheduling method and device
Palade et al. An evaluation of open source serverless computing frameworks support at the edge
CN109120678B (en) Method and apparatus for service hosting of distributed storage system
US10175957B1 (en) System and method for pervasive software platform-based model driven architecture application generator
CN110413288B (en) Application deployment method, device, server and storage medium
Yang et al. A profile-based approach to just-in-time scalability for cloud applications
CN109067890B (en) CDN node edge computing system based on docker container
Caromel et al. ProActive: an integrated platform for programming and running applications on grids and P2P systems
US20170185507A1 (en) Processing special requests at dedicated application containers
US20170123777A1 (en) Deploying applications on application platforms
CN111190586A (en) Software development framework building and using method, computing device and storage medium
Da et al. Kalimucho: middleware for mobile applications
CN112394947A (en) Information system based on micro-service architecture
Kehrer et al. TOSCA-based container orchestration on Mesos: two-phase deployment of cloud applications using container-based artifacts
CN111090423A (en) Webhook framework system and method for realizing active calling and event triggering
CN112698921A (en) Logic code operation method and device, computer equipment and storage medium
Yangui et al. An OCCI compliant model for PaaS resources description and provisioning
CN114168179A (en) Micro-service management method, device, computer equipment and storage medium
CN110457132B (en) Method and device for creating functional object and terminal equipment
CN113419818B (en) Basic component deployment method, device, server and storage medium
CN116204239A (en) Service processing method, device and computer readable storage medium
Tang et al. Application centric lifecycle framework in cloud
CN110011827A (en) Towards doctor conjuncted multi-user's big data analysis service system and method
CN115567526B (en) Data monitoring method, device, equipment and medium
CN115357198B (en) Mounting method and device of storage volume, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant