CN117648175A - Service execution method and device based on dynamic algorithm selection and electronic equipment - Google Patents


Info

Publication number
CN117648175A
Authority
CN
China
Prior art keywords
computing
calculation
resource
information
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410128326.6A
Other languages
Chinese (zh)
Other versions
CN117648175B (en)
Inventor
杨浩
曹阳
杨书天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202410128326.6A priority Critical patent/CN117648175B/en
Publication of CN117648175A publication Critical patent/CN117648175A/en
Application granted
Publication of CN117648175B publication Critical patent/CN117648175B/en
Legal status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Stored Programmes (AREA)

Abstract

The specification discloses a service execution method and apparatus based on dynamic algorithm selection, and an electronic device. The method comprises the following steps: a business engineering unit receives a service request for a target service and acquires the computing power dependency information required for executing the target service, the computing power dependency information characterizing the software resources required for executing the target service; a computing job task corresponding to the target service is determined based on the computing power dependency information; a resource scheduling unit determines a current resource scheduling algorithm according to the resource demand information corresponding to the computing job task at the current moment and the computing power information corresponding to each computing resource; based on the resource scheduling algorithm, computing resources of a computing device are invoked to execute the computing job task and obtain a computation result; and the target service is executed according to the computation result. The scheme realizes dynamic selection of the scheduling algorithm, improves the flexibility of the system, and better satisfies user requirements.

Description

Service execution method and device based on dynamic algorithm selection and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a service execution method and apparatus based on dynamic algorithm selection, and an electronic device.
Background
With the continued development of machine learning algorithms, the demand for computing resources has increased. It is particularly important to provide users with packaged computing job submission capabilities and to be able to efficiently schedule underlying computing resources to run computing jobs.
However, once an application is deployed, existing methods can only schedule computing resources with a fixed resource scheduling algorithm, and bringing a new scheduling algorithm online requires re-releasing the application, so computing resources cannot be utilized reasonably and it is difficult to satisfy users' service requirements.
Therefore, how to improve the flexibility of computing resource scheduling and utilize computing resources reasonably, so as to satisfy user requirements, is an urgent problem to be solved.
Disclosure of Invention
The present disclosure provides a service execution method, apparatus, storage medium and electronic device based on dynamic algorithm selection, so as to partially solve the foregoing problems in the prior art.
The technical scheme adopted in the specification is as follows:
this specification provides a service execution method based on dynamic algorithm selection, applied to a server provided with a business engineering unit, a resource scheduling unit and a computing resource unit, the method comprising:
receiving, through the business engineering unit, a service request for a target service, and acquiring, according to the service request, the computing power dependency information required for executing the target service, wherein the computing power dependency information is used for characterizing the software resources required for executing the target service;
determining a computing job task corresponding to the target service based on the computing power dependency information;
determining, through the resource scheduling unit, a current resource scheduling algorithm according to the resource demand information corresponding to the computing job task at the current moment and the computing power information corresponding to each computing resource;
invoking, based on the resource scheduling algorithm, computing resources of a computing device to execute the computing job task according to the computing resources and obtain a computation result;
and executing the target service according to the computation result.
Optionally, determining, through the resource scheduling unit, a current resource scheduling algorithm according to the resource demand information corresponding to the computing job task at the current moment and the computing power information corresponding to each computing resource specifically comprises:
invoking, through the resource scheduling unit, a configuration middleware preset in the server;
reading a preset scheduling algorithm selection rule through the configuration middleware;
and determining, based on the scheduling algorithm selection rule, the current resource scheduling algorithm according to the resource demand information corresponding to the computing job task and the computing power information corresponding to each computing resource.
Optionally, invoking computing resources of the computing device based on the resource scheduling algorithm to execute the computing job task according to the computing resources and obtain a computation result specifically comprises:
sending, through the resource scheduling unit, the computation result to a message middleware preset in the server;
generating, through the message middleware, execution status information according to the computation result, and forwarding the execution status information to the business engineering unit;
and receiving the execution status information through the business engineering unit, and executing a business operation according to the execution status information.
Optionally, the server is further provided with a scheduling engineering unit and an algorithm management unit;
before determining the current resource scheduling algorithm according to the resource demand information corresponding to the computing job task and the computing power information corresponding to each computing resource, the method further comprises:
acquiring, through the scheduling engineering unit, service provider interface (SPI) information set by a user;
and sending a binary package of the SPI information to the algorithm management unit, and updating, through the algorithm management unit, the coordinate information corresponding to the binary package to a registration middleware preset in the server.
Optionally, determining, through the resource scheduling unit, a current resource scheduling algorithm according to the resource demand information corresponding to the computing job task at the current moment and the computing power information corresponding to each computing resource specifically comprises:
sending an information acquisition request to the scheduling engineering unit through the resource scheduling unit, so that the scheduling engineering unit acquires, from the registration middleware according to the information acquisition request, the coordinate information of the target binary package corresponding to the current resource scheduling algorithm;
acquiring the target binary package based on the coordinate information, creating a loader corresponding to the current resource scheduling algorithm, loading the target binary package through the loader to obtain the target SPI interface information corresponding to the current resource scheduling algorithm, and sending the target SPI interface information to the resource scheduling unit;
and executing, through the resource scheduling unit, the current resource scheduling algorithm based on the target SPI interface information.
Optionally, before acquiring, according to the service request, the computing power dependency information required for executing the target service, the method further comprises:
acquiring, through the business engineering unit, the target application programming interfaces (APIs) that the user has selected for the software development kit (SDK) of the computing power dependency information, and generating an API manifest script based on the target APIs;
and determining, according to the API manifest script, the SDK source code corresponding to the computing power dependency information in a preset Git repository, wherein the Git repository stores the SDK source code corresponding to the computing power dependency information required for executing different services.
Optionally, the server is further provided with a continuous integration and release unit, and the method further comprises:
verifying the target SDK source code through the continuous integration and release unit, and compiling the target SDK source code after it passes the verification to obtain an executable file;
testing the executable file, and packaging the target SDK source code after the executable file passes the test to obtain an SDK package file, which serves as the computing power dependency information;
and sending the computing power dependency information to the business engineering unit after a download instruction is received.
The present specification provides a service execution device based on dynamic algorithm selection, including:
a receiving module, configured to receive, through the business engineering unit, a service request for a target service, and acquire, according to the service request, the computing power dependency information required for executing the target service, wherein the computing power dependency information is used for characterizing the software resources required for executing the target service;
a first determining module, configured to determine a computing job task corresponding to the target service based on the computing power dependency information;
a second determining module, configured to determine, through the resource scheduling unit, a current resource scheduling algorithm according to the resource demand information corresponding to the computing job task at the current moment and the computing power information corresponding to each computing resource;
an invoking module, configured to invoke computing resources of a computing device based on the resource scheduling algorithm, so as to execute the computing job task according to the computing resources and obtain a computation result;
and an execution module, configured to execute the target service according to the computation result.
The present specification provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the above service execution method based on dynamic algorithm selection.
The present specification provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the above service execution method based on dynamic algorithm selection.
At least one of the above technical solutions adopted in this specification can achieve the following beneficial effects:
in the service execution method based on dynamic algorithm selection provided in this specification, the server receives, through the business engineering unit, a service request for a target service, and acquires, according to the service request, the computing power dependency information required for executing the target service, the computing power dependency information characterizing the software resources required for executing the target service; determines a computing job task corresponding to the target service based on the computing power dependency information; determines, through the resource scheduling unit, a current resource scheduling algorithm according to the resource demand information corresponding to the computing job task at the current moment and the computing power information corresponding to each computing resource; invokes, based on the resource scheduling algorithm, computing resources of the computing device to execute the computing job task according to the computing resources and obtain a computation result; and executes the target service according to the computation result.
According to the method, before the computing resource is called, the resource scheduling algorithm can be updated in real time based on the resource demand information corresponding to the current target service and the computing power information of the computing equipment, so that the resource scheduling algorithm is matched with the actual situation in the service execution process, the flexibility of resource scheduling is ensured, the utilization rate of the computing resource is fully improved, and the user demands are further met.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate and explain the exemplary embodiments of the present specification and their description, are not intended to limit the specification unduly. In the drawings:
fig. 1 is a schematic flow chart of a service execution method based on dynamic algorithm selection provided in the present specification;
FIG. 2 is a schematic diagram of an SDK acquisition process for the computing power dependency information provided in the present specification;
FIG. 3 is a schematic diagram of a hot deployment process of a resource scheduling algorithm provided in the present specification;
FIG. 4 is a schematic diagram of an overall invocation of a computing resource provided herein;
FIG. 5 is a schematic diagram of a service execution device based on dynamic algorithm selection provided in the present specification;
Fig. 6 is a schematic diagram of an electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
Most current technical solutions divide the system into a computing power application layer, a service operation layer, a management and orchestration layer, and an infrastructure layer. The computing power application layer carries computing job demands from various research fields and business directions. The service operation layer mainly provides capabilities such as computing power registration, service application, computing power encapsulation, and service charging. The management and orchestration layer is responsible for scheduling, orchestrating and controlling the computing resources. The lowest layer is the infrastructure layer, which provides computing, storage and network resources to the upper layers.
Based on the above architecture, when a new scheduling algorithm needs to be brought online at the management and orchestration layer, the corresponding service needs to be re-released, so bringing a scheduling algorithm online is a long and time-consuming process.
There is also no way to switch flexibly between scheduling algorithms according to a predetermined strategy: once a scheduling algorithm is selected, it cannot be changed unless the application is re-released. In practice, however, different scheduling algorithms need to be adopted in different situations so as to use resources most reasonably.
In many cases, the service operation layer only wishes to expose the encapsulation of part of the computing power capability to the computing power service layer, but current technical solutions encapsulate all of the computing power capability and then control it through a permission system. As a result, a user may see a computing power capability but be unable to use it, which leads to a poor experience.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a flow chart of a service execution method based on dynamic algorithm selection provided in the present specification, which includes the following steps:
s101: and receiving a service request aiming at a target service through the service engineering unit, and acquiring calculation force dependency information required by executing the target service according to the service request, wherein the calculation force dependency information is used for representing software resources required by executing the target service.
The service execution method based on dynamic algorithm selection provided in this specification improves on the existing solutions: flexible encapsulation of computing power is achieved through dynamic packaging, which solves the problem in existing solutions that only the full computing power capability can be encapsulated and all of it is exposed to users, and hot deployment of the scheduling algorithm is used to solve the problem of computing power scheduling. In addition, unlike traditional solutions in which the scheduling algorithm cannot be adjusted dynamically, in this method the server, while running, performs algorithm matching according to the runtime parameters and the configuration in the configuration center and selects the corresponding scheduling algorithm for resource scheduling, which greatly improves scheduling flexibility and adapts to a much wider range of actual situations.
In this specification, the execution body implementing the service execution method based on dynamic algorithm selection may be the server of an application program after it has been released and deployed, where a business engineering unit, a resource scheduling unit, a computing resource unit, a scheduling engineering unit, an algorithm management unit, and a continuous integration and release unit are disposed in the server.
In response to a designated operation by a user, the server may start the business engineering unit, then receive a service request for a target service through the business engineering unit, and acquire, according to the service request, the computing power dependency information required for executing the target service.
In this specification, the target service may include: image recognition, word processing, unmanned driving (e.g., autonomous navigation), information recommendation, anomaly detection, etc., which is not specifically limited in this specification.
The computing power dependency information may be a packaged software development kit (Software Development Kit, SDK) file used for characterizing the software resources required for executing the target service, thereby providing the corresponding computing power capability. It may include a management interface, a computing framework, and auxiliary tools for simplifying operations and enhancing performance, and may also include computing power capabilities corresponding to other software functions, which is not specifically limited in this specification.
Specifically, the server may acquire, through the business engineering unit, the target application programming interfaces (APIs) that the user has selected for the SDK of the computing power dependency information, generate an API manifest script based on the target APIs, and then determine, according to the API manifest script, the SDK source code corresponding to the computing power dependency information in a preset Git repository. The Git repository stores the full set of SDK source code corresponding to the computing power dependency information required for executing different services, and the business engineering unit may select from it, based on the API manifest script, the SDK source code required by the current target service.
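As an illustration of this selection step, the following is a minimal Java sketch under stated assumptions: the manifest format, the SdkModule record, and the module and API names are illustrative only and are not defined by this specification. It shows the idea that only the SDK source modules whose exported APIs appear in the user's manifest are taken from the full repository.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: pick SDK source modules out of the full Git repository
// based on the APIs the user selected (names and structure are illustrative).
public class ApiManifestDemo {

    // One SDK source module in the repository and the APIs it exports.
    record SdkModule(String path, List<String> exportedApis) {}

    // Keep only the modules that export at least one API from the manifest.
    static List<SdkModule> selectModules(List<String> apiManifest, List<SdkModule> allModules) {
        return allModules.stream()
                .filter(m -> m.exportedApis().stream().anyMatch(apiManifest::contains))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // APIs selected by the user for the target service (illustrative names).
        List<String> manifest = List.of("image.recognize", "job.submit");

        // Full set of SDK source modules stored in the Git repository.
        List<SdkModule> repo = List.of(
                new SdkModule("sdk/vision", List.of("image.recognize", "image.classify")),
                new SdkModule("sdk/job", List.of("job.submit", "job.cancel")),
                new SdkModule("sdk/billing", List.of("billing.charge")));

        // Only sdk/vision and sdk/job are packaged; sdk/billing is not exposed.
        selectModules(manifest, repo).forEach(m -> System.out.println(m.path()));
    }
}
```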
The server may then verify the target SDK source code through the continuous integration and release unit, and compile the target SDK source code after it passes the verification to obtain an executable file.
The executable file is then tested, and after the test passes, the target SDK source code is packaged to obtain an SDK package file, which serves as the computing power dependency information and is uploaded to a storage middleware preset in the server. After receiving a download instruction, the storage middleware can send the computing power dependency information to the business engineering unit.
For ease of understanding, the present specification provides a schematic diagram of the SDK acquisition process for the computing power dependency information, as shown in FIG. 2.
FIG. 2 is a schematic diagram of the SDK acquisition process for the computing power dependency information provided in the present specification.
The user can select the required APIs in the SDK, and the business engineering unit generates a corresponding API manifest script based on the APIs selected by the user. A code change monitor continuously monitors changes in the Git repository; when a code change occurs, it passes the change information to a trigger. After the trigger receives the information, it passes it to the continuous integration and release unit, which then pulls the latest code from the Git repository. A code updating unit updates the code based on the previously generated API manifest script and the latest code in the Git repository to obtain the target SDK source code.
The continuous integration and release unit performs a static code check on the target SDK source code to find common known errors and vulnerabilities. After the static check passes, the code is compiled to generate an executable binary file, which is then tested by a test unit. After the test is completed, the SDK packaging operation is carried out to obtain the SDK package file required by the user. The file is uploaded to the storage middleware, and a download link for the file is generated in the storage middleware. Finally, an SDK package version provided to the user is generated, which includes information such as the download link and the version number.
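The ordering of these stages can be sketched as follows; this is a minimal illustration only, in which the stage methods, the SdkRelease record, the package name and the storage URL are assumptions rather than an actual build-system API.

```java
// Minimal sketch of the continuous integration and release pipeline described above;
// all names and the storage URL are illustrative assumptions.
public class SdkPipelineSketch {

    record SdkRelease(String version, String downloadUrl) {}

    static void staticCheck(String source)  { System.out.println("static check: " + source); }
    static byte[] compile(String source)    { System.out.println("compile: " + source); return new byte[0]; }
    static void test(byte[] binary)         { System.out.println("test binary, size=" + binary.length); }
    static String packageSdk(String source) { return "computing-power-sdk-1.0.0.jar"; }
    static String upload(String sdkPackage) { return "https://storage.example.com/" + sdkPackage; }

    public static void main(String[] args) {
        String targetSdkSource = "sdk/vision, sdk/job";   // result of the API manifest selection
        staticCheck(targetSdkSource);                     // find known errors and vulnerabilities
        byte[] executable = compile(targetSdkSource);     // compile only after the check passes
        test(executable);                                 // run the tests against the binary
        String sdkPackage = packageSdk(targetSdkSource);  // produce the SDK package file
        String link = upload(sdkPackage);                 // upload to the storage middleware
        SdkRelease release = new SdkRelease("1.0.0", link);
        System.out.println("release provided to the user: " + release);
    }
}
```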
S102: and determining a computing job task corresponding to the target service based on the computing force dependency information.
S103: and determining a current resource scheduling algorithm according to the resource demand information corresponding to the calculation job task at the current moment and the calculation power information corresponding to each calculation resource by the resource scheduling unit.
The server may then create the computing job task corresponding to the target service based on the computing power dependency information, where the computing job task may include: the specific requirements and goals of the computing task (e.g., the data to be processed, the computing operations to be performed, etc.), the computing code (including written programs, scripts, or existing libraries and tools), the computing resource requirements, and so on.
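One possible shape of such a task is sketched below; this is an illustrative assumption only, and every field name is hypothetical rather than prescribed by this specification.

```java
import java.util.List;

// Illustrative sketch only: one possible shape of a computing job task built from
// the computing power dependency information (all field names are assumptions).
public record ComputingJobTask(
        String targetService,        // e.g. "image-recognition"
        String inputDataUri,         // the data to be processed
        List<String> operations,     // the computing operations to be performed
        String entryScript,          // computing code: program, script or library entry point
        String resourceRequirement)  // e.g. "2 GPUs, 16 GB memory"
{}
```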
Further, the server may invoke, through the resource scheduling unit, a configuration middleware preset in the server, read a preset scheduling algorithm selection rule through the configuration middleware, and then determine, based on the scheduling algorithm selection rule, the current resource scheduling algorithm according to the resource demand information corresponding to the computing job task and the computing power information corresponding to each computing resource.
The scheduling algorithm selection rule determines how the configuration middleware selects a resource scheduling algorithm according to the resource demand information and the computing power information. In this specification, the resource scheduling algorithm may include: a first-come-first-served scheduling algorithm, a shortest-job-first scheduling algorithm, a time-slice round-robin scheduling algorithm, a multilevel feedback queue scheduling algorithm, a proportional fair scheduling algorithm, and the like.
As the resource demand and the computing power information of the computing devices change continuously during execution of the computing job, the configuration middleware can dynamically switch to the resource scheduling algorithm that matches the current situation to schedule the computing resources of the computing devices.
For example, when multiple computing resources satisfy the resource demand at the same time, the selected resource scheduling algorithm may preferentially invoke the computing resources with higher computing power and then those with lower computing power.
In addition, for some special services, such as services marked as important by users or services with high real-time requirements, the resource scheduling algorithm selected by the configuration middleware may preferentially invoke computing resources with high computing power.
For services marked as unimportant or whose execution efficiency is not a concern, computing resources with lower computing power may be invoked to execute the corresponding computing tasks.
The computing power demand information may include:
data scale requirements: the larger the scale of the data to be processed, the more computing power is required;
algorithm complexity requirements: different algorithms and models require different computing resources, and some complex algorithms and models may require more computing resources and time to execute;
real-time requirements: some applications require real-time processing of data and place high demands on the response speed of the computing power, so the computing power demand information needs to take the real-time requirements into account to ensure that tasks can be completed in time;
concurrency requirements: some applications need to process multiple tasks or data streams simultaneously and place high demands on concurrent processing capability, so the computing power demand information needs to take the concurrency requirements into account to ensure that tasks can be completed simultaneously.
The computing power information of the computing device may include its CPU performance, GPU performance, memory bandwidth and capacity, and storage device performance, to which this specification is not limited in detail.
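To make the selection-rule idea concrete, the following is a minimal sketch of one possible rule of the kind that could be read from the configuration middleware. The field names, the enum values and the thresholds are illustrative assumptions, not the rule format defined by this specification.

```java
// Minimal sketch of a scheduling-algorithm selection rule; field names, enum values
// and thresholds are illustrative assumptions only.
public class RuleSelectorSketch {

    enum SchedulingAlgorithm { FIRST_COME_FIRST_SERVED, SHORTEST_JOB_FIRST,
                               ROUND_ROBIN, MULTILEVEL_FEEDBACK_QUEUE, PROPORTIONAL_FAIR }

    record ResourceDemand(long dataSizeMb, boolean realTime, int concurrency, boolean importantService) {}
    record ComputingPower(int cpuCores, int gpuCount, long memoryMb) {}

    // One possible rule: real-time or important services get responsiveness-oriented
    // algorithms; large batch or highly concurrent demands fall back to throughput policies.
    static SchedulingAlgorithm select(ResourceDemand demand, ComputingPower available) {
        if (demand.realTime() || demand.importantService()) {
            return available.gpuCount() > 0 ? SchedulingAlgorithm.PROPORTIONAL_FAIR
                                            : SchedulingAlgorithm.ROUND_ROBIN;
        }
        if (demand.concurrency() > 8)   return SchedulingAlgorithm.MULTILEVEL_FEEDBACK_QUEUE;
        if (demand.dataSizeMb() < 1024) return SchedulingAlgorithm.SHORTEST_JOB_FIRST;
        return SchedulingAlgorithm.FIRST_COME_FIRST_SERVED;
    }

    public static void main(String[] args) {
        ResourceDemand demand = new ResourceDemand(512, true, 2, false);
        ComputingPower node = new ComputingPower(32, 2, 131072);
        System.out.println("selected algorithm: " + select(demand, node));
    }
}
```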
Before the configuration middleware determines the current resource scheduling algorithm, the server may hot-deploy the resource scheduling algorithm in advance.
Specifically, the server may acquire, through the scheduling engineering unit, the service provider interface (Service Provider Interface, SPI) information set by the user, where the SPI information is used to implement a resource scheduling algorithm.
The scheduling engineering unit may then send the binary package of the SPI information to the algorithm management unit, and the algorithm management unit updates the coordinate information corresponding to the binary package to the registration middleware preset in the server.
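The registration step can be pictured with the short sketch below; it is a hedged illustration in which the Maven-style coordinates and the in-memory map standing in for the registration middleware are assumptions, not the actual middleware interface.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: the algorithm management unit records, for each scheduling
// algorithm, the coordinates of its binary package in a registration middleware.
public class AlgorithmRegistrySketch {

    record Coordinates(String groupId, String artifactId, String version) {}

    // Stand-in for the registration middleware (a registry service in practice).
    private final Map<String, Coordinates> registry = new ConcurrentHashMap<>();

    public void register(String algorithmName, Coordinates coords) {
        registry.put(algorithmName, coords);   // update coordinate information
    }

    public Coordinates lookup(String algorithmName) {
        return registry.get(algorithmName);    // later used by the scheduling engineering unit
    }

    public static void main(String[] args) {
        AlgorithmRegistrySketch middleware = new AlgorithmRegistrySketch();
        middleware.register("proportional-fair",
                new Coordinates("com.example.scheduler", "proportional-fair-spi", "1.2.0"));
        System.out.println(middleware.lookup("proportional-fair"));
    }
}
```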
After the resource scheduling unit receives the computing job task corresponding to the target service, it may send an information acquisition request to the scheduling engineering unit, so that the scheduling engineering unit acquires, from the registration middleware according to the information acquisition request, the coordinate information of the target binary package corresponding to the current resource scheduling algorithm.
The scheduling engineering unit may acquire the target binary package based on the coordinate information and create a loader corresponding to the current resource scheduling algorithm, so as to load the target binary package through the loader, obtain the target SPI interface information corresponding to the current resource scheduling algorithm, and send the target SPI interface information to the resource scheduling unit. The resource scheduling unit may then execute the current resource scheduling algorithm based on the target SPI interface information.
For ease of understanding, the present disclosure provides a schematic diagram of the hot deployment process of the resource scheduling algorithm, as shown in FIG. 3.
Fig. 3 is a schematic diagram of a hot deployment procedure of a resource scheduling algorithm provided in the present specification.
The server may start the scheduling engineering unit and register the customized SPI interface information with the algorithm release management unit, where the customized SPI interface is implemented by an algorithm developer.
The code implementing the customized SPI interface is then compiled into a binary package and released to the algorithm release management unit, which updates the algorithm information to the registration middleware.
The scheduling engineering unit continuously monitors the registration middleware and, after the algorithm is updated, acquires the binary package coordinates from the registration middleware. A rule selector in the scheduling engineering unit determines the coordinates of the selected binary file according to the current parameters, i.e., performs algorithm selection. After a new algorithm package is selected, the old loader is destroyed, the updated binary file is obtained from the algorithm release management unit, and a new loader is created.
Finally, the binary file is loaded with the new loader to complete the hot deployment of the algorithm.
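The loader swap can be sketched with the standard Java SPI mechanism (java.util.ServiceLoader) and a dedicated class loader per algorithm package; this is a minimal sketch under stated assumptions, in which the ResourceSchedulingSpi interface and the jar path are hypothetical and the specification does not mandate this particular mechanism.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;
import java.util.ServiceLoader;

// Minimal sketch of the hot-deployment step: close the old loader, create a new
// one for the selected binary package, and discover the SPI implementation in it.
public class HotDeploySketch {

    // Hypothetical customized SPI interface implemented by algorithm developers.
    public interface ResourceSchedulingSpi {
        String schedule(String computingJobTask);
    }

    private URLClassLoader currentLoader;   // loader bound to the currently selected algorithm

    public ResourceSchedulingSpi deploy(Path targetBinaryPackage) throws Exception {
        if (currentLoader != null) {
            currentLoader.close();          // destroy the old loader so its classes can be unloaded
        }
        currentLoader = new URLClassLoader(
                new URL[]{targetBinaryPackage.toUri().toURL()},
                HotDeploySketch.class.getClassLoader());
        // Discover the implementation declared in META-INF/services of the package.
        return ServiceLoader.load(ResourceSchedulingSpi.class, currentLoader)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("no scheduling SPI found in package"));
    }

    public static void main(String[] args) throws Exception {
        // The coordinates resolved from the registration middleware would point at this jar.
        Path jar = Path.of("scheduler-proportional-fair-1.2.0.jar");   // illustrative path
        ResourceSchedulingSpi spi = new HotDeploySketch().deploy(jar);
        System.out.println(spi.schedule("example computing job task"));
    }
}
```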
S104: and calling computing resources of computing equipment based on the resource scheduling algorithm to execute the computing job task according to the computing resources to obtain a computing result.
S105: and executing the target service according to the calculation result.
The server may send the computation result, through the resource scheduling unit, to a message middleware preset in the server; the message middleware then generates execution status information according to the computation result and forwards the execution status information to the business engineering unit.
After receiving the execution status information, the business engineering unit can execute business operations according to the execution status information, obtain a business execution result, and feed the result back to the user.
For example, the business engineering unit may parse the received message to extract key information, such as the operation type, operation object and operation parameters, and verify that the source, content and format of the message are correct to ensure its validity. It then performs business logic processing, executing the corresponding business logic according to the operation type and operation object in the message; this may include querying a database, updating data, invoking an external interface, and so on. It then responds to the message, for example by returning a result, updating a state, or triggering other operations. In addition, the server may record the course and results of the business operations for subsequent analysis and troubleshooting. If an exception occurs during business logic processing, appropriate exception handling is required, such as logging and sending an alarm.
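The example above can be condensed into the following hedged sketch of a message handler in the business engineering unit; the message field names ("operationType", "operationObject") and the operation-type values are assumptions for illustration, not a format defined by this specification.

```java
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch of how the business engineering unit might handle an
// execution-status message received from the message middleware.
public class ExecutionMessageHandler {

    private static final Logger LOG = Logger.getLogger(ExecutionMessageHandler.class.getName());

    public void onMessage(Map<String, String> message) {
        try {
            // 1. Extract the key information from the message.
            String operationType = message.get("operationType");
            String operationObject = message.get("operationObject");

            // 2. Verify content and format before acting on the message.
            if (operationType == null || operationObject == null) {
                throw new IllegalArgumentException("malformed execution message: " + message);
            }

            // 3. Execute the business logic corresponding to the operation type.
            switch (operationType) {
                case "COMPUTATION_FINISHED" -> LOG.info("update result for " + operationObject);
                case "COMPUTATION_FAILED"   -> LOG.warning("retry or notify user for " + operationObject);
                default                     -> LOG.info("ignore operation type " + operationType);
            }

            // 4. Record the course and result of the business operation for later analysis.
            LOG.info("handled message: " + message);
        } catch (RuntimeException e) {
            // 5. Exception handling: log and, in a real system, raise an alarm.
            LOG.log(Level.SEVERE, "business logic failed for message " + message, e);
        }
    }

    public static void main(String[] args) {
        new ExecutionMessageHandler().onMessage(
                Map.of("operationType", "COMPUTATION_FINISHED", "operationObject", "job-42"));
    }
}
```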
For ease of understanding, the present description provides an overall call process diagram for a computing resource, as shown in FIG. 4.
FIG. 4 is a schematic diagram of an overall invocation of computing resources provided herein.
The business engineering unit imports the packaged computing power SDK package as a dependency and uses the computing power capability to develop the business. The underlying layer of the computing power SDK depends on a computing resource scheduling module, which receives the computing job task passed in by the business engineering unit. The rule selector reads the corresponding algorithm selection rule from the configuration middleware and selects a resource scheduling algorithm based on the current situation, and the resources required by the computing job task are scheduled by the selected resource scheduling algorithm.
The computing resource receives the resource request passed in by the computing resource scheduler and executes the computing job; after execution, the result is returned to the computing resource scheduler, and the execution progress or result information of the computing job is passed to the message middleware, which is responsible for transmitting the execution status information of the computing job. The business engineering unit, having subscribed to the corresponding message in advance, receives the information about the execution of the computing job from the message middleware and performs the corresponding business operation according to the message content.
With this method, after a scheduling algorithm developer finishes code development, the developer can push the code to the algorithm release management unit, and the hot deployment is then completed automatically. The complicated manual operation of re-releasing the application each time is eliminated, which frees up manpower, saves time, and is convenient to use.
The rule selector is introduced to complete flexible selection of the scheduling algorithm and realize quick switching among multiple scheduling algorithms. Replacement of the scheduling algorithm is no longer a simple swap of old for new; selection is performed based on the configuration in the configuration center, and switching among existing scheduling algorithms does not require pushing code again, which reduces repeated work for developers.
Dynamic generation of the computing power SDK is also supported: application engineering developers can select the required APIs in advance to generate an SDK package suited to their application engineering, which reduces the exposure of useless APIs, improves the stability and security of the system, and improves the user experience.
The above describes one or more embodiments of the service execution method based on dynamic algorithm selection in this specification. Based on the same concept, this specification further provides a corresponding service execution device based on dynamic algorithm selection, as shown in FIG. 5.
Fig. 5 is a schematic diagram of a service execution device based on dynamic algorithm selection provided in the present specification, including:
a receiving module 501, configured to receive, through the business engineering unit, a service request for a target service, and acquire, according to the service request, the computing power dependency information required for executing the target service, wherein the computing power dependency information is used for characterizing the software resources required for executing the target service;
a first determining module 502, configured to determine a computing job task corresponding to the target service based on the computing power dependency information;
a second determining module 503, configured to determine, through the resource scheduling unit, a current resource scheduling algorithm according to the resource demand information corresponding to the computing job task at the current moment and the computing power information corresponding to each computing resource;
an invoking module 504, configured to invoke computing resources of a computing device based on the resource scheduling algorithm, so as to execute the computing job task according to the computing resources and obtain a computation result;
and an execution module 505, configured to execute the target service according to the computation result.
Optionally, the second determining module 503 is specifically configured to invoke, through the resource scheduling unit, a configuration middleware preset in the server; read a preset scheduling algorithm selection rule through the configuration middleware; and determine, based on the scheduling algorithm selection rule, the current resource scheduling algorithm according to the resource demand information corresponding to the computing job task and the computing power information corresponding to each computing resource.
Optionally, the invoking module 504 is specifically configured to send, through the resource scheduling unit, the computation result to a message middleware preset in the server; generate, through the message middleware, execution status information according to the computation result, and forward the execution status information to the business engineering unit; and receive the execution status information through the business engineering unit, and execute a business operation according to the execution status information.
Optionally, the server is further provided with: a scheduling engineering unit and an algorithm management unit;
before determining the current resource scheduling algorithm according to the resource demand information corresponding to the computing job task and the computing power information corresponding to each computing resource, the second determining module 503 is further configured to acquire, through the scheduling engineering unit, service provider interface (SPI) information set by a user; and send the binary package of the SPI information to the algorithm management unit, and update, through the algorithm management unit, the coordinate information corresponding to the binary package to a registration middleware preset in the server.
Optionally, the second determining module 503 is specifically configured to send, through the resource scheduling unit, an information acquisition request to the scheduling engineering unit, so that the scheduling engineering unit acquires, from the registration middleware according to the information acquisition request, the coordinate information of the target binary package corresponding to the current resource scheduling algorithm; acquire the target binary package based on the coordinate information, create a loader corresponding to the current resource scheduling algorithm, load the target binary package through the loader to obtain the target SPI interface information corresponding to the current resource scheduling algorithm, and send the target SPI interface information to the resource scheduling unit; and execute, through the resource scheduling unit, the current resource scheduling algorithm based on the target SPI interface information.
Optionally, before acquiring, according to the service request, the computing power dependency information required for executing the target service, the receiving module 501 is further configured to acquire, through the business engineering unit, the target application programming interfaces (APIs) that the user has selected for the software development kit (SDK) of the computing power dependency information, and generate an API manifest script based on the target APIs; and determine, according to the API manifest script, the SDK source code corresponding to the computing power dependency information in a preset Git repository, wherein the Git repository stores the SDK source code corresponding to the computing power dependency information required for executing different services.
Optionally, the server is further provided with a continuous integration and release unit, and the receiving module 501 is further configured to verify the target SDK source code through the continuous integration and release unit, and compile the target SDK source code after it passes the verification to obtain an executable file; test the executable file, and package the target SDK source code after the executable file passes the test to obtain an SDK package file, which serves as the computing power dependency information; and send the computing power dependency information to the business engineering unit after a download instruction is received.
The present specification also provides a computer readable storage medium storing a computer program operable to perform a service execution method based on dynamic algorithm selection as provided in fig. 1 above.
The present specification also provides a schematic structural diagram of an electronic device corresponding to FIG. 1, as shown in FIG. 6. At the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile storage, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it to implement the service execution method based on dynamic algorithm selection described above with reference to FIG. 1. Of course, other implementations, such as logic devices or a combination of hardware and software, are not excluded from the present specification; that is, the execution body of the following processing flows is not limited to each logic unit, but may also be hardware or a logic device.
Improvements to one technology can clearly distinguish between improvements in hardware (e.g., improvements to circuit structures such as diodes, transistors, switches, etc.) and software (improvements to the process flow). However, with the development of technology, many improvements of the current method flows can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain corresponding hardware circuit structures by programming improved method flows into hardware circuits. Therefore, an improvement of a method flow cannot be said to be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the programming of the device by a user. A designer programs to "integrate" a digital system onto a PLD without requiring the chip manufacturer to design and fabricate application-specific integrated circuit chips. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented by using "logic compiler" software, which is similar to the software compiler used in program development and writing, and the original code before the compiling is also written in a specific programming language, which is called hardware description language (Hardware Description Language, HDL), but not just one of the hdds, but a plurality of kinds, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), lava, lola, myHDL, PALASM, RHDL (Ruby Hardware Description Language), etc., VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using several of the hardware description languages described above.
The controller may be implemented in any suitable manner, for example, the controller may take the form of, for example, a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable logic controllers, and embedded microcontrollers, examples of which include, but are not limited to, the following microcontrollers: ARC 625D, atmel AT91SAM, microchip PIC18F26K20, and Silicone Labs C8051F320, the memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in a pure computer readable program code, it is well possible to implement the same functionality by logically programming the method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Such a controller may thus be regarded as a kind of hardware component, and means for performing various functions included therein may also be regarded as structures within the hardware component. Or even means for achieving the various functions may be regarded as either software modules implementing the methods or structures within hardware components.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both non-transitory and non-transitory, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description.

Claims (10)

1. A service execution method based on dynamic algorithm selection, wherein the method is applied to a server provided with a business engineering unit, a resource scheduling unit and a computing resource unit, the method comprising:
receiving, through the business engineering unit, a service request for a target service, and acquiring, according to the service request, computing power dependency information required for executing the target service, wherein the computing power dependency information is used for characterizing software resources required for executing the target service;
determining a computing job task corresponding to the target service based on the computing power dependency information;
determining, through the resource scheduling unit, a current resource scheduling algorithm according to resource demand information corresponding to the computing job task at the current moment and computing power information corresponding to each computing resource;
invoking, based on the resource scheduling algorithm, computing resources of a computing device to execute the computing job task according to the computing resources and obtain a computation result;
and executing the target service according to the computation result.
2. The method according to claim 1, wherein determining, by the resource scheduling unit, the current resource scheduling algorithm according to the resource demand information corresponding to the computing job task at the current moment and the computing power information corresponding to each computing resource specifically comprises:
calling, by the resource scheduling unit, configuration middleware preset in the server;
reading, by the configuration middleware, a preset scheduling algorithm selection rule;
and determining, based on the scheduling algorithm selection rule, the current resource scheduling algorithm according to the resource demand information corresponding to the computing job task and the computing power information corresponding to each computing resource.
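A minimal sketch of the rule-driven selection in claim 2, assuming the configuration middleware can be emulated with a java.util.Properties object and that the rule boils down to a "busy cluster" test; the property keys and the test itself are hypothetical.

```java
import java.util.Map;
import java.util.Properties;

// Sketch of claim 2: a selection rule read from configuration middleware (emulated by Properties)
// is applied to the current demand and per-resource capacity to name the algorithm to use.
public class RuleBasedAlgorithmSelector {

    private final Properties configMiddleware;   // stand-in for the preset configuration middleware

    public RuleBasedAlgorithmSelector(Properties configMiddleware) {
        this.configMiddleware = configMiddleware;
    }

    /** Returns the name of the scheduling algorithm chosen for the current job and cluster state. */
    public String select(int jobDemand, Map<String, Integer> freeCapacityPerResource) {
        // Read the preset selection rule: which algorithm to use when the cluster is busy vs. idle.
        String busyAlgorithm = configMiddleware.getProperty("scheduler.algorithm.busy", "best-fit");
        String idleAlgorithm = configMiddleware.getProperty("scheduler.algorithm.idle", "most-free");

        int totalFree = freeCapacityPerResource.values().stream().mapToInt(Integer::intValue).sum();
        // Crude "cluster is busy" test for the sketch: little headroom left beyond this job's demand.
        boolean busy = totalFree < 2 * jobDemand;
        return busy ? busyAlgorithm : idleAlgorithm;
    }

    public static void main(String[] args) {
        Properties rules = new Properties();
        rules.setProperty("scheduler.algorithm.busy", "best-fit");
        RuleBasedAlgorithmSelector selector = new RuleBasedAlgorithmSelector(rules);
        System.out.println(selector.select(8, Map.of("gpu-node-1", 4, "gpu-node-2", 2)));
    }
}
```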
3. The method according to claim 1, wherein after invoking the computing resources of the computing device based on the resource scheduling algorithm to execute the computing job task and obtain the computation result, the method further comprises:
sending, by the resource scheduling unit, the computation result to message middleware preset in the server;
generating, by the message middleware, execution status information according to the computation result, and forwarding the execution status information to the business engineering unit;
and receiving, by the business engineering unit, the execution status information, and executing a business operation according to the execution status information.
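The message-middleware hand-off in claim 3 can be pictured with an in-process queue standing in for the broker; the ExecutionStatus record and the method names below are assumptions made for illustration, not the patented interface.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// In-process stand-in for claim 3's message middleware: the resource scheduling unit publishes
// the computation result, the channel wraps it as execution-status information, and the business
// engineering unit consumes it to carry out the business operation.
public class ExecutionStatusChannel {

    /** Execution-status information derived from a computation result. */
    record ExecutionStatus(String jobId, boolean success, String detail) {}

    private final BlockingQueue<ExecutionStatus> queue = new LinkedBlockingQueue<>();

    /** Scheduler side: wrap the raw result and post it to the middleware. */
    public void publishResult(String jobId, boolean success, String detail) {
        queue.offer(new ExecutionStatus(jobId, success, detail));
    }

    /** Business engineering side: block until status information arrives, then act on it. */
    public ExecutionStatus awaitStatus() throws InterruptedException {
        return queue.take();
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutionStatusChannel channel = new ExecutionStatusChannel();
        channel.publishResult("job-42", true, "computation finished");
        ExecutionStatus status = channel.awaitStatus();
        System.out.println("business operation triggered by: " + status);
    }
}
```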
4. The method according to claim 1, wherein the server is further provided with a scheduling engineering unit and an algorithm management unit;
before determining the current resource scheduling algorithm according to the resource demand information corresponding to the computing job task and the computing power information corresponding to each computing resource, the method further comprises:
acquiring, by the scheduling engineering unit, Service Provider Interface (SPI) information set by a user;
and sending a binary package of the SPI information to the algorithm management unit, and updating, by the algorithm management unit, coordinate information corresponding to the binary package to registration middleware preset in the server.
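A sketch of the registration step in claim 4, assuming the registration middleware can be reduced to a name-to-coordinates map and that coordinates follow a Maven-style group:artifact:version form; both are illustrative assumptions rather than the patented registry.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for claim 4's registration middleware: after the SPI binary package is
// uploaded, the algorithm management unit records the package's coordinates under the algorithm's
// name so the resource scheduling unit can find the package later.
public class AlgorithmRegistrationMiddleware {

    private final Map<String, String> coordinatesByAlgorithm = new ConcurrentHashMap<>();

    /** Called by the algorithm management unit when a new SPI binary package is registered. */
    public void register(String algorithmName, String packageCoordinates) {
        coordinatesByAlgorithm.put(algorithmName, packageCoordinates);
    }

    /** Called later by the resource scheduling unit to locate the target binary package. */
    public String coordinatesOf(String algorithmName) {
        return coordinatesByAlgorithm.get(algorithmName);
    }

    public static void main(String[] args) {
        AlgorithmRegistrationMiddleware registry = new AlgorithmRegistrationMiddleware();
        registry.register("best-fit", "com.example.scheduling:best-fit-algorithm:1.0.0");
        System.out.println(registry.coordinatesOf("best-fit"));
    }
}
```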
5. The method according to claim 4, wherein determining, by the resource scheduling unit, the current resource scheduling algorithm according to the resource demand information corresponding to the computing job task at the current moment and the computing power information corresponding to each computing resource specifically comprises:
sending, by the resource scheduling unit, an information acquisition request to the scheduling engineering unit, so that the scheduling engineering unit acquires, from the registration middleware according to the information acquisition request, coordinate information of a target binary package corresponding to the current resource scheduling algorithm;
acquiring the target binary package based on the coordinate information, creating a loader corresponding to the current resource scheduling algorithm, loading the target binary package through the loader to obtain target SPI interface information corresponding to the current resource scheduling algorithm, and sending the target SPI interface information to the resource scheduling unit;
and executing, by the resource scheduling unit, the current resource scheduling algorithm based on the target SPI interface information.
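Claim 5's loader-based step maps naturally onto the standard Java Service Provider Interface machinery; the sketch below assumes a hypothetical SchedulingAlgorithmSpi contract and loads the target binary package (a jar) through a dedicated URLClassLoader and ServiceLoader. In a real system the SPI interface would live in a shared artifact visible to both the host and the plugin jar.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;
import java.util.Map;
import java.util.ServiceLoader;

// Sketch of claim 5's loading step: the target binary package is loaded through a dedicated class
// loader, and the algorithm it declares under META-INF/services is discovered with ServiceLoader.
public class SpiAlgorithmLoader {

    /** Assumed SPI contract that every pluggable scheduling algorithm implements. */
    public interface SchedulingAlgorithmSpi {
        String name();
        String pickResource(int demand, Map<String, Integer> freeCapacity);
    }

    /** Loads the first SchedulingAlgorithmSpi provider declared inside the given jar. */
    public SchedulingAlgorithmSpi load(Path targetBinaryPackage) throws Exception {
        URL[] urls = { targetBinaryPackage.toUri().toURL() };
        // The loader is kept open for the algorithm's lifetime; closing it early would break
        // lazy class loading inside the provider.
        URLClassLoader loader = new URLClassLoader(urls, getClass().getClassLoader());
        return ServiceLoader.load(SchedulingAlgorithmSpi.class, loader)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException(
                        "no SchedulingAlgorithmSpi provider declared in " + targetBinaryPackage));
    }
}
```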
6. The method according to claim 1, wherein before acquiring the computing power dependency information required for executing the target service according to the service request, the method further comprises:
acquiring, by the business engineering unit, a target Application Programming Interface (API) that is selected by a user and required by a Software Development Kit (SDK) of the computing power dependency information, and generating an API list script based on the target API;
and determining, according to the API list script, target SDK source code corresponding to the computing power dependency information in a preset Git repository, wherein the Git repository stores SDK source code corresponding to the computing power dependency information required for executing different services.
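One way to picture the API list script of claim 6: write the selected API identifiers to a plain text file and map each one onto an SDK source directory inside a Git checkout. The sdk/&lt;api-id&gt; layout and the one-identifier-per-line format are assumptions made purely for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch of claim 6: the user's selected APIs become an "API list script", which is then used to
// resolve the matching SDK source directories in a checked-out Git repository.
public class ApiListResolver {

    /** Writes the selected target APIs as an API list script, one identifier per line. */
    public static Path writeApiList(List<String> selectedApis, Path scriptFile) throws IOException {
        return Files.write(scriptFile, selectedApis);
    }

    /** Maps every API in the script to its SDK source location inside the Git checkout. */
    public static List<Path> resolveSdkSources(Path scriptFile, Path gitCheckout) throws IOException {
        return Files.readAllLines(scriptFile).stream()
                .map(api -> gitCheckout.resolve("sdk").resolve(api))
                .toList();
    }

    public static void main(String[] args) throws IOException {
        Path workDir = Files.createTempDirectory("api-list-demo");
        Path script = writeApiList(List.of("matrix-multiply", "image-classify"),
                workDir.resolve("api-list.txt"));
        // In a real flow the checkout path would point at the preset Git repository clone.
        resolveSdkSources(script, workDir.resolve("git-checkout")).forEach(System.out::println);
    }
}
```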
7. The method according to claim 6, wherein the server is further provided with a continuous integration and release unit, and the method further comprises:
verifying, by the continuous integration and release unit, the target SDK source code, and compiling the target SDK source code to obtain an executable file after the verification is passed;
testing the executable file, and packaging the target SDK source code after the executable file passes the test to obtain an SDK package file serving as the computing power dependency information;
and sending the computing power dependency information to the business engineering unit after a download instruction is received.
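A hedged sketch of the continuous integration and release sequence in claim 7, with the verify, compile, test, and package steps injected as placeholder functions over string "artifacts"; a real unit would delegate each step to build tooling.

```java
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

// Sketch of claim 7's flow: verify the target SDK source, compile it only if verification passes,
// test the executable, and only then package the SDK file that is handed out on download.
public class SdkReleasePipeline {

    private final Predicate<String> verifySource;
    private final UnaryOperator<String> compile;
    private final Predicate<String> testExecutable;
    private final UnaryOperator<String> packageSdk;

    public SdkReleasePipeline(Predicate<String> verifySource, UnaryOperator<String> compile,
                              Predicate<String> testExecutable, UnaryOperator<String> packageSdk) {
        this.verifySource = verifySource;
        this.compile = compile;
        this.testExecutable = testExecutable;
        this.packageSdk = packageSdk;
    }

    /** Returns the SDK package file name serving as the computing power dependency information. */
    public String run(String targetSdkSource) {
        if (!verifySource.test(targetSdkSource)) {
            throw new IllegalStateException("verification failed for " + targetSdkSource);
        }
        String executable = compile.apply(targetSdkSource);     // compile only after verification
        if (!testExecutable.test(executable)) {
            throw new IllegalStateException("tests failed for " + executable);
        }
        return packageSdk.apply(targetSdkSource);               // package only after tests pass
    }

    public static void main(String[] args) {
        SdkReleasePipeline pipeline = new SdkReleasePipeline(
                source -> !source.isBlank(),                    // toy verification
                source -> source + ".bin",                      // toy compilation
                executable -> executable.endsWith(".bin"),      // toy test
                source -> source + "-sdk.zip");                 // toy packaging
        System.out.println(pipeline.run("matrix-multiply-src"));
    }
}
```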
8. A service execution device based on dynamic algorithm selection, comprising:
a receiving module, configured to receive, through a business engineering unit, a service request for a target service, and acquire, according to the service request, computing power dependency information required for executing the target service, wherein the computing power dependency information is used for representing software resources required for executing the target service;
a first determining module, configured to determine, based on the computing power dependency information, a computing job task corresponding to the target service;
a second determining module, configured to determine, through a resource scheduling unit, a current resource scheduling algorithm according to resource demand information corresponding to the computing job task at the current moment and computing power information corresponding to each computing resource;
an invoking module, configured to invoke computing resources of a computing device based on the resource scheduling algorithm, so as to execute the computing job task with the computing resources and obtain a computation result;
and an execution module, configured to execute the target service according to the computation result.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the method of any one of claims 1 to 7.
CN202410128326.6A 2024-01-30 2024-01-30 Service execution method and device based on dynamic algorithm selection and electronic equipment Active CN117648175B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410128326.6A CN117648175B (en) 2024-01-30 2024-01-30 Service execution method and device based on dynamic algorithm selection and electronic equipment

Publications (2)

Publication Number Publication Date
CN117648175A 2024-03-05
CN117648175B (en) 2024-04-12

Family

ID=90043781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410128326.6A Active CN117648175B (en) 2024-01-30 2024-01-30 Service execution method and device based on dynamic algorithm selection and electronic equipment

Country Status (1)

Country Link
CN (1) CN117648175B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102904961A (en) * 2012-10-22 2013-01-30 浪潮(北京)电子信息产业有限公司 Method and system for scheduling cloud computing resources
CN110795219A (en) * 2019-10-24 2020-02-14 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Resource scheduling method and system suitable for multiple computing frameworks
WO2021190482A1 (en) * 2020-03-27 2021-09-30 中国移动通信有限公司研究院 Computing power processing network system and computing power processing method
CN114756340A (en) * 2022-03-17 2022-07-15 中国联合网络通信集团有限公司 Computing power scheduling system, method, device and storage medium
CN116089046A (en) * 2023-01-31 2023-05-09 安徽航天联志科技有限公司 Scheduling method, device, equipment and medium based on software-defined computing network
CN116501711A (en) * 2023-04-28 2023-07-28 山东省计算中心(国家超级计算济南中心) Computing power network task scheduling method based on 'memory computing separation' architecture
CN116684349A (en) * 2023-06-05 2023-09-01 中国联合网络通信集团有限公司 Method, system, electronic equipment and storage medium for distributing computing power network resources
CN116820764A (en) * 2023-06-27 2023-09-29 杭州阿里巴巴飞天信息技术有限公司 Method, system, electronic device and storage medium for providing computing resources
CN116932168A (en) * 2023-07-26 2023-10-24 中国电信股份有限公司技术创新中心 Heterogeneous core scheduling method and device, storage medium and electronic equipment
CN116643893A (en) * 2023-07-27 2023-08-25 合肥中科类脑智能技术有限公司 Method and device for scheduling computing task, storage medium and server
CN117395251A (en) * 2023-09-27 2024-01-12 中国电信股份有限公司技术创新中心 Resource scheduling method, device and computer readable storage medium
CN117331678A (en) * 2023-12-01 2024-01-02 之江实验室 Heterogeneous computing power federation-oriented multi-cluster job resource specification computing method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
赵姗, 周兴社, 王云岚: "A Cluster Job Scheduling Algorithm Based on Multi-dimensional QoS", Computer Engineering, no. 22, 5 October 2006 (2006-10-05) *
陈重韬: "Research on a MapReduce Cluster Scheduling Algorithm for Multi-user Environments", High Technology Letters, no. 04, 15 April 2017 (2017-04-15) *

Also Published As

Publication number Publication date
CN117648175B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN108229686B (en) Model training and predicting method and device, electronic equipment and machine learning platform
US20190324772A1 (en) Method and device for processing smart contracts
CN108062252B (en) Information interaction method, object management method, device and system
CN110704037B (en) Rule engine implementation method and device
CN111897539B (en) Method and device for deploying application according to service roles
CN112597013A (en) Online development and debugging method and device
CN116225669B (en) Task execution method and device, storage medium and electronic equipment
CN116185532B (en) Task execution system, method, storage medium and electronic equipment
CN114168114A (en) Operator registration method, device and equipment
CN113835705B (en) Big data service product development method, device and system
CN111597058A (en) Data stream processing method and system
CN116933886B (en) Quantum computing execution method, quantum computing execution system, electronic equipment and storage medium
CN116107728B (en) Task execution method and device, storage medium and electronic equipment
CN117648175B (en) Service execution method and device based on dynamic algorithm selection and electronic equipment
CN116347623B (en) Task scheduling method and device, storage medium and electronic equipment
CN116382713A (en) Method, system, device and storage medium for constructing application mirror image
US11573777B2 (en) Method and apparatus for enabling autonomous acceleration of dataflow AI applications
CN114782016A (en) Creditor data processing method and device based on intelligent contract and block chain system
CN111796864A (en) Data verification method and device
CN117519733B (en) Project deployment method and device, storage medium and electronic equipment
CN111966479B (en) Service processing and risk identification service processing method and device and electronic equipment
Anthony et al. A middleware approach to dynamically configurable automotive embedded systems
CN117076336B (en) Testing method and device of cloud edge cooperative system, storage medium and equipment
CN112540835B (en) Method and device for operating hybrid machine learning model and related equipment
CN116755677A (en) Atomic service arrangement method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant