CN116980430A - Resource allocation processing method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN116980430A
CN116980430A
Authority
CN
China
Prior art keywords
data
service
resource
application
call
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211455669.0A
Other languages
Chinese (zh)
Inventor
林良敏
陆宁
匡俊霖
林灶胜
黄庆宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211455669.0A priority Critical patent/CN116980430A/en
Publication of CN116980430A publication Critical patent/CN116980430A/en
Pending legal-status Critical Current


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/104: Peer-to-peer [P2P] networks
    • H04L 67/1074: Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/70: Admission control; Resource allocation
    • H04L 47/78: Architectures of resource allocation


Abstract

The application relates to a resource allocation processing method, an apparatus, a computer device and a storage medium, and involves cloud technology. The method includes: receiving a resource configuration request and determining, according to the request, the application services to be configured; collecting call resource data among the links to which those application services belong; performing link-call aggregation analysis and traffic statistics on the call resource data to determine the dependency call relationships among the application services; determining resource change data for each application service according to the dependency call relationships and the actual call data among the services; and adjusting the resource configuration of each application service based on the resource change data. With this method, the resource configuration of each application service can be adjusted based on the resource change data, so that the service resources and data traffic of a platform or program are allocated reasonably. This avoids unprocessable service requests or platform crashes caused by resource allocation errors and achieves efficient operation and maintenance of the platform or program.

Description

Resource allocation processing method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of cloud technologies, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for resource allocation processing.
Background
With the development of cloud technology and artificial intelligence, and the wide use of application platforms and application programs, the performance requirements on data calls and resource allocation between different platforms or programs continue to rise. In particular, under high-concurrency, large-scale access or calls, the service interfaces of an application platform or program often become unreachable or return errors, which can bring down the entire platform or program. The service resources and data traffic of the platform or program therefore need to be allocated reasonably.
Conventionally, for scenarios such as service access and data calls, the resource pressure and service pressure of an application platform or program over a preset time period are predicted from its idle resources and historical service response times, so that resources or data traffic can be allocated according to the prediction and inaccessible service interfaces or platform crashes can be avoided.
However, because this allocation of resources or data traffic relies on predictions based on idle resources and historical response times, it cannot adapt in real time to changes in the current resources and response times. The prediction therefore lags and deviates, the resource pressure and service pressure of the platform or program in the next time period cannot be obtained accurately, and inaccurate predictions still lead to unreasonable allocation of resources and data, so the operation and maintenance of the platform or program remain to be improved.
Disclosure of Invention
In view of the above, it is necessary to provide a resource allocation processing method, an apparatus, a computer-readable storage medium, and a computer program product that can reasonably allocate the service resources and data traffic of an application platform or program, improve its operation and maintenance, and ensure its stable operation.
In a first aspect, the present application provides a resource allocation processing method. The method comprises the following steps:
receiving a resource allocation request, and determining an application service to be allocated according to the resource allocation request;
collecting call resource data among links of the application services to be configured and processed, carrying out link call aggregation analysis and flow statistics based on the call resource data, and determining a dependency call relation among the application services;
determining resource change data of each application service according to the dependency calling relationship and actual calling data among the application services;
and carrying out resource allocation adjustment on each application service based on the resource change data.
In one embodiment, performing standardization processing based on the call resource data to obtain standard call resource data includes:
obtaining a corresponding standardized traffic protocol according to the service fields corresponding to different services and the basic fields used in standardization; and mapping and standardizing each original field in the call resource data by using the standardized traffic protocol to obtain standard call resource data.
In one embodiment, mapping and standardizing the original fields in the call resource data by using the standardized traffic protocol to obtain standard call resource data includes:
invoking a stream-processing framework matched with the standardized traffic protocol; in the first processing stage, rebalancing the call resource data and distributing it to the processing operators of the second processing stage; in the second processing stage, scattering the received call resource data based on the processing operators to obtain a resource data set corresponding to the call resource data; and in the third processing stage, mapping, according to the standardized traffic protocol and in order of data source, each original field in the resource data set to the corresponding standardized traffic protocol field, and aggregating the standardized traffic protocol fields to obtain the corresponding standard call resource data.
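The three processing stages above can be sketched in plain Python. This is an illustrative assumption, not the patent's implementation: no particular streaming framework is used, and the field names and protocol mapping (`svc`, `iface`, `ret`, etc.) are invented for the example.

```python
# Hypothetical sketch of the three-stage standardization flow described above.
# The field names and the protocol mapping are illustrative, not from the patent.

PROTOCOL_FIELD_MAP = {   # original field -> standardized traffic protocol field
    "svc": "service_name",
    "iface": "interface_name",
    "ret": "call_status",
}

def stage1_rebalance(records, n_operators):
    """Stage 1: distribute raw call resource records evenly across operators."""
    buckets = [[] for _ in range(n_operators)]
    for i, rec in enumerate(records):
        buckets[i % n_operators].append(rec)  # round-robin rebalancing
    return buckets

def stage2_scatter(bucket):
    """Stage 2: break each raw record into a resource data set of field items."""
    return [list(rec.items()) for rec in bucket]

def stage3_map_and_aggregate(data_sets):
    """Stage 3: map original fields to protocol fields, then aggregate records."""
    standard = []
    for items in data_sets:
        mapped = {PROTOCOL_FIELD_MAP.get(k, k): v for k, v in items}
        standard.append(mapped)
    return standard

raw = [{"svc": "A", "iface": "1", "ret": "ok"},
       {"svc": "B", "iface": "3", "ret": "ok"}]
buckets = stage1_rebalance(raw, 2)
result = [r for b in buckets
          for r in stage3_map_and_aggregate(stage2_scatter(b))]
```

In a real deployment the three stages would be operators in a stream-processing job rather than plain functions; the sketch only shows how raw fields flow into standardized protocol fields.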
In a second aspect, the application further provides a resource allocation processing device. The device comprises:
the resource allocation request receiving module is used for receiving a resource allocation request and determining an application service to be allocated and processed according to the resource allocation request;
the dependency calling relation acquisition module is used for acquiring calling resource data among links of the application services to be configured and processed, carrying out link calling aggregation analysis and flow statistics based on the calling resource data, and determining the dependency calling relation among the application services;
the resource change data determining module is used for determining resource change data of each application service according to the dependency calling relationship and actual calling data among the application services;
and the resource allocation adjustment module is used for carrying out resource allocation adjustment on each application service based on the resource change data.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
receiving a resource allocation request, and determining an application service to be allocated according to the resource allocation request;
collecting call resource data among links of the application services to be configured and processed, carrying out link call aggregation analysis and flow statistics based on the call resource data, and determining a dependency call relation among the application services;
determining resource change data of each application service according to the dependency calling relationship and actual calling data among the application services;
and carrying out resource allocation adjustment on each application service based on the resource change data.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
receiving a resource allocation request, and determining an application service to be allocated according to the resource allocation request;
collecting call resource data among links of the application services to be configured and processed, carrying out link call aggregation analysis and flow statistics based on the call resource data, and determining a dependency call relation among the application services;
determining resource change data of each application service according to the dependency calling relationship and actual calling data among the application services;
and carrying out resource allocation adjustment on each application service based on the resource change data.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
receiving a resource allocation request, and determining an application service to be allocated according to the resource allocation request;
collecting call resource data among links of the application services to be configured and processed, carrying out link call aggregation analysis and flow statistics based on the call resource data, and determining a dependency call relation among the application services;
determining resource change data of each application service according to the dependency calling relationship and actual calling data among the application services;
and carrying out resource allocation adjustment on each application service based on the resource change data.
In the resource configuration processing method, apparatus, computer device, storage medium and computer program product above, a resource configuration request is received and the application services to be configured are determined from it; call resource data among the links to which those services belong is collected; and link-call aggregation analysis and traffic statistics on that data determine the dependency call relationships among the application services. Resource change data for each application service is then determined from the dependency call relationships and the actual call data among the services, so that the resource configuration of each service can be adjusted based on the resource change data. Resources are thus allocated in real time without having to monitor the data-resource usage of the platform or program and the distribution of service requests, while its service resources and data traffic are still allocated reasonably. This avoids unprocessable service requests or platform crashes caused by resource allocation errors and achieves efficient operation and maintenance of the platform or program.
Drawings
FIG. 1 is an application environment diagram of a resource allocation processing method in one embodiment;
FIG. 2 is a flow diagram of a method for processing resource allocation in one embodiment;
FIG. 3 is a flow diagram of determining dependency call relationships among application services in one embodiment;
FIG. 4 is a schematic diagram of a stage process of the normalization process in one embodiment;
FIG. 5 is a schematic diagram of a flow model determined based on dependency call relationships between application services in one embodiment;
FIG. 6 is a schematic diagram of a flow model determined based on dependency calling relationships between application services in another embodiment;
FIG. 7 is a flow diagram of resource allocation adjustment for application services in one embodiment;
FIG. 8 is a flowchart illustrating resource allocation adjustment for each application service according to another embodiment;
FIG. 9 is a flowchart of a resource allocation processing method according to another embodiment;
FIG. 10 is a block diagram of a resource allocation processing device in one embodiment;
FIG. 11 is a schematic diagram of an architecture of a resource allocation processing system in one embodiment;
fig. 12 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The embodiment of the application provides a resource allocation processing method that involves cloud technology and artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technology and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that react in ways similar to human intelligence; it studies the design principles and implementation methods of various intelligent machines so that machines can perceive, reason and make decisions. As a comprehensive discipline, artificial intelligence covers a wide range of fields at both the hardware and software levels. Basic AI technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing and machine learning/deep learning.
Similarly, cloud technology refers to a hosting technology that unifies hardware, software, network and other resources in a wide area network or local area network to realize the computation, storage, processing and sharing of data. It can also be understood as the general term for the network, information, integration, management-platform and application technologies applied under the cloud computing business model; these can form a resource pool that is used on demand, flexibly and conveniently. The background services of technical network systems, such as video websites, picture websites and other portals, require large amounts of computing and storage resources. As the internet industry develops, every item may in future carry its own identification mark that must be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data need strong backend support, which can only be realized through cloud computing. Cloud computing has thus become an important support for data processing across industries. Cloud computing refers to the delivery and usage mode of IT infrastructure, namely obtaining the required resources over a network in an on-demand, easily scalable manner; in the broad sense it refers to the delivery and usage mode of services, namely obtaining the required services over a network in an on-demand, easily scalable manner. Such services may be IT, software or internet related, or other services.
Cloud computing is a product of the fusion and development of traditional computer and network technologies such as grid computing, distributed computing, parallel computing, utility computing, network storage, virtualization and load balancing.
It can be appreciated that with the development of the internet, real-time data streams, the diversification of connected devices, and the demand for search services, social networks, mobile commerce and open collaboration, cloud computing is developing rapidly. Unlike earlier parallel and distributed computing, the emergence of cloud computing will, conceptually, drive a revolutionary transformation of the entire internet model and of enterprise management models. Likewise, as artificial intelligence research advances, AI technology is being studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, drones, robots, smart medicine and smart customer service. As the technology evolves, AI will be applied in still more fields with ever-increasing value.
The resource configuration processing method provided by the embodiment of the application particularly involves the artificial intelligence and cloud computing parts of cloud technology and can be applied in the application environment shown in FIG. 1, in which the terminal 102 communicates with the server 104 via a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or placed on a cloud or other network server. The server 104 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and big data and artificial intelligence platforms. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, internet-of-things device or portable wearable device; the internet-of-things device may be a smart speaker, smart television, smart air conditioner, smart vehicle device, aircraft, etc., and the portable wearable device may be a smart watch, smart bracelet, headset, etc. The terminal 102 and the server 104 may be connected directly or indirectly through wired or wireless communication, which is not limited in this embodiment of the application.
Further, both the terminal 102 and the server 104 may be used separately to perform the resource allocation processing method provided in the embodiment of the present application, and the terminal 102 and the server 104 may also be used cooperatively to perform the resource allocation processing method provided in the embodiment of the present application. For example, taking the method for processing resource configuration provided in the embodiment of the present application as an example where the terminal 102 and the server 104 cooperatively execute, when the server 104 receives a resource configuration request initiated by the terminal 102, an application service to be configured and processed is determined according to the resource configuration request. Wherein, the application service can be configured on different terminals for executing business services corresponding to different business requirements of different terminals. Further, the server 104 acquires call resource data between links to which the application service to be configured and processed belongs, performs link call aggregation analysis and flow statistics based on the call resource data, determines a dependency call relationship between the application services, and determines resource change data of each application service according to the dependency call relationship and actual call data between the application services. Finally, the resource configuration adjustment can be performed on each application service based on the resource change data, and the application service after the resource configuration adjustment can better meet the actual service call and resource call requirements of different terminals 102, and meanwhile, the problem that service requests cannot be processed or the platform crashes due to resource allocation errors is avoided, so that the efficient operation and maintenance processing of the platform or the program is achieved.
In one embodiment, as shown in FIG. 2, a resource allocation processing method is provided. The method is described as executed by a computer device; it can be understood that the computer device may be the terminal 102 shown in FIG. 1, the server 104, or a system formed by the terminal 102 and the server 104 and implemented through their interaction. In this embodiment, the resource allocation processing method specifically includes the following steps:
step S202, a resource allocation request is received, and an application service to be allocated is determined according to the resource allocation request.
In practice, when high-concurrency, large-scale data access or service calls occur, a resource configuration request is usually triggered from a terminal, so that resources or services can be configured in a balanced way according to the request, avoiding unprocessable service requests or platform crashes caused by resource allocation errors and achieving efficient operation and maintenance of the platform or program.
Specifically, when a resource allocation request is received, the application services to be configured that correspond to the request are obtained. The application services may be deployed on different terminal devices or different processing servers, and interactions such as data access and resource calls exist between these devices and servers. To execute the application services accurately and keep the devices and servers running stably, avoiding execution errors or crashes of their processing systems, resources must be allocated to the application services so that service resources and data traffic are allocated in a balanced way.
Further, during the resource allocation processing of the application services, it is judged whether the rate-limiting configuration and the resource proportion between each application and the server are reasonable, and the rate-limiting configuration and resource allocation of the application services are dynamically adjusted according to the result, improving the operation and maintenance of the application platform or program hosted on the terminal device or server.
Step S204, collecting call resource data among links of the application services to be configured and processed, and carrying out link call aggregation analysis and flow statistics based on the call resource data to determine the dependency call relationship among the application services.
Specifically, the dependency calling relationship between application services is determined according to calling resource data between links to which the application services belong. The call resource data between links to which each application service belongs may be specifically understood as call resource data between call links to which each application service belongs, such as detailed data of a specifically called service interface, a call sequence of the service interface, a dependent component type (for example, may be an application program, a service interface, a middleware, etc.), a call execution condition of the service interface (for example, different execution conditions such as call success, access success, call failure, access failure, etc.), and the like.
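The items of call resource data listed above can be pictured as a simple record. The following sketch is purely illustrative; the field names are assumptions, not terminology from the application:

```python
from dataclasses import dataclass

# Hypothetical record for one piece of call resource data on a call link.
# The field set mirrors the items listed above (called interface, call order,
# dependent component type, call execution condition); names are illustrative.
@dataclass
class CallResourceRecord:
    caller_service: str    # application service issuing the call
    callee_interface: str  # service interface being called
    call_order: int        # position of this call in the link's call sequence
    component_type: str    # e.g. "application", "service_interface", "middleware"
    status: str            # e.g. "call_success", "access_failure"

rec = CallResourceRecord("A", "B.iface3", 1, "service_interface", "call_success")
```

Collecting a stream of such records per call link is what makes the later aggregation analysis and traffic statistics possible.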
Further, the call resource data between the call links to which each application service to be configured belongs, such as the specific service interfaces called, the call order of those interfaces and the dependent component types, is obtained and subjected to unified standardization. Link-call analysis and traffic statistics are then performed on the standardized call resource data (i.e., the standard call resource data) to obtain the dependency call relationships among the application services.
Step S206, determining the resource change data of each application service according to the dependency calling relationship and the actual calling data among each application service.
The resource change data specifically includes amplification-ratio change data and dependency change data. The amplification-ratio change data is determined from an original amplification ratio and an updated amplification ratio. The dependency change data can be understood as the change in the dependency relationship between any two or more application services: for example, two application services change from having no dependency relationship to having one, from having a dependency relationship to having none, or from one dependency relationship to a different one.
The amplification ratio can be determined from the number of calls between two or more application services. For example, suppose the service processing link of some actual service involves application service A and application service B, where A provides service interface 1 and service interface 2 and B provides service interface 3. On this link there is a dependency between A and B, that is, B must be called after A: specifically, service interface 3 of B is called through service interface 1 of A. If service interface 1 calls service interface 3 twice, the amplification ratio between service interface 1 and service interface 3 is 2.
Likewise, application service A may call service interface 3 of application service B through service interface 2; if, for example, service interface 2 calls service interface 3 three times, the amplification ratio between service interface 2 and service interface 3 is 3. Once the amplification ratios between the service interfaces are determined, the amplification ratio between application service A and application service B can be obtained by summarizing the interface-level amplification ratios.
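The worked example above can be reproduced with a simple call count. One assumption is made explicit in the code: the patent only says the interface-level ratios are "summarized" into a service-level ratio, so summation is used here as a plausible interpretation, and the interface names are illustrative.

```python
from collections import Counter

# Each tuple is (calling interface, called interface). Counting occurrences
# yields the per-interface amplification ratio; summing per service pair
# yields the service-level ratio (summation is an assumption, see lead-in).
calls = [("A.if1", "B.if3"), ("A.if1", "B.if3"),          # interface 1 -> 3, twice
         ("A.if2", "B.if3"), ("A.if2", "B.if3"), ("A.if2", "B.if3")]  # 2 -> 3, thrice

iface_ratio = Counter(calls)  # per interface pair

svc_ratio = Counter()
for (src, dst), n in iface_ratio.items():
    # "A.if1" -> service "A"; aggregate interface ratios per service pair
    svc_ratio[(src.split(".")[0], dst.split(".")[0])] += n
```

With the example's numbers, `iface_ratio` gives 2 for interface 1 calling interface 3 and 3 for interface 2 calling interface 3, and the summarized ratio between services A and B comes out as 5.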
Specifically, after the dependency call relationships between the application services are obtained, the application services that have dependency relationships, and the dependency change data corresponding to each of them, are determined based on those relationships. From each dependency call relationship it can be determined which two or more application services are related, for example that application service A and application service B have a dependency relationship, or that application service C and application service B do.
From the application services that have a dependency relationship, the dependency relationship before the change can also be obtained. For example, before a certain piece of business processing logic in the actual service is executed, application service A and application service B have no dependency relationship; after the logic begins executing, or has executed, a dependency between A and B is detected. The dependency change data between application service A and application service B can then be understood as a change from no dependency relationship to an existing one.
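One minimal way to derive such dependency change data is to compare the sets of dependency pairs observed before and after the business logic runs. This is an illustrative sketch under that assumption; the service names are invented:

```python
# Dependency pairs (caller, callee) observed before and after a piece of
# business logic executes. Names are illustrative.
before = {("C", "B")}                # dependencies before execution
after = {("A", "B"), ("C", "B")}     # dependencies detected afterwards

added = after - before    # changed from "no dependency" to "dependency"
removed = before - after  # changed from "dependency" to "no dependency"
```

Here `added` contains the new pair (A, B), matching the example above, while `removed` is empty.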
Further, the actual call data corresponding to each application service having a dependency relationship is acquired, such as which service interfaces actually need to be called, the calling sequence of each called service interface, the number of calls, and the like. According to this actual call data, the original amplification ratio and the updated amplification ratio of each application service having a dependency relationship can be determined, so that the amplification ratio change data of each such application service can be determined based on the original amplification ratio and the updated amplification ratio.
The original amplification ratio of each application service may be understood as the amplification ratio preset or defaulted for the application service in the execution of the actual business, that is, the amplification ratio before a certain business processing logic in the actual business is executed; the updated amplification ratio may be understood as the amplification ratio required to execute that business processing logic, or the ratio observed after it is executed. It can be understood that by comparing the original amplification ratio with the updated amplification ratio, it is determined whether the amplification ratio has changed: the original amplification ratio may be greater than the updated amplification ratio, the updated amplification ratio may be greater than the original amplification ratio, or the two may have the same value. From this comparison, the amplification ratio change data of each application service having a dependency relationship is obtained.
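The comparison of the original and updated amplification ratios described above can be sketched as follows. This is a minimal illustration only; the function names and the 1-request example are assumptions, not part of the patent:

```python
def amplification_ratio(upstream_calls: int, downstream_calls: int) -> float:
    """Amplification ratio: downstream calls produced per upstream call."""
    if upstream_calls <= 0:
        raise ValueError("upstream_calls must be positive")
    return downstream_calls / upstream_calls

def ratio_change(original: float, updated: float) -> str:
    """Classify the before/after change of an amplification ratio."""
    if updated > original:
        return "increased"
    if updated < original:
        return "decreased"
    return "unchanged"

# One call to service interface 1 of A originally triggered two calls to
# service interface 2 of B; after a business logic change it triggers three:
original = amplification_ratio(1, 2)
updated = amplification_ratio(1, 3)
change = ratio_change(original, updated)  # "increased"
```

The three branches of `ratio_change` correspond to the three cases the text enumerates (original greater, updated greater, values equal).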
Step S208, based on the resource change data, resource allocation adjustment is carried out on each application service.
The resource change data specifically includes the amplification ratio change data and the dependency change data, and the performance change data and the current-limit change data can be determined based on the amplification ratio change data and the dependency change data. The resource allocation adjustment may include resource adjustment processing for the performance change data and current-limit adjustment processing for the current-limit change data.
Specifically, the current performance data corresponding to each application service is determined based on the amplification ratio change data and the dependency change data. By acquiring the basic performance data corresponding to each application service, the basic performance data and the current performance data can be compared to determine how the performance data has changed, yielding the performance change data; the corresponding resource allocation requirement is then determined according to the performance change data, and resource adjustment processing is performed on each application service according to that requirement. The resource adjustment processing may be specifically understood as determining the amount of application resources or service resources required for capacity expansion or capacity reduction, and requesting the service resources or application resources according to the determined amount to supply the application service that requires them.
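One way the capacity expansion or reduction decision could be realized is sketched below, under the assumption that performance is measured as request throughput per service instance; the function names and the 20% safety headroom are illustrative assumptions, not values from the patent:

```python
import math

def required_instances(current_qps: float, per_instance_qps: float,
                       headroom: float = 1.2) -> int:
    """Instances needed to serve the current load with a safety headroom."""
    return math.ceil(current_qps * headroom / per_instance_qps)

def resource_adjustment(base_instances: int, current_qps: float,
                        per_instance_qps: float) -> int:
    """Positive result: expand capacity by that many instances; negative: shrink."""
    return required_instances(current_qps, per_instance_qps) - base_instances

# A service running 2 instances whose traffic grew to 500 QPS, at 100 QPS
# per instance, needs 4 additional instances (ceil(500 * 1.2 / 100) = 6):
delta = resource_adjustment(2, 500, 100)  # 4
```

The sign of the result distinguishes the expansion case from the reduction case described in the text.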
Similarly, the current-limit change data of the application services having a dependency relationship can be determined based on the amplification ratio change data and the dependency change data. Since an application service is usually provided with service interfaces or is connected to middleware (such as middleware playing forwarding, checking, or storage roles), the current-limit change data can correspond to different levels, such as the service interface level, the application level, the middleware level, the parameter level, and the user level. Because the current-limit change level and the current-limit change requirement differ between levels, the current-limit change level and the current-limit change requirement matched with the current-limit change data need to be determined based on the current-limit change data of the application services having a dependency relationship.
Further, after the actual current-limit change level and the current-limit change requirement matched with the current-limit change data are determined, current-limit adjustment processing is performed for each application service at the corresponding current-limit change level according to the current-limit change requirement. For example, suppose there is a service-interface-level current-limit change requirement between the application service A and the application service B, i.e., the current-limit change level is the service interface level and the corresponding current-limit change requirement is a traffic increase. The data traffic at the service interface level between the application service A and the application service B (i.e., the traffic with which the service interface 1 of the application service A invokes the service interface 2 of the application service B) is then increased, and the traffic threshold between the application service A and the application service B is increased synchronously, so as to avoid the situation in which the actual traffic requirement still cannot be met once the traffic grows to the original traffic threshold.
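The synchronous threshold increase can be sketched as scaling the interface-level threshold in proportion to the amplification ratio change. This is one plausible policy, not the patent's prescribed formula; the function name is hypothetical:

```python
def adjust_traffic_threshold(threshold: float, original_ratio: float,
                             updated_ratio: float) -> float:
    """Scale an interface-level traffic threshold in step with the amplification
    ratio, so traffic reaching the old threshold cannot starve the new call
    pattern."""
    if original_ratio <= 0:
        raise ValueError("original amplification ratio must be positive")
    return threshold * updated_ratio / original_ratio

# The threshold on interface 2 of B was 1000 QPS while A amplified traffic
# toward it 1:1; after the ratio grows to 2, the threshold is doubled:
new_threshold = adjust_traffic_threshold(1000, 1.0, 2.0)  # 2000.0
```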
In the resource allocation processing method, the application services to be allocated are determined according to the resource allocation request, and the call resource data between the links to which those application services belong is collected, so that link call aggregation analysis and traffic statistics are performed based on the call resource data and the dependency call relationships between the application services are determined. Further, the resource change data of each application service is determined according to the dependency call relationships and the actual call data between the application services, so that resource configuration adjustment can be carried out on each application service based on the resource change data. Real-time allocation of resources is thus performed without paying attention to the usage of the platform's or program's data resources or the distribution of business processing requests; the service resources and data traffic of the application platform or program can be allocated reasonably, the problems of business requests that cannot be processed or platform crashes caused by resource allocation errors are avoided, and efficient operation and maintenance of the platform or program are achieved.
In one embodiment, as shown in fig. 3, the step of determining the dependency call relationships between the application services, that is, the step of performing link call aggregation analysis and traffic statistics on the call resource data between the links to which each application service belongs and determining the dependency call relationships between the application services, specifically includes:
step S302, standardized processing is carried out on call resource data, and standard call resource data are obtained.
The call resource data obtained may be source data based on a link tracing protocol (such as the OpenTracing protocol), and may also be understood as source data on the complete link to which the application service belongs. It can be understood that, because the specific parameter configuration information of different application services differs, the obtained call resource data may also have non-uniform formats; standardized processing therefore needs to be performed on the call resource data to obtain standard call resource data, so that subsequent processing is performed on uniform standard call resource data, including determination of the dependency call relationships and determination of the resource change data. Tedious conversion operations caused by non-uniform formats are thereby avoided, improving the working efficiency of the resource configuration processing.
For example, the complete link may include multiple links. For instance, the complete link to which the application services called by an actual business belong may specifically include a link 1 and a link 2, where the application service A belongs to the link 1 and the application service B belongs to the link 2. When each business processing logic corresponding to the actual business is executed, the application service A or the application service B needs to be called through the different links to execute the business processing logic, forming the call resource data in the execution process of the business processing logic.
Specifically, according to the actual service to be processed and the service processing logic corresponding to the actual service, the service fields corresponding to different services can be determined, and then according to the service fields corresponding to different services and the basic fields corresponding to the standardized processing, the corresponding standardized traffic protocol is obtained.
The service field may specifically include a service name, a service requirement, a service name of an application service to be invoked, a service interface name of the application service, a call chain number, a start time, an end time, and the like. Likewise, the basic field corresponding to the standardized processing may include, in particular, a call chain identifier, a start time, an end time, a span number, and associated node information. The service processing logic may be specifically understood as specific tasks that need to be completed or implemented in the actual service process, for example, specific tasks that need to be implemented in data access, data presentation (such as page presentation, or sound playing, etc.), page switching, page skipping, etc.
For example, as shown in table 1 (standardized traffic protocol field table), various standardized traffic protocol fields included in the standardized traffic protocol are provided:
Table 1 Standardized traffic protocol field table

bizType: business line identifier (determines which business line the data source instance belongs to)
nodeName: node name (e.g., a service interface route /userc/wxlogi; a middleware name redis-10.59.5.4:6379)
label: label (e.g., SpringMVC, Mysql)
trafficType: traffic type (normal traffic, automated test traffic, pressure test traffic, etc.)
serviceCode: service name (the name of the application service being called)
layer: the microservice layer to which the node belongs (e.g., the HTTP layer, the Cache layer, or the DB layer)
isError: whether an error occurred (false means no error occurred)
traceId: call chain number
segmentId: call chain segment number
startTime: start time
endTime: end time
spanId: span number
refs: references between segments
parentSegmentId: parent segment number
parentSpanId: parent span number
According to the service fields corresponding to different services and the basic fields corresponding to the standardized processing, a plurality of standardized traffic protocol fields can be obtained, and further according to the standardized traffic protocol fields, a standardized traffic protocol is obtained.
Specifically, referring to table 1, the standardized traffic protocol fields may specifically include: a bizType field indicating the business line identifier (used for determining which business line the data source instance belongs to); a nodeName field indicating the node name (e.g., a service interface route /userc/wxlogi, or a middleware name redis-10.59.5.4:6379); a label field indicating the label (e.g., SpringMVC, Mysql); a trafficType field indicating the traffic type (including normal traffic, automated test traffic, pressure test traffic, etc.); a serviceCode field indicating the service name (i.e., the name of the application service being invoked); a layer field indicating which layer of the microservice the node belongs to (e.g., the HTTP layer, the Cache layer, or the DB layer); an isError field indicating whether an error occurred (where false means no error occurred); a traceId field indicating the call chain number; a segmentId field indicating the call chain segment number; a startTime field and an endTime field indicating the start time and end time of the call; a spanId field indicating the span number; a refs field indicating references between segments; and a parentSegmentId field and a parentSpanId field indicating the parent segment number and parent span number.
The bizType field, nodeName field, label field, trafficType field, serviceCode field, layer field, isError field, and the like correspond to the service fields of the different actual businesses. The traceId field, segmentId field, startTime field, endTime field, spanId field, refs field, parentSegmentId field, parentSpanId field, and the like belong to the basic fields corresponding to the standardized processing.
Further, after the standardized flow protocol is obtained, mapping processing and standardization processing are carried out on each original field in the call resource data by further utilizing the standardized flow protocol, so that the standard call resource data is obtained. Specifically, mapping processing is performed on each original field in call resource data, and each original field is mapped into each standardized traffic protocol field in a standardized traffic protocol, so that the purpose of standardization is achieved, and unified standard call resource data is obtained.
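The mapping of original fields to standardized traffic protocol fields can be sketched as a simple field-name translation; the original field names (`trace_id`, `begin_ts`, etc.) are hypothetical examples of one application's non-uniform format, not names from the patent:

```python
# Hypothetical mapping from one application's original field names to the
# standardized traffic protocol fields of Table 1.
FIELD_MAP = {
    "trace_id": "traceId",
    "seg_id": "segmentId",
    "begin_ts": "startTime",
    "finish_ts": "endTime",
    "service": "serviceCode",
    "node": "nodeName",
}

def standardize(record: dict) -> dict:
    """Map original fields to standardized traffic protocol fields,
    dropping fields that have no standard counterpart."""
    return {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}

raw = {"trace_id": "t-1", "begin_ts": 100, "finish_ts": 180,
       "service": "A", "debug": True}
std = standardize(raw)
# {'traceId': 't-1', 'startTime': 100, 'endTime': 180, 'serviceCode': 'A'}
```

In practice each data source would contribute its own `FIELD_MAP`, which is what makes the downstream processing uniform.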
In one embodiment, as shown in fig. 4, a schematic diagram of a stage process of the standardized process is provided, and referring to fig. 4, the process stages specifically include the following:
s1 first processing stage1 (i.e., source stage): and calling a streaming processing framework matched with the standardized flow protocol, carrying out rebalancing distribution processing on the calling resource data in a first processing stage, and distributing the calling resource data to a processing operator corresponding to a second processing stage.
Specifically, the streaming framework matched with the standardized traffic protocol may be a Flink streaming framework (i.e., apache Flink, which is a distributable open-source computing framework for data stream processing and batch data processing and can support different application types of stream processing and batch processing), and by calling the Flink streaming framework, in the source stage, original call resource data is obtained from a data queue (such as a kafka queue in particular) and the call resource data is distributed to a processing operator corresponding to the second processing stage (i.e., map stage) in a rebalancing manner.
The streaming processing framework matched with the standardized flow protocol is not limited to the flink streaming computing framework, and can be other processing frameworks capable of realizing streaming data computing processing. The concurrency of the first processing stage is N, that is, N processes may execute the rebalancing distribution process of the call resource data at the same time.
S2 second processing stage2 (i.e., map stage): in the second processing stage, based on the processing operator, the received call resource data is subjected to scattering processing to obtain a resource data set corresponding to the call resource data.
Specifically, in the map stage, based on a processing operator, calling resource data is scattered from a segment level to a span level, so that scattered processing is realized, a data set is formed based on the span level data, and a resource data set corresponding to the calling resource data is obtained.
The call resource data corresponds to the whole complete link (i.e., trace), wherein the link trace is specifically composed of a plurality of segments, a segment can be understood as a track segment requesting in a process, a span can be understood as a track segment requesting in a component or processing logic in a certain process, and data of a segment level can be broken into data of a plurality of span levels. The concurrency of the second processing stage is N, that is, N processes may execute the data scattering process at the same time.
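The segment-to-span scattering of the map stage can be sketched as follows; the record shapes are simplified assumptions (real segment records carry all the Table 1 fields):

```python
def scatter_segments(segments: list) -> list:
    """Map stage: break segment-level records into span-level records,
    keeping the owning segmentId on every span."""
    spans = []
    for seg in segments:
        for span in seg["spans"]:
            spans.append({"segmentId": seg["segmentId"], **span})
    return spans

segments = [
    {"segmentId": "s1", "spans": [{"spanId": 0}, {"spanId": 1}]},
    {"segmentId": "s2", "spans": [{"spanId": 0}]},
]
span_records = scatter_segments(segments)  # three span-level records
```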
S3 third processing stage3 (i.e., window stage): in the third processing stage, according to the standardized flow protocol, mapping each original field in the resource data set into a standardized flow protocol field corresponding to the standardized flow protocol according to the data source in sequence, and carrying out aggregation processing on each standardized flow protocol field to obtain corresponding standard call resource data.
Specifically, in the window stage, hash routing processing is performed according to the link number, and it is determined that the resource data set of the same link implements serial processing, specifically, according to the standardized traffic protocol, each original field in the resource data set may be mapped into a standardized traffic protocol field corresponding to the standardized traffic protocol according to the data source in sequence.
Further, after mapping processing is performed on each original field in the resource data set, aggregation processing is performed on each standardized traffic protocol field based on a window function, corresponding standard call resource data is obtained, and the obtained standard call resource data is sent to a processing operator in a fourth processing stage. The concurrency of the third processing stage is 2N, that is, 2N processes can execute the data standardization processing at the same time.
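The hash routing that keeps all records of one link on the same window-stage worker (so each link is processed serially) can be sketched as below; the use of MD5 is an illustrative choice, as the patent does not fix a hash function:

```python
import hashlib

def route_worker(trace_id: str, parallelism: int) -> int:
    """Window stage routing: hash the call chain number so every record of one
    link lands on the same worker index in [0, parallelism)."""
    digest = hashlib.md5(trace_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % parallelism
```

Because the routing depends only on the call chain number, records of the same trace always reach the same worker, while different traces spread across the 2N window-stage processes.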
S4 fourth processing stage4 (i.e. sink stage): in the fourth processing stage, the standard call resource data obtained in the third processing stage is received, the processing operator is utilized to carry out writing request construction processing on the standard call resource data, and the standard call resource data is stored into a preset associated database through writing construction requests.
Specifically, in the sink stage, the standard call resource data obtained in the third processing stage is received, the processing operator is called to construct a write request corresponding to the standard call resource data, and the standard call resource data is stored into the associated database by responding to the write request. The associated database is not limited to a specific database type, and may be any of a plurality of different types of databases providing storage, access, and other functions. The concurrency of the fourth processing stage is N, that is, N processes may execute the data storage processing at the same time.
For example, the associated database may specifically be ClickHouse; after the standard call resource data is stored into ClickHouse, a corresponding point wide table and edge wide table are formed. The point wide table is used for storing the application services that need to be called; for example, if the application service A and the application service B need to be called, the application service A and the application service B are stored into the point wide table as two points. The edge wide table is used for storing the dependency relationships between the application services; for example, if a dependency relationship exists such that the application service A needs to be called first and the application service B is called afterwards, the dependency relationship between the application service A and the application service B is expressed as an edge AB, which is stored into the edge wide table.
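The point/edge wide-table construction can be sketched in memory as follows; this is a minimal stand-in for the ClickHouse tables, assuming the input is a list of (caller, callee) service pairs:

```python
def build_wide_tables(calls: list) -> tuple:
    """Build a point wide table (the called services) and an edge wide table
    (dependency edges with observation counts) from caller/callee pairs."""
    points = set()
    edges = {}
    for caller, callee in calls:
        points.update((caller, callee))
        edges[(caller, callee)] = edges.get((caller, callee), 0) + 1
    return points, edges

points, edges = build_wide_tables([("A", "B"), ("A", "B"), ("C", "B")])
# points: {'A', 'B', 'C'}; the edge (A, B) was observed twice
```

Keeping a count on each edge is what later allows call numbers and amplification ratios to be read off the edge wide table.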
In one embodiment, before performing link call aggregation analysis and traffic statistics based on call resource data between links to which each application service belongs, determining a dependency call relationship between each application service further includes: and collecting call resource data among links to which each application service belongs.
That is, the dependency call relationships between the application services are determined according to the call resource data between the links to which the application services belong. Specifically, the call resource data between the links to which each application service belongs may be understood as the call resource data between the call links to which each application service belongs, such as detailed data of the specific service interfaces called, the call sequence of the service interfaces, the dependent component types (for example, an application program, a service interface, or a middleware), the call execution conditions of the service interfaces (for example, call success, access success, call failure, access failure, and other execution conditions), and the like.
Specifically, before the configuration processing is performed on the service resources and data traffic of the application services, the call resource data between the call chains to which each application service belongs needs to be collected, and the dependency call relationships between the application services are determined according to that call resource data. The resource change data of each application service is then determined based on the dependency call relationships and the actual call data between the currently available application services, and finally resource configuration adjustment is performed on each application service based on the resource change data, such as adjusting the amount of service resources that an application service can apply for or access, or performing current-limit adjustment processing on the data traffic values of different service interfaces of the application service.
Step S304, in a preset aggregation processing period, link call aggregation analysis is carried out on the standard call resource data having the same data source and the same dependency dimension, obtaining the first call number and the response time of the dependency dimensions associated with each application service.
Having the same data source may be understood as belonging to the same actual business, and whether data sources are the same can be determined by the bizType field (used for representing the business line identifier) in table 1. For example, if the values of the bizType field of different standard call resource data are the same, that is, the business line identifiers are the same, the data belong to the same actual business and can be determined to be standard call resource data with the same data source. Having the same dependency dimension may be specifically determined by the nodeName field, label field, trafficType field, serviceCode field, and layer field in table 1: for example, the values of the nodeName field (the node names) are the same, or the label field (the label names) are the same, or the trafficType field (the traffic types) are the same, or the serviceCode field (the application service names) are the same, or the layer field (the specific microservice layers to which they belong) are the same; in these cases the dependency dimensions of the standard call resource data may be considered the same.
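Grouping the standard call resource data for the aggregation analysis can be sketched as keying each record on its data source plus its dependency dimension fields. Treating all five dimension fields together as a compound key is an assumption (the patent leaves the exact key composition open), and the sample records are invented:

```python
from collections import defaultdict

def group_for_aggregation(records: list) -> dict:
    """Group standard call resource data that shares the same data source
    (bizType) and the same dependency dimension fields."""
    groups = defaultdict(list)
    for rec in records:
        key = (rec["bizType"], rec["nodeName"], rec["label"],
               rec["trafficType"], rec["serviceCode"], rec["layer"])
        groups[key].append(rec)
    return dict(groups)

records = [
    {"bizType": "shop", "nodeName": "n1", "label": "SpringMVC",
     "trafficType": "normal", "serviceCode": "A", "layer": "HTTP"},
    {"bizType": "shop", "nodeName": "n1", "label": "SpringMVC",
     "trafficType": "normal", "serviceCode": "A", "layer": "HTTP"},
    {"bizType": "shop", "nodeName": "n2", "label": "Mysql",
     "trafficType": "normal", "serviceCode": "B", "layer": "DB"},
]
groups = group_for_aggregation(records)  # two groups
```

Each group then yields one first-call-number and one response-time statistic per aggregation period.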
In particular, a dependency dimension may be understood as a dependency between an application service, a business interface, and a middleware, and may specifically include, for example, a dependency between different application services, a dependency between an application service and a business interface, a dependency between an application service and a middleware, a dependency between a business interface and a middleware, a dependency between different business interfaces, and the like.
In step S306, in the preset traffic statistics period, the traffic statistics processing of all links is performed for each application service, so as to obtain the second call times and the amplification ratio between links to which each application service belongs.
Specifically, the traffic statistics processing may be understood as performing data traffic statistics on the complete link composed of the call links to which the application services belong, for example, counting the specific data traffic values between the application services in different time periods during the execution of the different processing logic on the complete link, so that the traffic statistics data of the complete link can be obtained by counting the data traffic value of each application service or each call link.
When the data flow value of each application service or each calling link is counted, the corresponding calling times of each application service or each calling link are counted, and the amplification ratio among links to which each application service belongs is further determined according to the calling times. For example, when the data flow value between the application service a and the application service B is counted, and the call number between the application service a and the application service B is counted, for example, the service interface 1 of the application service a calls the service interface 2 of the application service B, and the call number is 2, the amplification ratio between the service interface 1 of the application service a and the service interface 2 of the application service B can be determined to be 2.
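Deriving per-edge amplification ratios from counted calls in a statistics period can be sketched as follows; the interface-pair encoding ("A.if1" etc.) is an illustrative convention, not from the patent:

```python
from collections import Counter

def amplification_ratios(upstream_requests: int, call_log: list) -> dict:
    """Per-edge amplification ratio in one statistics period: the number of
    downstream calls on each (caller, callee) interface pair divided by the
    number of upstream requests in the same period."""
    if upstream_requests <= 0:
        raise ValueError("upstream_requests must be positive")
    counts = Counter(call_log)
    return {pair: n / upstream_requests for pair, n in counts.items()}

# One upstream request made interface 1 of A call interface 2 of B twice,
# matching the ratio of 2 in the text:
ratios = amplification_ratios(1, [("A.if1", "B.if2"), ("A.if1", "B.if2")])
# {('A.if1', 'B.if2'): 2.0}
```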
Step S308, determining the dependency calling relationship between the application services based on the first calling times and response time of the dependency dimensionality associated with the application services, the second calling times and amplification ratio between links to which the application services belong.
The first call number of the dependency dimensions associated with each application service may be understood as the number of calls between dependencies under the dependency dimensions to which the different application services belong. For example, a dependency relationship exists between the application service A and the application service B, specifically shown as the service interface 1 of the application service A calling the service interface 2 of the application service B (the number of calls being 2, for example) and the service interface 1 calling the service interface 3 of the application service B (the number of calls being 3, for example).
Similarly, the response time (i.e., the RT time, the time span from when the system starts receiving a request to when it returns a response) reflects the overall throughput of the system and can also be used as a performance criterion for service requests. It can be appreciated that by further analyzing the first call number and response time of the dependency dimensions associated with each application service, and the second call number and amplification ratio between the links to which each application service belongs, a traffic model determined based on the dependency call relationships between the application services can be formed. Automatic analysis is then performed on the different application services based on the traffic model to determine the corresponding current-limit adjustment processing and resource adjustment processing, such as increasing (or decreasing) the data traffic value, or increasing (or decreasing) the amount of service resources allowed to be accessed or allocated.
In one embodiment, as shown in fig. 5, a flow model determined based on the dependency call relationship between application services is provided, and as can be seen from fig. 5, a flow model formed by the dependency call relationship between application services, business interfaces and middleware is provided. In the flow model shown in fig. 5, in a certain actual service execution process, an application service a, an application service B, a service interface 1 set by the application service a, a service interface 2 and a service interface 3 set by the application service B, and a middleware 1 are specifically related.
Specifically, referring to fig. 5, it can be seen that when the service interface 1 of the application service a calls the service interface 2 and the service interface 3 of the application service B, respectively, the amplification ratio between the service interface 1 and the service interface 2 is determined according to the number of times the service interface 1 calls the service interface 2, and similarly, the amplification ratio between the service interface 1 and the service interface 3 is determined according to the number of times the service interface 1 calls the service interface 3, and the amplification ratio between the service interface 1 and the service interface 2 and the amplification ratio between the service interface 1 and the service interface 3 are analyzed in a summarized manner, so that the amplification ratio between the application service a and the application service B can be determined.
Likewise, from the number of times the service interface 2 of the application service B calls the middleware 1, the amplification ratio between the service interface 2 of the application service B and the middleware 1 can be determined. If other service interfaces of the application service B (such as the service interface 3) also call the middleware 1, the amplification ratio between the application service B and the middleware 1 needs to be summarized over all the service interfaces that call the middleware 1. Conversely, if the middleware 1 is not called by any other service interface of the application service B, the amplification ratio between the application service B and the middleware 1 can be determined directly from the amplification ratio between the service interface 2 and the middleware 1.
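The summarization of interface-level amplification ratios into a service-level ratio can be sketched as summing the per-interface fan-out. The patent says the interface-level ratios are "summarized" without fixing the formula, so summing is one plausible interpretation:

```python
def service_level_ratio(interface_ratios: dict) -> float:
    """Summarize interface-level amplification ratios between two services
    (or a service and a middleware) by summing the per-interface fan-out."""
    return sum(interface_ratios.values())

# Interface 1 of A calls interface 2 of B twice and interface 3 of B three
# times per upstream request, so A amplifies traffic toward B five-fold:
ab_ratio = service_level_ratio({("if1", "if2"): 2.0, ("if1", "if3"): 3.0})  # 5.0
```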
Further, with the traffic model determined based on the dependency call relationships between the application services, data analysis including data traffic analysis and service resource allocation analysis may be performed in real time. For the data traffic analysis, for example, if a version iteration of the application service A changes the amplification ratio between the service interface 1 of the application service A and the service interface 2 of the application service B, such as from 1 to 2, current-limit adjustment processing needs to be performed for the service interface 2 of the application service B, such as providing the service interface 2 of the application service B with a larger data traffic value than before the version iteration, so as to meet the actual requirement during the execution of the business processing logic.
For the service resource allocation analysis, similarly, if a version iteration occurs in the application service A and the application service A newly adds a dependency on the application service B, a dependency relationship now exists between the application service A and the application service B, and the amplification ratio between the service interface 1 of the application service A and the service interface 2 of the application service B changes accordingly. The amount of service resources to be adjusted, for example the actual amount of service resources provided for the application service A or the application service B, is then expanded (or reduced) according to the amplification ratio change condition and the dependency change condition.
In one embodiment, as shown in fig. 6, another flow model determined based on the dependency call relationships between application services is provided. Referring to fig. 6, a traffic model is formed according to the dependency call relationships between different service interfaces, where the different connection lines between the service interfaces correspond to different actual services, and call times, response times, and call proportions (i.e. amplification ratios) are recorded between the different service interfaces.
For example, referring to fig. 6, the connection lines between the service interface a, the service interface C and the service interface G indicate that the service interface a, the service interface C and the service interface G need to be invoked during the execution of a certain actual service 1, and specifically, the service interface a invokes the service interface C and the service interface C invokes the service interface G. The calling times, response time and calling proportion among the service interface A, the service interface C and the service interface G are determined according to the obtained calling resource data.
Similarly, referring to fig. 6, it can be seen that, for example, in the Actor process (i.e., the process of cooperation through the message passing method), when a service interface C, a service interface G, and a service interface I need to be invoked by a certain actual service 2, connection lines between the service interface C, the service interface G, and the service interface I correspond to the execution process of the actual service 2. The specific calling process of the actual service 2 is as follows: the Actor process calls the service interface C, the service interface C calls the service interface G, and the service interface G calls the service interface I. Similarly, the calling times, response time and calling proportion among the service interface C, the service interface G and the service interface I are determined according to the obtained calling resource data.
As can be seen from fig. 6, for example, when a service interface B, a service interface D, a service interface E, a service interface F, and a service interface H are required to be invoked by a certain actual service 3, connection lines between the service interface B, the service interface D, the service interface E, the service interface F, and the service interface H correspond to execution processes of the actual service 3. The specific calling process of the actual service 3 is as follows: the Actor process calls a service interface B, the service interface B calls a service interface D, the service interface D calls a service interface E, the service interface D calls a service interface F, and the service interface E and the service interface F all call a service interface H. Similarly, the calling times, response time and calling proportion among the service interface B, the service interface D, the service interface E, the service interface F and the service interface H are determined according to the obtained calling resource data.
As another example, referring to fig. 6, when a certain actual service 4 needs to call a service interface D, a service interface E, a service interface F, and a service interface I, connection lines between the service interface D, the service interface E, the service interface F, and the service interface I correspond to execution processes of the actual service 4. The specific calling process of the actual service 4 is as follows: the Actor process calls a service interface D, the service interface D calls a service interface E, the service interface D calls a service interface F, and the service interface E calls a service interface I. Similarly, the calling times, response time and calling proportion among the service interface D, the service interface E, the service interface F and the service interface I are determined according to the obtained calling resource data.
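A traffic model of this kind can be represented as an edge-annotated call graph. The sketch below is illustrative only; the class name, field names, and numeric values are assumptions and not part of the patent:

```python
from collections import defaultdict

class TrafficModel:
    """Call graph whose edges (caller interface -> callee interface) store,
    per actual service, the call times, response time, and call proportion."""
    def __init__(self):
        self.edges = defaultdict(dict)

    def record(self, service, caller, callee, calls, resp_ms, ratio):
        self.edges[(caller, callee)][service] = {
            "calls": calls, "resp_ms": resp_ms, "ratio": ratio}

    def path(self, service):
        # the connection lines belonging to one actual service, in insertion order
        return [edge for edge, per_service in self.edges.items()
                if service in per_service]

m = TrafficModel()
# actual service 1: service interface A calls C, C calls G
m.record("service1", "A", "C", calls=100, resp_ms=20, ratio=1.0)
m.record("service1", "C", "G", calls=150, resp_ms=35, ratio=1.5)
```

Because each edge keeps per-service annotations, one interface (such as C or G above) can participate in several actual services without the records interfering.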
Further, by using the traffic model determined based on the dependency call relationships between the application services, data analysis including data traffic analysis and service resource allocation analysis may be performed in real time. Specifically, the analysis may be performed according to the connection line corresponding to the execution process of the actual service 1 (service interface A - service interface C - service interface G) and the call times, response times, and call proportions between the service interface A, the service interface C, and the service interface G, so as to determine how the amplification ratio (i.e. call proportion), the response time, and the call times change between the different service interfaces. The data traffic value and the service resource amount between the service interfaces are then adjusted according to each change obtained by the analysis.
This adjustment may specifically be a current limiting adjustment process and a resource adjustment process, such as adjusting the data traffic value between service interfaces and adjusting the service resource amount configured for, or accessible to, the service interfaces, so as to meet different actual service requirements, avoid the problem that service requests cannot be processed or the platform crashes due to resource allocation errors, and achieve efficient operation and maintenance of the platform or the program.
In this embodiment, standard call resource data is obtained by performing standardized processing on the call resource data. Link call aggregation analysis is then performed on the standard call resource data with the same data source and the same dependency dimension in a preset aggregation processing period, so as to obtain the first call times and response times of the dependency dimensions associated with each application service; and in a preset traffic statistics period, traffic statistics processing is performed over all links for each application service, so as to obtain the second call times and amplification ratios between the links to which each application service belongs. Further, based on the first call times and response times of the dependency dimensions associated with each application service, and the second call times and amplification ratios of the links to which each application service belongs, the dependency call relationships between the application services are determined. Subsequent processing, including the determination of the dependency call relationships and the determination of the resource change data, is thus performed on unified standard call resource data, which avoids tedious conversion operations caused by non-unified formats and thereby improves the efficiency of the resource configuration processing.
In one embodiment, as shown in fig. 7, the step of performing resource configuration adjustment on each application service based on the resource change data specifically includes:
step S702, determining current performance data of each application service for different service interfaces based on the magnification ratio change data and the dependency change data.
Specifically, suppose that during the execution of certain service processing logic, the application service A newly adds a dependency on the application service B, and the updated amplification ratio after the version iteration is larger than the original amplification ratio as indicated by the amplification ratio change data; for example, the original amplification ratio between the service interface 1 of the application service A and the service interface 3 of the application service B is 1 and the updated amplification ratio is 2, or the original amplification ratio between the service interface 2 of the application service A and the service interface 3 of the application service B is 1.5 and the updated amplification ratio is 2. The current performance data of each application service for the different service interfaces can then be determined according to the dependency change between the application service A and the application service B and the amplification ratio changes between the service interfaces of the application service A and the application service B.
Step S704, basic performance data corresponding to each service interface one by one is acquired.
Specifically, basic performance data is preset for each service interface and is used for comparison with the determined current performance data to determine the performance change data corresponding to each service interface. Specifically, the performance data is measured in QPS (short for Queries Per Second, i.e. the query rate per second), and the performance change data is obtained by comparing the basic performance data with the current performance data determined from the amplification ratio change data and the dependency change data.
Step S706, for each service interface, according to the basic performance data and the current performance data, determining the performance change data corresponding to each service interface one by one.
Specifically, for each service interface and other called service interfaces, the basic performance data and the current performance data are compared and analyzed, and the performance change data corresponding to each service interface one by one is determined. For example, the business interface 1 of the application service a invokes the business interface 2 of the application service B, and when the version iteration of the application service a occurs, the performance change data corresponding to the business interface 2 is determined by acquiring basic performance data between the business interface 1 of the application service a and the business interface 2 of the application service B and using the calculated current performance data and the basic performance data of the business interface 2.
For example, suppose a dependency relationship exists between the application service A and the application service B, the service interface 1 of the application service A calls the service interface 2 of the application service B, and the basic performance data of the service interface 2 before the version iteration of the application service A is 2000. If, after the version iteration, the amplification ratio between the service interface 1 and the service interface 2 changes (for example, from 1 to 2), then according to the change of the amplification ratio between the service interface 1 and the service interface 2 and the dependency change between the application service A and the application service B, the current performance data of the application service A and the application service B for the different service interfaces is determined; for example, the current performance data of the service interface 2 is 2500, and the performance change data corresponding to the service interface 2 is therefore 500.
Further, as shown in the following table 2 (flow amplification change example table), when the amplification change analysis is performed, an upstream service A (i.e., ServiceA) and an upstream service C (i.e., ServiceC) are provided, and both call a downstream service B (i.e., ServiceB). The upstream service A sets a service interface 1 (i.e., API 1), the upstream service C sets a service interface 2 (i.e., API 2), and the downstream service B sets a service interface 3 (i.e., API 3); the service interface 1 calls the service interface 3, and the service interface 2 calls the service interface 3.
Table 2 Flow amplification change example table

  Caller            Caller QPS   Callee            Original ratio   Updated ratio
  API 1 (ServiceA)  1000         API 3 (ServiceB)  1.5              2
  API 2 (ServiceC)  500          API 3 (ServiceB)  1                1

  QPS of API 3 before the change: 2000; after the change: 2500
Referring to table 2, it can be seen that the basic performance data of the service interface 1 is 1000, the performance data of the service interface 3 before the amplification ratio change (which may be understood as the basic performance data of the service interface 3) is 2000, the original amplification ratio of the service interface 1 to the service interface 3 is 1.5, and after the version iteration of the application service A, the updated amplification ratio of the service interface 1 to the service interface 3 is 2.
It can be understood that when the data analysis is performed based on the traffic model, since the amplification ratio of the service interface 1 to the service interface 3 is changed from 1.5 to 2 after the new version iteration of the application service a, the current limiting configuration of the service interface 3 of the application service B needs to be adjusted accordingly. Specifically, referring to table 2, it is clear that the performance data of the service interface 3 before the amplification ratio is changed is 2000, and the performance data of the service interface 3 after the amplification ratio is changed is 2500 because the amplification ratio of the service interface 1 to the service interface 3 is changed from 1.5 to 2.
Similarly, referring to table 2, it can be seen that the basic performance data (i.e., the QPS before the change) of the service interface 2 of the application service C is 500. Since the amplification ratio between the service interface 2 and the service interface 3 does not change, there is no performance data change between the service interface 2 and the service interface 3, and the performance change data of the service interface of the application service B can be determined based on the basic performance data and the current performance data between the service interface 1 and the service interface 3.
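The recomputation behind table 2 can be sketched as follows; this is an illustrative Python sketch (the function name and argument layout are assumptions), using the numbers the text gives for the service interface 1 and the service interface 3:

```python
def recompute_downstream_qps(downstream_base, upstream_qps, old_ratio, new_ratio):
    """Downstream QPS = old total, minus this edge's old contribution
    (old_ratio x upstream QPS), plus its new contribution."""
    current = downstream_base - upstream_qps * old_ratio + upstream_qps * new_ratio
    return current, current - downstream_base

# API 1 -> API 3: upstream QPS 1000, ratio 1.5 -> 2, API 3 base QPS 2000
current, change = recompute_downstream_qps(2000, 1000, 1.5, 2.0)
# current 2500.0, change 500.0, matching table 2
```

Edges whose ratio does not change (such as API 2 -> API 3) contribute zero to the change term, which is why only the service interface 1 edge matters here.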
Further, as shown in the following table 3 (dependency change example table), when the dependency change analysis is performed, an upstream service a (i.e., serviceA) and a downstream service B (i.e., serviceB) are provided, the upstream service a is provided with a service interface 1 (i.e., API 1) and a service interface 2 (i.e., API 2), the downstream service B is provided with a service interface 3 (i.e., API 3) and a service interface 4 (i.e., API 4), the service interface 1 calls the service interface 3, and the service interface 2 calls the service interface 4.
Table 3 Dependency change example table

  Caller            Caller QPS   Callee            Amplification ratio   Callee QPS increase
  API 1 (ServiceA)  1000         API 3 (ServiceB)  1.5                   1500
  API 2 (ServiceA)  500          API 4 (ServiceB)  1                     500

  Total QPS increase for ServiceB: 2000
As can be seen from table 3, when data analysis is performed based on the traffic model, since the new version iteration of the application service A adds a dependency on the application service B, the service interfaces 3 and 4 of the application service B need to undergo a current limiting adjustment process, specifically increasing the data traffic values of the service interfaces 3 and 4.
Specifically, referring to table 3, the amplification ratio of the service interface 1 to the service interface 3 is 1.5 and the basic performance data of the service interface 1 is 1000, while the amplification ratio of the service interface 2 to the service interface 4 is 1 and the basic performance data of the service interface 2 is 500. Since the dependency on the application service B is newly added, the performance data of the service interface 3 needs to be increased, specifically by 1500; similarly, the performance data of the service interface 4 increases, specifically by 500; and the overall increased performance data of the application service B is 2000.
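The increase in table 3 follows from each newly added call edge contributing ratio × upstream QPS to its downstream interface. An illustrative Python sketch (the function name, tuple layout, and interface labels are assumptions):

```python
def added_dependency_qps(new_edges):
    """Each newly added call edge contributes ratio x upstream QPS to its
    downstream interface; per-interface and total increases drive the
    current limiting adjustment of the downstream service."""
    per_interface = {}
    for upstream_qps, ratio, downstream in new_edges:
        per_interface[downstream] = (
            per_interface.get(downstream, 0) + upstream_qps * ratio)
    return per_interface, sum(per_interface.values())

per_if, total = added_dependency_qps([
    (1000, 1.5, "API3"),  # service interface 1 -> service interface 3
    (500, 1.0, "API4"),   # service interface 2 -> service interface 4
])
# per_if {"API3": 1500.0, "API4": 500.0}; total 2000.0, matching table 3
```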
Step S708, according to each performance change data, corresponding resource allocation requirements are determined.
Specifically, according to the determined value of each piece of performance change data, the corresponding resource configuration requirement is determined; for example, for each service interface set by the application service, it is determined whether capacity expansion processing or capacity reduction processing is required, and what amount of service resources specifically needs to be provided or reclaimed.
Further, as shown in table 4 (resource change example table) below, an example of resource change after iteration for an upstream service a (i.e., serviceA) version is provided. Specifically, referring to table 4, the upstream service a is provided with a service interface 1 (i.e., API 1) and a service interface 2 (i.e., API 2), reference resources of the service interface 1 and the service interface 2 are 1 core-2 GB, reference resource QPS of the service interface 1 is 100, reference resource QPS of the service interface 2 is 10, the calculated current QPS of the service interface 1 is 1000, and the calculated current QPS of the service interface 2 is 200.
The current resource of the upstream service A is 30 cores-60 GB. The increased QPS of the service interface 1 is 500; since the reference resource QPS of the service interface 1 is 100 per 1 core-2 GB, an increase of 500 QPS is 5 times the reference resource QPS, and the service resources required to be added are therefore 5 times 1 core-2 GB, i.e. 5 cores-10 GB. Similarly, the reference resource QPS of the service interface 2 is 10 per 1 core-2 GB; when the increased QPS of the service interface 2 is 100, this is 10 times the reference resource QPS, and the service resources required to be added are 10 times 1 core-2 GB, i.e. 10 cores-20 GB.
It will be appreciated that, referring to table 4, by summing the service resources required to be added by the service interface 1 and the service interface 2 of the upstream service A, it can be determined that the service resources required to be added for the upstream service A are 15 cores-30 GB; that is, the resource configuration requirement for the upstream service A is an increase of 15 cores-30 GB of service resources.
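The QPS-to-resource conversion above can be sketched as follows (an illustrative Python sketch; the function name and default reference specification of 1 core-2 GB are taken from the example, everything else is an assumption):

```python
def resources_to_add(qps_increase, ref_qps, ref_cores=1, ref_mem_gb=2):
    """Scale the reference specification (ref_cores cores / ref_mem_gb GB
    per ref_qps QPS) by the QPS increase."""
    factor = qps_increase / ref_qps
    return factor * ref_cores, factor * ref_mem_gb

c1, g1 = resources_to_add(500, 100)  # API 1: 5 cores, 10 GB
c2, g2 = resources_to_add(100, 10)   # API 2: 10 cores, 20 GB
total = (c1 + c2, g1 + g2)           # 15 cores, 30 GB for ServiceA
```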
Table 4 Resource change example table

  Interface         Reference resource   Reference resource QPS   Current QPS   Increased QPS   Resources to add
  API 1 (ServiceA)  1 core-2 GB          100                      1000          500             5 cores-10 GB
  API 2 (ServiceA)  1 core-2 GB          10                       200           100             10 cores-20 GB

  Current resource of ServiceA: 30 cores-60 GB; total resources to add: 15 cores-30 GB
In step S710, the resource adjustment process is performed for each application service based on the resource allocation requirement.
Specifically, after the resource configuration requirement is determined from each piece of performance change data, the service resource amount that specifically needs to be enlarged or reduced is determined according to the resource configuration requirement, so that resource adjustment processing is performed on each application service according to the determined service resource amount to be adjusted (enlarged or reduced).
Further, for example, by summing the service resources required to be added by the service interface 1 and the service interface 2 of the upstream service A, when it is determined that the service resources required to be added for the upstream service A are 15 cores-30 GB, the resource configuration requirement for the upstream service A is an increase of 15 cores-30 GB of service resources. Then, according to the service resource amount to be increased or decreased (such as an increase of 15 cores-30 GB) and the preset container resource specification (such as 1 core-2 GB corresponding to one container resource component on the container platform), the container resource components that need to be added or removed are determined, and the container resource components configured for, or accessible to, each application service are adjusted.
For example, if the resource configuration requirement for the upstream service A is to add 15 cores-30 GB of service resources, and 1 core-2 GB corresponds to one container resource component on the container platform, then 15 container resource components specifically need to be added for the upstream service A. The container resource component may specifically be a pod (a pod is a group of one or more containers with shared storage and network resources, together with a specification for how the containers are run), in which case 15 additional pods specifically need to be provided to the upstream service A.
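Converting the resource increase into a pod count can be sketched as below; this is an illustrative Python sketch (the function name is an assumption), taking the larger of the CPU-driven and memory-driven counts so that both dimensions are covered:

```python
import math

def pods_needed(extra_cores, extra_mem_gb, pod_cores=1, pod_mem_gb=2):
    """Smallest number of container components (pods) covering both the
    CPU and memory increase, with one pod = pod_cores cores / pod_mem_gb GB."""
    return max(math.ceil(extra_cores / pod_cores),
               math.ceil(extra_mem_gb / pod_mem_gb))

pods = pods_needed(15, 30)  # 15 pods for the 15 cores-30 GB increase
```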
In this embodiment, the current performance data of each application service for the different service interfaces is determined based on the amplification ratio change data and the dependency change data, and the basic performance data corresponding to each service interface is obtained. Further, for each service interface, the performance change data corresponding to that interface is determined according to the basic performance data and the current performance data, and the corresponding resource configuration requirement is determined according to each piece of performance change data, so that fast and accurate resource adjustment processing can be performed on each application service based on the resource configuration requirement. On the basis of analyzing whether the current limiting configuration and the resource proportion between each application service and each service interface are reasonable, the current limit and the resource configuration of the application services are dynamically adjusted, achieving the effect of automatic operation and maintenance.
In one embodiment, as shown in fig. 8, the step of performing resource configuration adjustment on each application service based on the resource change data specifically includes:
step S802, based on the amplification ratio change data and the dependency change data, the current limit change data of the application service with the dependency relationship is determined.
Specifically, change variance data of the application services having a dependency relationship is determined based on the amplification ratio change data and the dependency change data. The change variance data may specifically be the variance of the amplification ratio changes between the application services having a dependency relationship.
Further, if the change variance data is larger than a preset variance threshold, a current limiting change request is triggered according to the change variance data, and the current limiting change data corresponding to the current limiting change request is determined by responding to the current limiting change request. The preset variance threshold can be set or adjusted according to different actual services, and no specific value is imposed here. The current limiting change data may be understood as data traffic change information for an application service; for example, the data traffic of the application service needs to be increased, the data traffic of the application service needs to be decreased, or the current data traffic of the application service needs to be kept unchanged.
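The variance-threshold trigger can be sketched as follows; an illustrative Python sketch in which the function name, the sample ratio changes, and the threshold value are assumptions:

```python
from statistics import pvariance

def limit_change_triggered(ratio_changes, variance_threshold):
    """The change variance data is the variance of the amplification ratio
    changes across dependent service pairs; a current limiting change
    request fires only when it exceeds the preset variance threshold."""
    return pvariance(ratio_changes) > variance_threshold

uniform = limit_change_triggered([0.5, 0.5, 0.5], 0.1)  # no spread -> False
spread = limit_change_triggered([0.1, 1.2, 0.2], 0.1)   # variance ~0.247 -> True
```

Uniform changes across all dependent pairs produce zero variance and no request, so only uneven shifts in the amplification ratios trigger the current limiting change.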
Step S804, determining the current limit change level and the current limit change requirement matched with the current limit change data.
Specifically, since an application service is generally provided with a service interface, or is connected with middleware (such as middleware playing a forwarding, checking, or storage role), the current limiting change data may correspond to different levels, such as the service interface level, the application level, the middleware level, the parameter level, or the user level. Because the current limiting change levels and the current limiting change requirements differ between levels, the actual current limiting change level and current limiting change requirement matching the current limiting change data need to be determined based on the current limiting change data of the application services having a dependency relationship.
After the current limiting change level is determined, which specifically belongs to the interface level, the application level, or the middleware level, the current limiting adjustment process (specifically, increasing or decreasing the data traffic) is performed on the application service at that level; for example, at the interface level for a current limiting change of the interface level. The specific adjustment degree of the data traffic corresponds to the current limiting change requirement; for example, the current limiting change requirement may be to increase the data traffic to 120% of the current value, or to decrease it to 80% of the current value, and the specific degree can be determined according to the actual current limiting change requirement.
Step S806, according to the current limit change requirement, current limit adjustment processing is performed on the corresponding current limit change level for each application service.
Specifically, after determining the current limiting change level matched with the current limiting change data and the current limiting change requirement, determining the specific adjustment degree of the data flow according to the current limiting change requirement, and further performing current limiting adjustment processing on the application service on the determined current limiting change level.
For example, if the current limit change level is an interface level and the current limit change requirement is to increase the data traffic to 120%, the current limit adjustment process is performed at the interface level, that is, for different service interfaces set by the application service, according to the current limit change requirement (that is, to increase the data traffic to 120%), so as to adjust the data traffic value of the service interface set by the application service to 120% of the current size.
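Applying such a requirement can be sketched as follows; an illustrative Python sketch in which the function name, the (level, target) keying of the limit table, and the numeric limit are assumptions:

```python
def apply_limit_change(limits, level, target, percent):
    """Scale the data traffic limit of one entity at the matched current
    limiting change level to `percent` percent of its current value."""
    updated = dict(limits)  # leave the original limit table untouched
    updated[(level, target)] = limits[(level, target)] * percent / 100
    return updated

limits = {("interface", "API2"): 2000}
new_limits = apply_limit_change(limits, "interface", "API2", 120)
# ("interface", "API2") -> 2400.0
```

Keying the table by (level, target) lets the same routine serve the interface, application, and middleware levels that the text enumerates.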
In this embodiment, the current limiting change data of the application services having a dependency relationship is determined based on the amplification ratio change data and the dependency change data, the current limiting change level and the current limiting change requirement matching the current limiting change data are then determined, and accurate and fast current limiting adjustment processing is performed on each application service at the corresponding current limiting change level according to the current limiting change requirement. On the basis of analyzing whether the current limiting configuration and the resource proportion between each application service and each service interface are reasonable, the current limit and the resource configuration of the application services are dynamically adjusted, achieving the effect of automatic operation and maintenance.
In one embodiment, as shown in fig. 9, a resource allocation processing method is provided, which specifically includes the following steps:
step S901, collecting call resource data between links to which each application service to be configured and processed belongs.
Step S902, obtaining the corresponding standardized flow protocol according to the service fields corresponding to different services and the basic fields corresponding to the standardized processing.
Step S903, a streaming processing framework matched with the standardized flow protocol is called, in a first processing stage, call resource data is subjected to rebalancing distribution processing, and the call resource data is distributed to a processing operator corresponding to a second processing stage.
In the second processing stage, the received call resource data is subjected to decentralized processing based on the processing operator, so as to obtain a resource data set corresponding to the call resource data.
In step S905, in the third processing stage, according to the standardized traffic protocol, each original field in the resource data set is mapped into a standardized traffic protocol field corresponding to the standardized traffic protocol according to the data source in sequence, and aggregation processing is performed on each standardized traffic protocol field, so as to obtain corresponding standard call resource data.
Step S906, in a preset aggregation processing period, invoking resource data according to the standard with the same data source and the same dependency dimension, and performing link invocation aggregation analysis to obtain first invocation times and response time of the dependency dimension associated with each application service.
In step S907, in the preset traffic statistics period, traffic statistics processing of all links is performed for each application service, so as to obtain the second call times and the amplification ratio between links to which each application service belongs.
Step S908 determines a dependency call relationship between application services based on the first call times and response times of the dependency dimensions associated with the application services, the second call times and amplification ratios between links to which the application services belong.
Step S909, a resource configuration request is received, and an application service to be configured is determined according to the resource configuration request.
Step S910, performing link call aggregation analysis and flow statistics based on the call resource data, and determining a dependency call relationship between application services.
In step S911, each application service having a dependency relationship and dependency change data corresponding to each application service are determined based on the dependency call relationship.
Step S912, determining an original amplification ratio and an updated amplification ratio of each application service having a dependency relationship according to the actual call data corresponding to each application service having a dependency relationship, and determining amplification ratio change data of each application service having a dependency relationship based on the original amplification ratio and the updated amplification ratio.
Step S913, based on the amplification ratio change data and the dependent change data, determining the current performance data of each application service for different service interfaces.
Step S914, obtaining the basic performance data corresponding to each service interface, and for each service interface, determining the performance change data corresponding to that interface according to the basic performance data and the current performance data.
Step S915, according to each performance change data, determining the corresponding resource allocation requirement, and performing resource adjustment processing on each application service based on the resource allocation requirement.
Step S916, based on the amplification ratio change data and the dependency change data, change variance data of the application service having the dependency relationship is determined.
Step S917, if the variance data is greater than the preset variance threshold, triggering a current limit change request according to the variance data, and determining the current limit change data corresponding to the current limit change request in response to the current limit change request.
Step S918, determining a current limit change level and a current limit change requirement matched with the current limit change data, and performing a current limit adjustment process on the corresponding current limit change level for each application service according to the current limit change requirement.
In this resource allocation processing method, the to-be-allocated application service is determined according to the resource allocation request, and the call resource data among the links to which that application service belongs is collected, so that link-call aggregation analysis and traffic statistics can be performed on the call resource data and the dependency call relationships among the application services determined. The resource change data of each application service is then determined from the dependency call relationships and the actual call data among the application services, so that resource configuration can be adjusted for each application service based on that resource change data. Resources are thus allocated in real time without manually tracking the data-resource usage of the platform or program or the distribution of service-processing requests: the service resources and data traffic of the application platform or program are allocated reasonably, failures to process service requests or platform crashes caused by resource allocation errors are avoided, and efficient operation and maintenance of the platform or program is achieved.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to this order and may be performed in other orders. Moreover, at least some of the steps in those flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; these sub-steps or stages are not necessarily executed sequentially either, but may be performed in turn or alternately with at least part of the other steps or of their sub-steps or stages.
Based on the same inventive concept, an embodiment of the application also provides a resource allocation processing apparatus for implementing the resource allocation processing method described above. The implementation of the solution provided by the apparatus is similar to that described for the method, so the specific limitations in the one or more apparatus embodiments below can refer to the limitations of the resource allocation processing method above and are not repeated here.
In one embodiment, as shown in fig. 10, there is provided a resource configuration processing apparatus including: a resource configuration request receiving module 1002, a dependency call relationship obtaining module 1004, a resource change data determining module 1006, and a resource configuration adjusting module 1008, wherein:
the resource configuration request receiving module 1002 is configured to receive a resource configuration request, and determine an application service to be configured according to the resource configuration request.
The dependency call relationship obtaining module 1004 is configured to collect call resource data among the links to which the to-be-configured application service belongs, perform link-call aggregation analysis and traffic statistics based on the call resource data, and determine the dependency call relationships among the application services.
The resource change data determining module 1006 is configured to determine resource change data of each application service according to the dependency call relationship and actual call data between each application service.
The resource allocation adjustment module 1008 is configured to perform resource allocation adjustment on each application service based on the resource change data.
In this resource allocation processing apparatus, the resource allocation request is received, the to-be-allocated application service is determined according to that request, and the call resource data among the links to which that application service belongs is collected, so that link-call aggregation analysis and traffic statistics can be performed on the call resource data and the dependency call relationships among the application services determined. The resource change data of each application service is then determined from the dependency call relationships and the actual call data among the application services, so that resource configuration can be adjusted for each application service based on that resource change data. Resources are thus allocated in real time without manually tracking the data-resource usage of the platform or program or the distribution of service-processing requests: the service resources and data traffic of the application platform or program are allocated reasonably, failures to process service requests or platform crashes caused by resource allocation errors are avoided, and efficient operation and maintenance of the platform or program is achieved.
In one embodiment, the dependency call relationship obtaining module is further configured to:
perform standardization processing on the call resource data to obtain standard call resource data; within a preset aggregation processing period, perform link-call aggregation analysis on the standard call resource data having the same data source and the same dependency dimension to obtain the first call count and the response time of the dependency dimension associated with each application service; within a preset traffic statistics period, perform traffic statistics processing over all links for each application service to obtain the second call count and the amplification ratio among the links to which each application service belongs; and determine the dependency call relationships among the application services based on the first call counts and response times of the dependency dimensions associated with the application services and the second call counts and amplification ratios among the links to which the application services belong.
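A minimal sketch of the aggregation step above, assuming hypothetical record fields (`source`, `dependency`, `rt_ms`): it groups standard call resource data by data source and dependency dimension and derives the first call count and average response time.

```python
from collections import defaultdict

# Hypothetical standard call resource records; all field names are assumptions.
records = [
    {"source": "biz-a", "dependency": ("svc-order", "svc-pay"), "rt_ms": 12},
    {"source": "biz-a", "dependency": ("svc-order", "svc-pay"), "rt_ms": 18},
    {"source": "biz-a", "dependency": ("svc-order", "svc-stock"), "rt_ms": 30},
]

def aggregate_by_dependency(records):
    """First call count and average response time per (data source, dependency dimension)."""
    grouped = defaultdict(list)
    for r in records:
        grouped[(r["source"], r["dependency"])].append(r["rt_ms"])
    return {key: {"calls": len(rts), "avg_rt_ms": sum(rts) / len(rts)}
            for key, rts in grouped.items()}

agg = aggregate_by_dependency(records)
```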
In one embodiment, the resource change data determining module is further configured to: determine, based on the dependency call relationship, each application service having a dependency relationship and the dependency change data corresponding to each such application service; determine the original amplification ratio and the updated amplification ratio of each application service having a dependency relationship according to the actual call data of those application services; and determine the amplification ratio change data of each such application service based on the original amplification ratio and the updated amplification ratio.
In one embodiment, the resource configuration adjustment module is further configured to: determine the current performance data of each application service for its different service interfaces based on the amplification ratio change data and the dependency change data; obtain the basic performance data corresponding to each service interface; determine, for each service interface, the performance change data of that interface according to its basic performance data and current performance data; determine the corresponding resource allocation requirements according to the performance change data; and perform resource adjustment processing on each application service based on those resource allocation requirements.
In one embodiment, the resource configuration adjustment module is further configured to: determine rate-limit change data of the application services having the dependency relationship based on the amplification ratio change data and the dependency change data; determine a rate-limit change level and a rate-limit change requirement matched with the rate-limit change data; and perform rate-limit adjustment processing on each application service at the corresponding rate-limit change level according to the rate-limit change requirement.
In one embodiment, the resource configuration adjustment module is further configured to: determine change variance data of the application services having the dependency relationship based on the amplification ratio change data and the dependency change data; if the change variance data is greater than a preset variance threshold, trigger a rate-limit change request according to the change variance data; and, in response to the rate-limit change request, determine the rate-limit change data corresponding to it.
In one embodiment, the standardization processing module is further configured to: obtain a corresponding standardized traffic protocol according to the service fields of the different services and the basic fields used in standardization processing; and map and standardize each original field in the call resource data using the standardized traffic protocol to obtain the standard call resource data.
In one embodiment, the standardization processing module is further configured to: call a streaming processing framework matched with the standardized traffic protocol, perform rebalancing distribution processing on the call resource data in a first processing stage, and distribute the call resource data to the processing operators of a second processing stage; in the second processing stage, perform scattering processing on the received call resource data based on the processing operators to obtain the resource data set corresponding to the call resource data; and, in a third processing stage, map each original field in the resource data set, in order by data source, onto the corresponding standardized traffic-protocol field defined by the standardized traffic protocol, and aggregate the standardized traffic-protocol fields to obtain the corresponding standard call resource data.
The modules of the above resource allocation processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, a processor of the computer device, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, as shown in fig. 11, there is provided a resource allocation processing system, which, referring to fig. 11, specifically includes:
P1 traffic model layer: first, the call resource data among the links to which each application service belongs is collected (for example, source data pulled on the basis of an open link-tracing protocol). Because the specific parameter configuration of different application services differs, the collected call resource data is not uniform in format, so it must be cleaned and standardized into standard call resource data before subsequent processing can be carried out on a unified basis. Cleaning and standardizing the call resource data specifically comprises the following processing stages:
S1, first processing stage (i.e., the source stage): a streaming processing framework matched with the standardized traffic protocol is called, rebalancing distribution processing is performed on the call resource data in the first processing stage, and the call resource data is distributed to the processing operators of the second processing stage.
Specifically, the streaming framework matched with the standardized traffic protocol may be the Flink streaming framework (i.e., Apache Flink, a distributed open-source computing framework for data stream processing and batch data processing that supports both stream-processing and batch-processing application types). By calling the Flink streaming framework, in the source stage the original call resource data is read from a data queue (such as a Kafka queue) and distributed, in a rebalancing manner, to the processing operators of the second processing stage (i.e., the map stage).
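The rebalancing distribution can be illustrated as a simple round-robin scatter; this is a sketch only, since Flink's actual `rebalance()` operates on partitioned streams rather than in-memory lists:

```python
from itertools import cycle

def rebalance(records, operators):
    """Distribute records round-robin across the downstream map operators."""
    buckets = {op: [] for op in operators}
    for op, record in zip(cycle(operators), records):
        buckets[op].append(record)
    return buckets

buckets = rebalance(list(range(5)), ["map-0", "map-1"])
```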
S2, second processing stage (i.e., the map stage): in the second processing stage, the received call resource data is scattered based on the processing operators, and the resource data set corresponding to the call resource data is obtained.
Specifically, in the map stage, the call resource data is broken down from the segment level to the span level based on the processing operators, thereby realizing the scattering processing, and a data set is formed from the span-level data to obtain the resource data set corresponding to the call resource data. The call resource data corresponds to a complete link (i.e., a trace). A trace is composed of several segments; a segment can be understood as the trace fragment of a request within one process, and a span as the trace fragment of a request within a component or a piece of processing logic of that process, so one segment-level record can be broken into several span-level records.
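The segment-to-span scattering can be sketched as follows; the trace structure and field names are assumptions for illustration:

```python
# Hypothetical trace: a trace is composed of segments, each holding spans.
trace = {
    "trace_id": "t-1",
    "segments": [
        {"segment_id": "s-1", "spans": [{"span_id": 0, "op": "http-in"},
                                        {"span_id": 1, "op": "db-query"}]},
        {"segment_id": "s-2", "spans": [{"span_id": 0, "op": "rpc-out"}]},
    ],
}

def scatter_to_spans(trace):
    """Break segment-level data into flat span-level records (map stage)."""
    return [{"trace_id": trace["trace_id"], "segment_id": seg["segment_id"], **span}
            for seg in trace["segments"] for span in seg["spans"]]

span_records = scatter_to_spans(trace)
```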
S3, third processing stage (i.e., the window stage): in the third processing stage, each original field in the resource data set is mapped, in order by data source and according to the standardized traffic protocol, onto the corresponding standardized traffic-protocol field, and the standardized traffic-protocol fields are aggregated to obtain the corresponding standard call resource data.
Specifically, in the window stage, hash routing is performed according to the link number so that the resource data sets of the same link are processed serially; each original field in a resource data set can then be mapped, in order by data source and according to the standardized traffic protocol, onto the corresponding standardized traffic-protocol field.
Further, after the mapping of each original field in the resource data set, the standardized traffic-protocol fields are aggregated based on a window function to obtain the corresponding standard call resource data, which is sent on to the processing operators of the fourth processing stage.
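The hash routing and field mapping of the window stage can be sketched as below; the field map and the parallelism are assumptions, not the patent's actual protocol:

```python
# Hypothetical mapping from original fields to standardized traffic-protocol fields.
FIELD_MAP = {"svc": "service_name", "op": "operation", "rt": "response_time_ms"}

def route_slot(trace_id, parallelism=4):
    """Hash-route records so that all records of one trace reach the same serial slot."""
    return hash(trace_id) % parallelism

def to_standard(record):
    """Map known original fields onto standardized traffic-protocol fields, dropping the rest."""
    return {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}

std = to_standard({"svc": "svc-order", "op": "create", "rt": 25, "extra": 1})
```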
S4, fourth processing stage (i.e., the sink stage): in the fourth processing stage, the standard call resource data obtained in the third processing stage is received, write-request construction processing is performed on it using the processing operators, and the standard call resource data is stored into a preset associated database via the constructed write requests.
Specifically, in the sink stage, the standard call resource data obtained in the third processing stage is received, a processing operator is called to construct the write request corresponding to that data, and the standard call resource data is stored into the associated database by responding to the write request. The associated database is not limited to any particular database type; it may be any of several types of databases providing storage, access, and similar functions. For example, the associated database may be ClickHouse; after the standard call resource data is stored into ClickHouse, a corresponding node wide table and edge wide table are formed.
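A sketch of constructing one batched write request in the sink stage; it builds a plain SQL string for illustration and is not actual ClickHouse client code, and the table and column names are assumptions:

```python
def build_insert(table, rows):
    """Construct one batched INSERT request for the standard call resource rows."""
    columns = sorted(rows[0])
    values = ", ".join(
        "(" + ", ".join(repr(row[col]) for col in columns) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({', '.join(columns)}) VALUES {values}"

sql = build_insert("edge_wide", [{"service": "svc-order", "calls": 2}])
```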
In one embodiment, after the standard call resource data is obtained and stored in a data warehouse (such as a ClickHouse database), link-call aggregation analysis and traffic statistics are performed on it to determine the dependency call relationships among the application services. Once these dependency call relationships are obtained, a traffic model determined from them is formed, and different application services are analyzed automatically based on the traffic model to determine the corresponding rate-limit adjustment and resource adjustment processing modes, for example the specific data-traffic value to be increased (or decreased), or the increase (or decrease) in the amount of service resources allowed to be accessed or allocated. The generated traffic models are stored in a model library so that the required traffic model can be called from the model library in subsequent processing.
Specifically, within a preset aggregation processing period, link-call aggregation analysis is performed on the standard call resource data having the same data source and the same dependency dimension, and the first call count and response time of the dependency dimension associated with each application service are obtained. Having the same data source can be understood as belonging to the same actual service, and a dependency dimension can be understood as a dependency relationship among application services, service interfaces, and middleware: for example, dependencies between different application services, between an application service and a service interface, between an application service and middleware, between a service interface and middleware, or between different service interfaces.
Further, within a preset traffic statistics period, traffic statistics processing over all links is performed for each application service, and the second call count and the amplification ratio among the links to which each application service belongs are obtained. Traffic statistics processing can be understood as performing data-traffic statistics over the complete link composed of the call links to which the application service belongs, for example the specific data-traffic values between the application services in different time periods during the execution of different processing logic on the complete link; the traffic statistics of the complete link are then obtained by aggregating the data-traffic values of each application service or each call link.
While the data-traffic value of each application service or call link is counted, the corresponding number of calls of each application service or call link is also counted, and the amplification ratio among the links to which each application service belongs is determined from these call counts. The dependency call relationships among the application services are then determined based on the first call counts and response times of the dependency dimensions associated with the application services and the second call counts and amplification ratios among the links to which they belong, and the traffic model determined from these dependency call relationships is formed.
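The amplification ratio between links can be derived from the call counts; the definition below (downstream calls triggered per upstream call) is an assumption consistent with the description, not a formula the patent states:

```python
def amplification_ratio(upstream_calls, downstream_calls):
    """Downstream calls triggered per upstream call on a link (assumed definition)."""
    return downstream_calls / upstream_calls

# 100 entry calls fanning out to 350 downstream calls give a ratio of 3.5.
ratio = amplification_ratio(100, 350)
```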
P2 model application layer: in the model application layer, the traffic model is used to perform change analysis, including amplification-ratio change analysis and dependency change analysis, and the results of both analyses are fed back to the rate-limit change management module and the resource change management module.
Specifically, the amplification-ratio change analysis module performs fully automatic analysis based on the latest iteratively updated traffic model and outputs change results in the business line-service-interface (or middleware) dimension. The dependency change analysis module likewise performs fully automatic analysis based on the latest iteratively updated traffic model and outputs dependency change results in the business line-service dimension.
Further, the degree of rate-limit adjustment for an application service is determined by the rate-limit change management module. Specifically, after obtaining the amplification-ratio and dependency change analysis results, the rate-limit change management module calculates the variance of the amplification-ratio change between every pair of services; when that variance exceeds the preset variance threshold, it constructs an automatic adjustment request, calls the rate-limiting platform among the external components, and executes the corresponding rate-limit adjustment processing according to the determined degree of rate-limit adjustment for the application service.
Similarly, the resource change management module determines the degree of resource adjustment for an application service. Specifically, it determines the performance change data of the different service interfaces of the different application services from the amplification-ratio and dependency change analyses, and calculates the amount of service resources that needs to be scaled out (or scaled in) according to the baseline performance QPS of each service interface of the application service.
In one embodiment, the processing procedure of the resource change management module specifically comprises: first, determining the current performance data of each application service for its different service interfaces based on the amplification-ratio change data and the dependency change data, and obtaining the basic performance data corresponding to each service interface. Second, for each service interface and each other called service interface, comparing the basic performance data with the current performance data to determine the performance change data of that interface. Then, according to the specific value of the performance change data, the corresponding resource allocation requirement is determined: for example, for each service interface provided by the application service, whether scale-out or scale-in processing is needed and the specific amount of service resources to be added or removed.
After the resource allocation requirement for each item of performance change data is determined, the specific amount of service resources to be added or removed is derived from that requirement, and the service resource amount is adjusted accordingly, thereby performing resource adjustment processing on each application service.
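The scale-out/scale-in amount can be sketched from the baseline per-replica QPS; the function name and all numbers are illustrative assumptions:

```python
import math

def replica_delta(current_qps, baseline_qps_per_replica, current_replicas):
    """Replicas to add (positive) or remove (negative) to absorb the current QPS."""
    target = math.ceil(current_qps / baseline_qps_per_replica)
    return target - current_replicas

scale_out = replica_delta(900, 200, 3)   # load grew: add replicas
scale_in = replica_delta(150, 200, 3)    # load shrank: remove replicas
```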
In one embodiment, the processing procedure of the rate-limit change management module specifically comprises: first, determining the change variance data of the application services having the dependency relationship based on the amplification-ratio change data and the dependency change data; the change variance data may specifically be the variance of the amplification-ratio change between application services having a dependency relationship. Second, if the change variance data is greater than the preset variance threshold, a rate-limit change request is triggered according to it, and the rate-limit change data corresponding to the request is determined by responding to it. The preset variance threshold can be set or adjusted for different actual services; no specific value is prescribed. Further, based on the rate-limit change data of the application services having the dependency relationship, the actual rate-limit change level and rate-limit change requirement matched with that data are determined. Once the rate-limit change level, which specifically belongs to the interface level, application level, or middleware level, is determined, for example the interface level, the rate-limit adjustment processing (specifically, increasing or decreasing the data traffic) is performed on the application service at that level.
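Choosing the change level from the magnitude of the change can be sketched as below; the thresholds and the ordering of levels are pure assumptions, since the description does not prescribe them:

```python
def rate_limit_level(change_magnitude):
    """Pick the interface, application, or middleware level from the change magnitude (assumed thresholds)."""
    if change_magnitude >= 2.0:
        return "application"
    if change_magnitude >= 1.0:
        return "interface"
    return "middleware"

levels = [rate_limit_level(m) for m in (2.5, 1.2, 0.3)]
```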
P3 external components: the external components specifically comprise a rate-limiting platform and a container platform. The rate-limiting platform is used to determine the actual rate-limit change level and rate-limit change requirement matched with the rate-limit change data, for example determining whether the rate-limit change level belongs to the interface level, the application level, or the middleware level, and determining the specific degree of data-traffic adjustment according to the actual rate-limit change requirement.
For example, when the data traffic is to be increased, rate-limit adjustment processing is performed on the application service at the interface level, increasing the data-traffic values of the application service's different service interfaces. Similarly, the container platform is configured to determine, from the resource allocation requirement derived from the performance change data, the service resources to be scaled out (or in) and the amount of service resources involved, so as to perform resource adjustment processing on each application service according to the determined amount to be adjusted.
In this resource allocation processing system, the to-be-allocated application service is determined according to the resource allocation request, the call resource data among the links to which that application service belongs is collected, link-call aggregation analysis and traffic statistics are performed on that data, and the dependency call relationships among the application services are determined. The resource change data of each application service is then determined from the dependency call relationships and the actual call data among the application services, so that resource configuration can be adjusted for each application service based on that resource change data. Resources are thus allocated in real time without manually tracking the data-resource usage of the platform or program or the distribution of service-processing requests: the service resources and data traffic of the application platform or program are allocated reasonably, failures to process service requests or platform crashes caused by resource allocation errors are avoided, and efficient operation and maintenance of the platform or program is achieved.
In one embodiment, a computer device is provided, which may be a server and whose internal structure may be as shown in fig. 12. The computer device includes a processor, a memory, an input/output (I/O) interface, and a communication interface. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for running the operating system and the computer programs in the non-volatile storage medium. The database of the computer device is used to store resource allocation requests, to-be-allocated application services, the dependency call relationships among the application services, call resource data, actual call data, resource change data, and the like. The input/output interface of the computer device is used for exchanging information between the processor and external devices. The communication interface of the computer device is used for communicating with an external terminal over a network connection. The computer program, when executed by a processor, implements a resource allocation processing method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 12 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that all or part of the above methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may include the steps of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database; the non-relational database may include, but is not limited to, a blockchain-based distributed database and the like. The processor referred to in the embodiments provided in the present application may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, or a data processing logic unit based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered within the scope of this description.
The foregoing embodiments represent only a few implementations of the present application, and while they are described in detail, they should not be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A resource allocation processing method, the method comprising:
receiving a resource allocation request, and determining, according to the resource allocation request, the application services to be configured;
collecting call resource data between the links to which the application services to be configured belong, performing link-call aggregation analysis and traffic statistics based on the call resource data, and determining a dependency call relationship between the application services;
determining resource change data of each application service according to the dependency call relationship and actual call data between the application services; and
performing resource configuration adjustment on each application service based on the resource change data.
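The overall flow of claim 1, aggregating inter-service call data into a dependency graph and then deriving a per-service resource delta from actual call volume, can be sketched as follows. This is an illustrative sketch only, not part of the claims; all function and variable names, and the simple linear capacity model, are assumptions:

```python
from collections import defaultdict

def build_dependency_graph(call_records):
    """Aggregate raw (caller, callee, count) call records into a
    dependency-call graph: service -> {downstream service: call count}."""
    graph = defaultdict(lambda: defaultdict(int))
    for caller, callee, count in call_records:
        graph[caller][callee] += count
    return {svc: dict(deps) for svc, deps in graph.items()}

def resource_change(graph, actual_calls, capacity_per_call):
    """Estimate a resource delta per service by comparing observed call
    volume against the volume implied by the dependency graph."""
    changes = {}
    for svc, deps in graph.items():
        expected = sum(deps.values())
        observed = actual_calls.get(svc, expected)
        changes[svc] = (observed - expected) * capacity_per_call
    return changes

records = [("gateway", "orders", 100), ("orders", "inventory", 300)]
graph = build_dependency_graph(records)
changes = resource_change(graph, {"gateway": 150, "orders": 300}, 0.01)
```

Here "gateway" saw 50 more calls than the graph implied, so its allocation grows; "orders" matched expectations and is left unchanged.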
2. The method of claim 1, wherein performing link-call aggregation analysis and traffic statistics based on the call resource data between the links to which the application services belong, and determining the dependency call relationship between the application services, comprises:
performing standardization processing on the call resource data to obtain standard call resource data;
within a preset aggregation processing period, performing link-call aggregation analysis on the standard call resource data having the same data source and the same dependency dimension, to obtain a first call count and a response time for the dependency dimension associated with each application service;
within a preset traffic statistics period, performing traffic statistics over all links for each application service, to obtain a second call count and an amplification ratio between the links to which each application service belongs; and
determining the dependency call relationship between the application services based on the first call count and the response time of the dependency dimension associated with each application service, and the second call count and the amplification ratio between the links to which each application service belongs.
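The aggregation step of claim 2, grouping standardized records of one period by data source and dependency dimension, then computing a call count, a mean response time, and an inter-link amplification ratio, might look like this minimal sketch. The record layout and all names are assumptions, not part of the claims:

```python
from collections import defaultdict

def aggregate_period(records):
    """Group standardized call records of one aggregation period by
    (data source, dependency dimension); return the call count and
    the mean response time of each group."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["source"], r["dimension"])].append(r["latency_ms"])
    return {key: {"calls": len(lat), "avg_latency_ms": sum(lat) / len(lat)}
            for key, lat in groups.items()}

def amplification_ratio(upstream_calls, downstream_calls):
    """Amplification ratio between two adjacent links: the number of
    downstream calls generated per upstream call."""
    return downstream_calls / upstream_calls

records = [
    {"source": "trace", "dimension": "orders->inventory", "latency_ms": 10},
    {"source": "trace", "dimension": "orders->inventory", "latency_ms": 30},
]
stats = aggregate_period(records)
```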
3. The method of claim 1 or 2, wherein the resource change data comprises amplification ratio change data and dependency change data, and wherein determining the resource change data of each application service according to the dependency call relationship and the actual call data between the application services comprises:
determining, based on the dependency call relationship, the application services having a dependency relationship and the dependency change data corresponding to each application service;
determining an original amplification ratio and an updated amplification ratio of each application service having a dependency relationship according to the actual call data corresponding to each such application service; and
determining the amplification ratio change data of each application service having a dependency relationship based on the original amplification ratio and the updated amplification ratio.
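For claim 3, the amplification-ratio change of a dependent service pair reduces to comparing the ratio in two observation windows. A sketch, with illustrative numbers; nothing here is part of the claims:

```python
def amplification_change(orig_up, orig_down, new_up, new_down):
    """Compare the amplification ratio of a caller->callee pair between
    an original and an updated observation window."""
    original = orig_down / orig_up   # original amplification ratio
    updated = new_down / new_up      # updated amplification ratio
    return {"original": original, "updated": updated,
            "delta": updated - original}

# 100 upstream calls originally fanned out to 300 downstream calls;
# in the updated window, 120 fan out to 480.
change = amplification_change(100, 300, 120, 480)
```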
4. The method of claim 3, wherein performing resource configuration adjustment on each application service based on the resource change data comprises:
determining current performance data of each application service for different service interfaces based on the amplification ratio change data and the dependency change data;
acquiring baseline performance data corresponding to each service interface;
for each service interface, determining performance change data corresponding to that service interface according to the baseline performance data and the current performance data;
determining a corresponding resource allocation requirement according to each item of performance change data; and
performing resource adjustment processing on each application service based on the resource allocation requirements.
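One plausible way to turn claim 4's per-interface performance change into a resource allocation requirement is to scale replica counts with the load growth relative to baseline. The claims do not specify a formula; the proportional-scaling heuristic, the headroom factor, and all names below are assumptions:

```python
import math

def required_replicas(baseline_qps, current_qps, current_replicas,
                      headroom=0.2):
    """Scale the replica count of a service interface in proportion to
    its load change versus baseline, keeping a fixed headroom on top."""
    growth = current_qps / baseline_qps
    return max(1, math.ceil(current_replicas * growth * (1 + headroom)))
```

For example, an interface whose traffic doubled from its baseline would grow from 3 replicas to 8 under a 20% headroom, while one whose traffic collapsed would shrink but never below a single replica.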
5. The method of claim 3, wherein performing resource configuration adjustment on each application service based on the resource change data comprises:
determining rate-limiting change data of the application services having a dependency relationship based on the amplification ratio change data and the dependency change data;
determining a rate-limiting change level and a rate-limiting change requirement matching the rate-limiting change data; and
performing rate-limiting adjustment processing on each application service at the corresponding rate-limiting change level according to the rate-limiting change requirement.
6. The method of claim 5, wherein determining the rate-limiting change data of the application services having a dependency relationship based on the amplification ratio change data and the dependency change data comprises:
determining change variance data of the application services having a dependency relationship based on the amplification ratio change data and the dependency change data;
if the change variance data is greater than a preset variance threshold, triggering a rate-limiting change request according to the change variance data; and
in response to the rate-limiting change request, determining the rate-limiting change data corresponding to the rate-limiting change request.
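The variance-threshold trigger described in claim 6 can be sketched directly: compute the variance of recent change data and raise a rate-limit change request only when it exceeds the preset threshold. The function name, the sample values, and the choice of population variance are assumptions for illustration:

```python
from statistics import pvariance

def check_limit_trigger(change_samples, variance_threshold):
    """Compute the population variance of recent change data; if it
    exceeds the preset threshold, signal a rate-limit change request."""
    variance = pvariance(change_samples)
    return {"triggered": variance > variance_threshold,
            "variance": variance}

# A burst in the third sample pushes the variance over the threshold.
result = check_limit_trigger([1.0, 1.1, 3.0], variance_threshold=0.5)
```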
7. The method of claim 2, wherein performing standardization processing on the call resource data to obtain the standard call resource data comprises:
obtaining a corresponding standardized flow protocol according to the service fields corresponding to different services and the base fields used in standardization processing; and
performing mapping processing and standardization processing on each original field in the call resource data by using the standardized flow protocol, to obtain the standard call resource data.
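Claim 7's mapping of raw, per-service fields onto a standardized flow protocol amounts to renaming service-specific fields and filling in any missing base fields. A minimal sketch; all field names and defaults are assumptions:

```python
def standardize_record(raw, field_map, base_fields):
    """Rename raw fields via field_map, then fill any base fields
    required by the standardized flow protocol with defaults."""
    std = {field_map.get(k, k): v for k, v in raw.items()}
    for field, default in base_fields.items():
        std.setdefault(field, default)
    return std

raw = {"svc": "orders", "cost": 12}
std = standardize_record(
    raw,
    field_map={"svc": "service_name", "cost": "latency_ms"},
    base_fields={"service_name": "", "latency_ms": 0, "status": "ok"},
)
```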
8. A resource allocation processing apparatus, the apparatus comprising:
a resource allocation request receiving module, configured to receive a resource allocation request and determine, according to the resource allocation request, the application services to be configured;
a dependency call relationship acquisition module, configured to collect call resource data between the links to which the application services to be configured belong, perform link-call aggregation analysis and traffic statistics based on the call resource data, and determine a dependency call relationship between the application services;
a resource change data determining module, configured to determine resource change data of each application service according to the dependency call relationship and actual call data between the application services; and
a resource configuration adjustment module, configured to perform resource configuration adjustment on each application service based on the resource change data.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202211455669.0A 2022-11-21 2022-11-21 Resource allocation processing method, device, computer equipment and storage medium Pending CN116980430A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211455669.0A CN116980430A (en) 2022-11-21 2022-11-21 Resource allocation processing method, device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116980430A true CN116980430A (en) 2023-10-31

Family

ID=88477248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211455669.0A Pending CN116980430A (en) 2022-11-21 2022-11-21 Resource allocation processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116980430A (en)


Legal Events

Date Code Title Description
PB01 Publication