CN118138632A - Extension processing method and system of micro-service architecture

Extension processing method and system of micro-service architecture

Info

Publication number
CN118138632A
CN118138632A
Authority
CN
China
Prior art keywords
service
micro
performance
bottleneck
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410551644.3A
Other languages
Chinese (zh)
Other versions
CN118138632B (en)
Inventor
崔磊
李华军
徐海涛
杜万波
郑怀国
李春生
赵树春
郑康乐
范振兴
王炳成
尹志伟
魏玉婷
杨平
王家兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaneng International Power Co., Ltd. Henan Clean Energy Branch
Huaneng Information Technology Co Ltd
Original Assignee
Huaneng International Power Co., Ltd. Henan Clean Energy Branch
Huaneng Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaneng International Power Co., Ltd. Henan Clean Energy Branch and Huaneng Information Technology Co., Ltd.
Priority to CN202410551644.3A priority Critical patent/CN118138632B/en
Publication of CN118138632A publication Critical patent/CN118138632A/en
Application granted granted Critical
Publication of CN118138632B publication Critical patent/CN118138632B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/51: Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14: Network analysis or design
    • H04L41/142: Network analysis or design using statistical or mathematical methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Algebra (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Pure & Applied Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses an expansion processing method and system for a micro-service architecture, relating to the technical field of data processing. The method comprises: deploying a monitoring tool to collect performance data of the system so as to determine a system bottleneck, and determining the service to be expanded according to a business flow chart and the system bottleneck; constructing an objective function based on business requirements, determining a splitting granularity through the objective function, and splitting the service to be expanded to obtain a plurality of micro-services; analyzing the data access pattern and performance requirements of each micro-service, and expanding the database and optimizing the cache; constructing a corresponding elastic infrastructure and auto-scaling policy from the performance data of the micro-services; and implementing rate-limiting and circuit-breaking mechanisms for the micro-services. The adaptability and precision of micro-service architecture expansion are thereby ensured, and the expansion effect is improved.

Description

Extension processing method and system of micro-service architecture
Technical Field
The present application relates to the field of data processing technologies, and in particular, to an expansion processing method and system for a micro-service architecture.
Background
With the rapid development of internet technology, the scale and complexity of enterprise-level applications are increasing. The traditional monolithic architecture often struggles to cope with requirements such as high concurrency, high availability, and high scalability. As a solution to these challenges, the micro-service architecture is becoming the mainstream architecture for enterprise-level application development.
The micro-service architecture splits a large application into multiple small, independent, reusable services. Each service runs in its own process and interacts with the others through a lightweight communication mechanism (e.g., an HTTP RESTful API). This architecture allows each service to be deployed, extended, and maintained independently, thereby improving the scalability, maintainability, and agility of the system.
However, as the business evolves and the user base grows, the micro-service architecture also faces expansion challenges. How to effectively expand micro-services to meet ever-increasing load and business requirements has become an important research topic.
In the prior art, micro-service architecture expansion has poor adaptability, low accuracy, and a poor expansion effect.
Therefore, how to improve the adaptability and accuracy of micro-service architecture expansion is a technical problem to be solved.
Disclosure of Invention
The invention provides an expansion processing method for a micro-service architecture, which is used for solving the technical problems of poor adaptability and low accuracy of micro-service architecture expansion in the prior art. The method comprises the following steps:
deploying a monitoring tool to collect performance data of the system so as to determine a system bottleneck, and determining the service to be expanded according to a business flow chart and the system bottleneck;
constructing an objective function based on business requirements, determining a splitting granularity through the objective function, and splitting the service to be expanded to obtain a plurality of micro-services;
analyzing the data access pattern and performance requirements of each micro-service, and expanding the database and optimizing the cache;
constructing a corresponding elastic infrastructure and auto-scaling policy from the performance data of the micro-services;
implementing rate-limiting and circuit-breaking mechanisms for the micro-services.
In some embodiments of the application, deploying a monitoring tool to collect performance data of the system to determine a system bottleneck includes:
collecting, for each performance-data category, the performance curve of that performance data over the corresponding time period to obtain a real-time performance curve;
comparing each real-time performance curve with a preset standard performance curve of the same performance-parameter category to obtain a longitudinal difference part curve;
aligning, on the time scale, a plurality of real-time performance curves that have a bottleneck relation and comparing them, and determining the corresponding transverse synchronous part curves and transverse asynchronous part curves based on the performance-parameter magnitude and the change trend;
performing system-bottleneck judgment according to the longitudinal difference part curves and transverse synchronous part curves relating to the same performance data;
if one or more longitudinal difference part curves indicate a certain bottleneck category and a transverse synchronous part curve also indicates that bottleneck category, determining that bottleneck category as the system bottleneck;
if one or more longitudinal difference part curves indicate a certain bottleneck category but the transverse synchronous part curve indicates another bottleneck category, comparing the intensity of the transverse synchronous part curve with that of the transverse asynchronous part curve; if the transverse synchronous part curve is stronger, taking the bottleneck category indicated by the transverse synchronous part curve as the system bottleneck, and otherwise taking the bottleneck category indicated by the longitudinal difference part curve(s) as the system bottleneck.
In some embodiments of the present application, determining the service to be expanded according to the business flow chart and the system bottleneck includes:
analyzing the degree of dependence and the number of interactions between services within a preset distance in the business flow chart;
merging services whose degree of dependence or number of interactions exceeds the corresponding threshold into one service, to obtain a new business flow chart;
taking the service where the system bottleneck occurs as the service to be expanded.
In some embodiments of the present application, constructing an objective function based on business requirements includes:
determining a plurality of demand indexes from the business requirements, and dividing them into maximized demand indexes and minimized demand indexes;
constructing an objective function from the maximized and minimized demand indexes, and determining the corresponding constraint conditions under which the objective function is solved;
wherein the objective function is P = α·Σ_{i=1}^{n} w_i·x_i − β·Σ_{j=1}^{m} v_j·y_j, where P is the objective function value, α is the conversion coefficient corresponding to the maximized demand indexes, n is the number of maximized demand indexes, w_i is the weight corresponding to the i-th maximized demand index, x_i is the i-th maximized demand index, β is the conversion coefficient corresponding to the minimized demand indexes, m is the number of minimized demand indexes, v_j is the weight corresponding to the j-th minimized demand index, and y_j is the j-th minimized demand index.
In some embodiments of the present application, analyzing the data access pattern and performance requirements of each micro-service, and expanding the database and optimizing the cache, includes:
collecting data access logs by using database auditing, log-analysis tools, or APM tools, and determining the data access pattern;
extracting performance indexes from the data access logs, and determining the performance requirements of each micro-service;
expanding the corresponding database according to the data access pattern, and optimizing the database configuration based on the performance requirements;
determining the corresponding caching strategy from the data access pattern.
In some embodiments of the present application, constructing the corresponding elastic infrastructure and auto-scaling policy from the performance data of the micro-services includes:
integrating the load balancer with the message queue, and distributing messages to service instances according to distribution priorities, where the distribution priorities comprise business-logic distribution priorities and service-instance performance distribution priorities;
determining the distribution priority corresponding to each piece of business logic, and determining the performance distribution priority of each service instance;
the distribution priority corresponding to the business logic being higher than the performance distribution priority of the service instances;
setting storage-resource usage thresholds according to the business requirements and system performance targets, and defining auto-scaling rules;
selecting an auto-scaling library and configuring an auto-scaling controller, the auto-scaling controller adjusting the storage resources according to the policy when a predefined auto-scaling rule is triggered.
In some embodiments of the present application, implementing the rate-limiting and circuit-breaking mechanisms for the micro-services includes:
selecting a rate-limiting algorithm, integrating a rate-limiting library, configuring the rate-limiting parameters, and implementing the rate-limiting logic at the micro-service entry points;
deploying a service monitoring tool to monitor the health status of the system's key services, setting circuit-breaking logic and a corresponding degradation policy, implementing the circuit-breaking logic in the service call chain, and providing the degradation policy.
In some embodiments of the application, selecting a rate-limiting algorithm includes:
determining the traffic pattern, resource type, and system architecture suited to each rate-limiting algorithm;
analyzing the current traffic pattern, resource type, and system architecture to determine the corresponding rate-limiting algorithm.
In some embodiments of the present application, setting the circuit-breaking logic and the corresponding degradation policy includes:
identifying all failure modes that result in service unavailability, and defining degradation conditions and degradation levels to implement the degradation logic.
Correspondingly, the application also provides an expansion processing system for a micro-service architecture, comprising:
a first module, configured to deploy a monitoring tool to collect performance data of the system so as to determine a system bottleneck, and to determine the service to be expanded according to the business flow chart and the system bottleneck;
a second module, configured to construct an objective function based on business requirements, determine a splitting granularity through the objective function, and split the service to be expanded to obtain a plurality of micro-services;
a third module, configured to analyze the data access pattern and performance requirements of each micro-service, expand the database, and optimize the cache;
a fourth module, configured to construct a corresponding elastic infrastructure and auto-scaling policy from the performance data of the micro-services;
a fifth module, configured to implement rate-limiting and circuit-breaking mechanisms for the micro-services.
By applying the above technical scheme, a monitoring tool is deployed to collect performance data of the system so as to determine the system bottleneck, and the service to be expanded is determined according to the business flow chart and the system bottleneck; an objective function is constructed based on business requirements, a splitting granularity is determined through the objective function, and the service to be expanded is split to obtain a plurality of micro-services; the data access pattern and performance requirements of each micro-service are analyzed, and the database is expanded and the cache optimized; a corresponding elastic infrastructure and auto-scaling policy are constructed from the performance data of the micro-services; and rate-limiting and circuit-breaking mechanisms are implemented for the micro-services. The adaptability and precision of micro-service architecture expansion are thereby ensured, and the expansion effect is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart illustrating an expansion processing method of a micro-service architecture according to an embodiment of the present invention;
Fig. 2 shows a schematic structural diagram of an expansion processing system of a micro-service architecture according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application provides an expansion processing method for a micro-service architecture, as shown in Fig. 1, comprising the following steps:
Step S101: deploying a monitoring tool to collect performance data of the system to determine a system bottleneck, and determining the service to be expanded according to the business flow chart and the system bottleneck.
In this embodiment, a monitoring tool (e.g., Prometheus, Grafana) is deployed to collect and analyze system performance data, and whether the current service needs to be expanded is evaluated according to traffic conditions and performance.
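As an illustration of this step, the following minimal Python sketch collects a real-time performance curve through the Prometheus HTTP API; the server address and the `process_cpu_seconds_total` metric are assumptions for the example, not part of the method itself.

```python
import time
import requests  # third-party HTTP client (pip install requests)

PROMETHEUS_URL = "http://localhost:9090"  # assumed monitoring endpoint

def collect_performance_curve(promql: str, window_seconds: int = 3600,
                              step: str = "60s"):
    """Fetch one performance-data category as a (timestamp, value) series."""
    end = time.time()
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query_range",
        params={"query": promql, "start": end - window_seconds,
                "end": end, "step": step},
        timeout=10,
    )
    resp.raise_for_status()
    series_list = resp.json()["data"]["result"]
    # Each series carries "values": a list of [timestamp, value-string] pairs.
    return [(float(ts), float(v))
            for series in series_list for ts, v in series["values"]]

if __name__ == "__main__":
    cpu_curve = collect_performance_curve("rate(process_cpu_seconds_total[5m])")
    print(f"collected {len(cpu_curve)} samples")
```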
In some embodiments of the application, deploying a monitoring tool to collect performance data of the system to determine a system bottleneck includes:
collecting, for each performance-data category, the performance curve of that performance data over the corresponding time period to obtain a real-time performance curve;
comparing each real-time performance curve with a preset standard performance curve of the same performance-parameter category to obtain a longitudinal difference part curve;
aligning, on the time scale, a plurality of real-time performance curves that have a bottleneck relation and comparing them, and determining the corresponding transverse synchronous part curves and transverse asynchronous part curves based on the performance-parameter magnitude and the change trend;
performing system-bottleneck judgment according to the longitudinal difference part curves and transverse synchronous part curves relating to the same performance data;
if one or more longitudinal difference part curves indicate a certain bottleneck category and a transverse synchronous part curve also indicates that bottleneck category, determining that bottleneck category as the system bottleneck;
if one or more longitudinal difference part curves indicate a certain bottleneck category but the transverse synchronous part curve indicates another bottleneck category, comparing the intensity of the transverse synchronous part curve with that of the transverse asynchronous part curve; if the transverse synchronous part curve is stronger, taking the bottleneck category indicated by the transverse synchronous part curve as the system bottleneck, and otherwise taking the bottleneck category indicated by the longitudinal difference part curve(s) as the system bottleneck.
In this embodiment, the longitudinal difference part curve is the portion of a real-time performance curve that deviates from the standard curve and threshold for its performance parameter; real-time performance curves with a bottleneck relation are curves of performance parameters that are jointly related to the same bottleneck, so their changes are correlated. When the transverse synchronous and asynchronous part curves are determined based on the performance-parameter magnitude and the change trend, the portions that agree in both the parameter values and the change trend form the transverse synchronous part curve, and the portions that do not agree form the transverse asynchronous part curve.
In this embodiment, if the longitudinal difference part curve and the transverse synchronous part curve indicate the same bottleneck category, that category is determined as the system bottleneck; otherwise, the relative intensity of the transverse synchronous part curve and the transverse asynchronous part curve is compared. The intensity here can be measured by the proportion of the whole curve that each part occupies.
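The judgment rule above can be summarized in a short sketch; the curve segmentation is assumed to have already been performed, and the "intensity" of a part curve is taken, as described, to be the share of the whole curve it occupies:

```python
def judge_bottleneck(longitudinal_class: str,
                     sync_class: str,
                     sync_share: float,
                     async_share: float) -> str:
    """Return the system-bottleneck category for one group of related curves.

    longitudinal_class: category indicated by the longitudinal difference curve(s)
    sync_class:         category indicated by the transverse synchronous curve
    sync_share/async_share: fraction of the whole curve occupied by the
                            synchronous / asynchronous parts (the "intensity")
    """
    if longitudinal_class == sync_class:
        return longitudinal_class          # both comparison directions agree
    # Disagreement: the stronger of the synchronous/asynchronous parts decides.
    return sync_class if sync_share > async_share else longitudinal_class

# Vertical comparison suggests a CPU bottleneck, horizontal suggests I/O, and
# the synchronous part dominates (60% vs 40%), so I/O wins.
print(judge_bottleneck("cpu", "io", sync_share=0.6, async_share=0.4))  # -> io
```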
In some embodiments of the present application, determining the service to be expanded according to the business flow chart and the system bottleneck includes:
analyzing the degree of dependence and the number of interactions between services within a preset distance in the business flow chart;
merging services whose degree of dependence or number of interactions exceeds the corresponding threshold into one service, to obtain a new business flow chart (a merging sketch follows below);
taking the service where the system bottleneck occurs as the service to be expanded.
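A minimal sketch of the merging step, assuming the business flow chart is given as a list of service pairs annotated with a dependence score and an interaction count; the threshold values and the union-find grouping are illustrative choices:

```python
DEPENDENCE_THRESHOLD = 0.8     # assumed degree-of-dependence threshold
INTERACTION_THRESHOLD = 1000   # assumed interaction-count threshold

def merge_services(edges):
    """edges: (service_a, service_b, dependence, interactions) tuples.
    Returns a map from each service to its merged-group representative."""
    parent = {}

    def find(s):
        parent.setdefault(s, s)
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path halving
            s = parent[s]
        return s

    for a, b, dependence, interactions in edges:
        ra, rb = find(a), find(b)
        if dependence > DEPENDENCE_THRESHOLD or interactions > INTERACTION_THRESHOLD:
            parent[ra] = rb                # merge the two services into one

    return {s: find(s) for s in parent}

edges = [("order", "payment", 0.9, 1500), ("order", "catalog", 0.2, 50)]
# order and payment exceed both thresholds and collapse into one service.
print(merge_services(edges))
```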
Step S102: constructing an objective function based on business requirements, determining a splitting granularity through the objective function, and splitting the service to be expanded to obtain a plurality of micro-services.
In some embodiments of the present application, constructing an objective function based on business requirements includes:
determining a plurality of demand indexes from the business requirements, and dividing them into maximized demand indexes and minimized demand indexes;
constructing an objective function from the maximized and minimized demand indexes, and determining the corresponding constraint conditions under which the objective function is solved;
wherein the objective function is P = α·Σ_{i=1}^{n} w_i·x_i − β·Σ_{j=1}^{m} v_j·y_j, where P is the objective function value, α is the conversion coefficient corresponding to the maximized demand indexes, n is the number of maximized demand indexes, w_i is the weight corresponding to the i-th maximized demand index, x_i is the i-th maximized demand index, β is the conversion coefficient corresponding to the minimized demand indexes, m is the number of minimized demand indexes, v_j is the weight corresponding to the j-th minimized demand index, and y_j is the j-th minimized demand index.
In this embodiment, the maximized demand indexes include overall performance and maintainability, and the minimized demand indexes include cost and complexity.
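The objective function can be evaluated directly once the weights, conversion coefficients, and index values are known; the numbers below are illustrative placeholders, normalized to [0, 1]:

```python
def objective(alpha: float, beta: float, maximized, minimized) -> float:
    """maximized / minimized: lists of (weight, index_value) pairs."""
    gain = alpha * sum(w * x for w, x in maximized)
    cost = beta * sum(v * y for v, y in minimized)
    return gain - cost

# Two maximized indexes (overall performance, maintainability) and two
# minimized indexes (cost, complexity); candidate splitting granularities
# would be compared by this score.
P = objective(alpha=1.0, beta=1.0,
              maximized=[(0.6, 0.8), (0.4, 0.7)],
              minimized=[(0.5, 0.3), (0.5, 0.4)])
print(P)  # 0.41
```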
In this embodiment, the constraint conditions are obtained by integrating the following factors:
Business logic integrity:
The split service should maintain the integrity of the business logic, ensuring that the business process is not affected.
Service independence:
each service should be independent, capable of independent deployment, extension, and upgrade.
Team autonomy:
the services should correspond to the organization of the team so that the team can develop and maintain the services autonomously.
Performance requirements:
The split service should meet performance requirements such as response time, throughput, etc.
Data consistency:
The split service should maintain data consistency, especially in transactions involving multiple services.
Cost effectiveness:
the cost of splitting (including development, operation and maintenance, and training costs) should be within acceptable limits.
Minimum service size:
each service should be large enough that overly small services do not waste resources.
Maximum service size:
each service should not be so large that it incurs excessive complexity and maintenance costs.
Communication cost:
communication between services should be kept as low as possible to reduce the overall complexity and latency of the system.
Technical feasibility:
the splitting scheme should take the existing technology stack and infrastructure into account to ensure technical feasibility.
Step S103: analyzing the data access pattern and performance requirements of each micro-service, expanding the database, and optimizing the cache.
In some embodiments of the present application, analyzing the data access pattern and performance requirements of each micro-service, and expanding the database and optimizing the cache, includes:
collecting data access logs by using database auditing, log-analysis tools, or APM tools, and determining the data access pattern;
extracting performance indexes from the data access logs, and determining the performance requirements of each micro-service;
expanding the corresponding database according to the data access pattern, and optimizing the database configuration based on the performance requirements;
determining the corresponding caching strategy from the data access pattern.
In this embodiment, common data access patterns, such as read-many-write-few, write-intensive, and batch operations, are identified from the collected data. Service level agreements (SLAs) are set to ensure the performance requirements are met, and resources of the database server such as CPU, memory, and storage are increased according to those requirements. The database configuration is optimized, for example by adjusting buffer sizes and connection-pool settings. The corresponding database is expanded according to the data access pattern: for read-intensive operations, adding read-only replicas or using database shards may be considered; for write-intensive operations, master-slave or multi-master replication may be considered. Suitable caching strategies are then designed according to the data access pattern, such as caching hot-spot data and frequently accessed data.
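A sketch of how an observed access pattern might map to the expansion and caching actions just described; the pattern names and action strings are illustrative:

```python
def plan_database_expansion(pattern: str) -> dict:
    """Map an observed data access pattern to expansion and caching actions."""
    if pattern == "read_heavy":            # read-many-write-few
        return {"database": "add read-only replicas or shard the database",
                "cache": "cache hot-spot and frequently accessed data"}
    if pattern == "write_heavy":
        return {"database": "use master-slave or multi-master replication",
                "cache": "keep caching minimal; writes invalidate entries quickly"}
    if pattern == "batch":
        return {"database": "schedule bulk jobs off-peak and raise buffer sizes",
                "cache": "cache only the lookup tables the batch jobs reuse"}
    return {"database": "keep the current topology", "cache": "none"}

print(plan_database_expansion("read_heavy"))
```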
Step S104: constructing a corresponding elastic infrastructure and auto-scaling policy from the performance data of the micro-services.
In this embodiment, load balancing may be used in conjunction with message queues (e.g., RabbitMQ, Kafka) to enable asynchronous communication and load distribution among services. Under high load, message queues help decouple services and relieve pressure on a single service, while load balancing ensures that messages are effectively distributed to all healthy service instances. Auto-scaling can dynamically adjust the number of service instances according to system load and resource usage. A distributed file storage system (such as HDFS or Ceph) can automatically expand storage resources on demand, improving data read/write performance and system scalability.
In some embodiments of the present application, constructing the corresponding elastic infrastructure and auto-scaling policy from the performance data of the micro-services includes:
integrating the load balancer with the message queue, and distributing messages to service instances according to distribution priorities, where the distribution priorities comprise business-logic distribution priorities and service-instance performance distribution priorities;
determining the distribution priority corresponding to each piece of business logic, and determining the performance distribution priority of each service instance;
the distribution priority corresponding to the business logic being higher than the performance distribution priority of the service instances;
setting storage-resource usage thresholds according to the business requirements and system performance targets, and defining auto-scaling rules;
selecting an auto-scaling library and configuring an auto-scaling controller, the auto-scaling controller adjusting the storage resources according to the policy when a predefined auto-scaling rule is triggered.
In this embodiment, in the service architecture, the load balancer may be integrated with the message queue to ensure that messages are distributed to all service instances according to business logic and performance. In some business scenarios, messages may need to be distributed in a specific way according to the business logic and policies. If service instances differ in performance or resource utilization, messages may need to be allocated according to the actual capacity of each instance. Auto-scaling can adjust the amount of storage resources based on the load and performance metrics of the distributed file storage system; for example, when the I/O requests of the file storage system increase, auto-scaling can increase the number of storage nodes to raise the system's processing capacity. Setting thresholds: storage-resource usage thresholds are set according to the business requirements and system performance targets; for example, when storage usage reaches 80%, storage resources are automatically increased. Rule definition: auto-scaling rules are defined, including when to increase or decrease resources and by how much. Selecting an auto-scaling library: libraries or controllers that can integrate with the distributed file storage system, such as the Kubernetes Horizontal Pod Autoscaler (HPA), are selected or developed. Configuring the auto-scaling controller: the controller is configured to adjust storage resources automatically according to the monitored data and the scaling policy.
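A minimal control-loop sketch of the auto-scaling rule, assuming hypothetical `read_storage_usage` and `add_storage_nodes` hooks in place of real monitoring and storage-system APIs; the 80% threshold follows the example above:

```python
import random
import time

SCALE_UP_THRESHOLD = 0.80    # expand when storage usage reaches 80%
SCALE_DOWN_THRESHOLD = 0.30  # assumed shrink threshold
NODES_PER_STEP = 1

def read_storage_usage() -> float:
    """Placeholder for a query against the monitoring stack."""
    return random.uniform(0.2, 0.95)

def add_storage_nodes(n: int):       # placeholder for the storage-system API
    print(f"scaling up by {n} node(s)")

def remove_storage_nodes(n: int):
    print(f"scaling down by {n} node(s)")

def autoscale_once():
    usage = read_storage_usage()
    if usage >= SCALE_UP_THRESHOLD:
        add_storage_nodes(NODES_PER_STEP)
    elif usage <= SCALE_DOWN_THRESHOLD:
        remove_storage_nodes(NODES_PER_STEP)

if __name__ == "__main__":
    for _ in range(3):               # a real controller would poll on a timer
        autoscale_once()
        time.sleep(1)
```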
Step S105: implementing rate-limiting and circuit-breaking mechanisms for the micro-services.
In some embodiments of the present application, implementing the rate-limiting and circuit-breaking mechanisms for the micro-services includes:
selecting a rate-limiting algorithm, integrating a rate-limiting library, configuring the rate-limiting parameters, and implementing the rate-limiting logic at the micro-service entry points;
deploying a service monitoring tool to monitor the health status of the system's key services, setting circuit-breaking logic and a corresponding degradation policy, implementing the circuit-breaking logic in the service call chain, and providing the degradation policy.
In this embodiment, the rate-limiting parameters, such as the token generation rate or bucket size, are dynamically adjusted according to the system's real-time load and performance data. Service monitoring tools, such as health checks and anomaly-detection mechanisms, are deployed to monitor the health status of service instances. Thresholds at which a service is considered unavailable are determined, such as the response time or the error rate exceeding a threshold. When service unavailability or performance degradation is detected, the circuit breaker can temporarily cut off the service instance, preventing further request pile-up and a service avalanche. Degradation logic, such as returning a default response, a retry mechanism, or an error prompt, is implemented to reduce the negative impact on user experience.
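A self-contained sketch of the two protection mechanisms: a token-bucket rate limiter at the service entry point and a simple circuit breaker that opens after consecutive failures; all parameters and handlers are illustrative:

```python
import time

class TokenBucket:
    """Token-bucket limiter: tokens refill at `rate` per second up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

class CircuitBreaker:
    """Opens after `failure_threshold` consecutive failures; while open, every
    call is answered by the fallback until `reset_timeout` seconds pass."""
    def __init__(self, failure_threshold: int = 5, reset_timeout: float = 30.0):
        self.failure_threshold, self.reset_timeout = failure_threshold, reset_timeout
        self.failures, self.opened_at = 0, None

    def call(self, fn, fallback):
        if self.opened_at is not None and \
           time.monotonic() - self.opened_at < self.reset_timeout:
            return fallback()                        # open: degrade immediately
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()    # trip the breaker
            return fallback()
        self.failures, self.opened_at = 0, None      # success closes the breaker
        return result

# Usage at a micro-service entry point (handlers are placeholders):
limiter = TokenBucket(rate=100, capacity=200)        # ~100 req/s, bursts to 200
breaker = CircuitBreaker()

def handle_request():
    if not limiter.allow():
        return {"error": "too many requests"}        # would be HTTP 429
    return breaker.call(lambda: {"ok": True},        # real downstream call here
                        fallback=lambda: {"error": "service degraded"})

print(handle_request())
```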
In some embodiments of the application, selecting a rate-limiting algorithm includes:
determining the traffic pattern, resource type, and system architecture suited to each rate-limiting algorithm;
analyzing the current traffic pattern, resource type, and system architecture to determine the corresponding rate-limiting algorithm.
In some embodiments of the present application, setting the circuit-breaking logic and the corresponding degradation policy includes:
identifying all failure modes that result in service unavailability, and defining degradation conditions and degradation levels to implement the degradation logic.
In this embodiment, the traffic pattern, resource type, and system architecture suited to each rate-limiting algorithm are determined.
Traffic pattern:
if the traffic is bursty, a token bucket algorithm may be preferable, as it smooths the traffic;
if the traffic is continuous and uniform, a leaky bucket algorithm may be more suitable.
Resource type:
for CPU-intensive applications, rate limiting may need to be based on CPU usage;
for IO-intensive applications, rate limiting may need to be based on disk read/write speed or network bandwidth.
System architecture:
if the system has a distributed architecture, the rate-limiting algorithm must be able to work cooperatively across different nodes;
if the system has a micro-service architecture, rate limiting may need to be implemented at each service entry point.
It should be noted that the traffic patterns, resource types, and system architectures above are only examples; other cases are possible and are not specifically limited here.
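An illustrative selection table following the considerations above; the sliding-window fallback is an assumption added for completeness rather than part of the described method:

```python
def choose_rate_limiter(traffic_pattern: str, distributed: bool) -> str:
    """Map the analysed characteristics to a rate-limiting algorithm."""
    if traffic_pattern == "bursty":
        algo = "token bucket"            # absorbs bursts, bounds the average rate
    elif traffic_pattern == "steady":
        algo = "leaky bucket"            # enforces a constant outflow rate
    else:
        algo = "sliding window counter"  # assumed general-purpose fallback
    if distributed:
        # Nodes must share limiter state (e.g. via a central store) to cooperate.
        algo += " with shared state across nodes"
    return algo

print(choose_rate_limiter("bursty", distributed=True))
```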
In this embodiment, failure modes that may cause service unavailability are identified, such as database connection failures and external service call timeouts. Conditions that trigger degradation are defined according to the failure modes, such as the service response time or error rate exceeding a threshold. Different degradation levels, such as slight, medium, and severe, are determined, each corresponding to a different processing policy.
Degradation logic includes, but is not limited to, the following:
returning a default response: logic is designed to return default data, such as default product information or static pages;
implementing a retry mechanism: service retry logic is designed, including the number of retries and the retry interval;
providing an error prompt: user-friendly error prompts are designed, such as "the service is temporarily unavailable, please try again later".
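A sketch of tiered degradation mapping each level to one of the strategies listed above; the level names, default data, and retry count are illustrative:

```python
DEFAULT_PRODUCT = {"id": 0, "name": "placeholder product"}  # assumed default data

def degrade(level: str, retry_fn=None):
    """Apply the degradation strategy matching the detected level."""
    if level == "slight":
        return DEFAULT_PRODUCT                        # return a default response
    if level == "medium" and retry_fn is not None:
        for _ in range(3):                            # retry mechanism: 3 attempts
            try:
                return retry_fn()
            except Exception:
                continue
    # severe degradation (or exhausted retries): user-friendly error prompt
    return {"error": "the service is temporarily unavailable, please try again later"}

print(degrade("severe"))
```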
By applying the above technical scheme, a monitoring tool is deployed to collect performance data of the system so as to determine the system bottleneck, and the service to be expanded is determined according to the business flow chart and the system bottleneck; an objective function is constructed based on business requirements, a splitting granularity is determined through the objective function, and the service to be expanded is split to obtain a plurality of micro-services; the data access pattern and performance requirements of each micro-service are analyzed, and the database is expanded and the cache optimized; a corresponding elastic infrastructure and auto-scaling policy are constructed from the performance data of the micro-services; and rate-limiting and circuit-breaking mechanisms are implemented for the micro-services. The adaptability and precision of micro-service architecture expansion are thereby ensured, and the expansion effect is improved.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention may be implemented in hardware, or may be implemented by means of software plus necessary general hardware platforms. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.), and includes several instructions for causing a computer device (may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective implementation scenario of the present invention.
In order to further explain the technical idea of the invention, the technical scheme of the invention is described with specific application scenarios.
Correspondingly, the application also provides an expansion processing system for a micro-service architecture, as shown in Fig. 2, comprising:
a first module, configured to deploy a monitoring tool to collect performance data of the system so as to determine a system bottleneck, and to determine the service to be expanded according to the business flow chart and the system bottleneck;
a second module, configured to construct an objective function based on business requirements, determine a splitting granularity through the objective function, and split the service to be expanded to obtain a plurality of micro-services;
a third module, configured to analyze the data access pattern and performance requirements of each micro-service, expand the database, and optimize the cache;
a fourth module, configured to construct a corresponding elastic infrastructure and auto-scaling policy from the performance data of the micro-services;
a fifth module, configured to implement rate-limiting and circuit-breaking mechanisms for the micro-services.
Those skilled in the art will appreciate that the modules in the system in the implementation scenario may be distributed in the system in the implementation scenario according to the implementation scenario description, or that corresponding changes may be located in one or more systems different from the implementation scenario. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be appreciated by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not drive the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. An expansion processing method for a micro-service architecture, characterized by comprising the following steps:
deploying a monitoring tool to collect performance data of the system so as to determine a system bottleneck, and determining the service to be expanded according to a business flow chart and the system bottleneck;
constructing an objective function based on business requirements, determining a splitting granularity through the objective function, and splitting the service to be expanded to obtain a plurality of micro-services;
analyzing the data access pattern and performance requirements of each micro-service, and expanding the database and optimizing the cache;
constructing a corresponding elastic infrastructure and auto-scaling policy from the performance data of the micro-services;
implementing rate-limiting and circuit-breaking mechanisms for the micro-services.
2. The expansion processing method for a micro-service architecture according to claim 1, wherein deploying a monitoring tool to collect performance data of the system to determine a system bottleneck comprises:
collecting, for each performance-data category, the performance curve of that performance data over the corresponding time period to obtain a real-time performance curve;
comparing each real-time performance curve with a preset standard performance curve of the same performance-parameter category to obtain a longitudinal difference part curve;
aligning, on the time scale, a plurality of real-time performance curves that have a bottleneck relation and comparing them, and determining the corresponding transverse synchronous part curves and transverse asynchronous part curves based on the performance-parameter magnitude and the change trend;
performing system-bottleneck judgment according to the longitudinal difference part curves and transverse synchronous part curves relating to the same performance data;
if one or more longitudinal difference part curves indicate a certain bottleneck category and a transverse synchronous part curve also indicates that bottleneck category, determining that bottleneck category as the system bottleneck;
if one or more longitudinal difference part curves indicate a certain bottleneck category but the transverse synchronous part curve indicates another bottleneck category, comparing the intensity of the transverse synchronous part curve with that of the transverse asynchronous part curve; if the transverse synchronous part curve is stronger, taking the bottleneck category indicated by the transverse synchronous part curve as the system bottleneck, and otherwise taking the bottleneck category indicated by the longitudinal difference part curve(s) as the system bottleneck.
3. The expansion processing method for a micro-service architecture according to claim 2, wherein determining the service to be expanded according to the business flow chart and the system bottleneck comprises:
analyzing the degree of dependence and the number of interactions between services within a preset distance in the business flow chart;
merging services whose degree of dependence or number of interactions exceeds the corresponding threshold into one service, to obtain a new business flow chart;
taking the service where the system bottleneck occurs as the service to be expanded.
4. The expansion processing method for a micro-service architecture according to claim 1, wherein constructing an objective function based on business requirements comprises:
determining a plurality of demand indexes from the business requirements, and dividing them into maximized demand indexes and minimized demand indexes;
constructing an objective function from the maximized and minimized demand indexes, and determining the corresponding constraint conditions under which the objective function is solved;
wherein the objective function is P = α·Σ_{i=1}^{n} w_i·x_i − β·Σ_{j=1}^{m} v_j·y_j, where P is the objective function value, α is the conversion coefficient corresponding to the maximized demand indexes, n is the number of maximized demand indexes, w_i is the weight corresponding to the i-th maximized demand index, x_i is the i-th maximized demand index, β is the conversion coefficient corresponding to the minimized demand indexes, m is the number of minimized demand indexes, v_j is the weight corresponding to the j-th minimized demand index, and y_j is the j-th minimized demand index.
5. The expansion processing method for a micro-service architecture according to claim 1, wherein analyzing the data access pattern and performance requirements of each micro-service, and expanding the database and optimizing the cache, comprises:
collecting data access logs by using database auditing, log-analysis tools, or APM tools, and determining the data access pattern;
extracting performance indexes from the data access logs, and determining the performance requirements of each micro-service;
expanding the corresponding database according to the data access pattern, and optimizing the database configuration based on the performance requirements;
determining the corresponding caching strategy from the data access pattern.
6. The expansion processing method for a micro-service architecture according to claim 1, wherein constructing the corresponding elastic infrastructure and auto-scaling policy from the performance data of the micro-services comprises:
integrating the load balancer with the message queue, and distributing messages to service instances according to distribution priorities, where the distribution priorities comprise business-logic distribution priorities and service-instance performance distribution priorities;
determining the distribution priority corresponding to each piece of business logic, and determining the performance distribution priority of each service instance;
the distribution priority corresponding to the business logic being higher than the performance distribution priority of the service instances;
setting storage-resource usage thresholds according to the business requirements and system performance targets, and defining auto-scaling rules;
selecting an auto-scaling library and configuring an auto-scaling controller, the auto-scaling controller adjusting the storage resources according to the policy when a predefined auto-scaling rule is triggered.
7. The expansion processing method for a micro-service architecture according to claim 1, wherein implementing the rate-limiting and circuit-breaking mechanisms for the micro-services comprises:
selecting a rate-limiting algorithm, integrating a rate-limiting library, configuring the rate-limiting parameters, and implementing the rate-limiting logic at the micro-service entry points;
deploying a service monitoring tool to monitor the health status of the system's key services, setting circuit-breaking logic and a corresponding degradation policy, implementing the circuit-breaking logic in the service call chain, and providing the degradation policy.
8. The expansion processing method for a micro-service architecture according to claim 7, wherein selecting a rate-limiting algorithm comprises:
determining the traffic pattern, resource type, and system architecture suited to each rate-limiting algorithm;
analyzing the current traffic pattern, resource type, and system architecture to determine the corresponding rate-limiting algorithm.
9. The expansion processing method for a micro-service architecture according to claim 7, wherein setting the circuit-breaking logic and the corresponding degradation policy comprises:
identifying all failure modes that result in service unavailability, and defining degradation conditions and degradation levels to implement the degradation logic.
10. An expansion processing system for a micro-service architecture, comprising:
a first module, configured to deploy a monitoring tool to collect performance data of the system so as to determine a system bottleneck, and to determine the service to be expanded according to the business flow chart and the system bottleneck;
a second module, configured to construct an objective function based on business requirements, determine a splitting granularity through the objective function, and split the service to be expanded to obtain a plurality of micro-services;
a third module, configured to analyze the data access pattern and performance requirements of each micro-service, expand the database, and optimize the cache;
a fourth module, configured to construct a corresponding elastic infrastructure and auto-scaling policy from the performance data of the micro-services;
a fifth module, configured to implement rate-limiting and circuit-breaking mechanisms for the micro-services.
CN202410551644.3A 2024-05-07 2024-05-07 Extension processing method and system of micro-service architecture Active CN118138632B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410551644.3A CN118138632B (en) 2024-05-07 2024-05-07 Extension processing method and system of micro-service architecture

Publications (2)

Publication Number Publication Date
CN118138632A true CN118138632A (en) 2024-06-04
CN118138632B CN118138632B (en) 2024-09-03

Family

ID=91230532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410551644.3A Active CN118138632B (en) 2024-05-07 2024-05-07 Extension processing method and system of micro-service architecture

Country Status (1)

Country Link
CN (1) CN118138632B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5193151A (en) * 1989-08-30 1993-03-09 Digital Equipment Corporation Delay-based congestion avoidance in computer networks
WO2015002372A1 (en) * 2013-07-02 2015-01-08 Samsung Electronics Co., Ltd. Power supply device, micro server having the same, and power supply method
CN106330576A (en) * 2016-11-18 2017-01-11 北京红马传媒文化发展有限公司 Automatic scaling and migration scheduling method, system and device for containerization micro-service
US20190098080A1 (en) * 2017-09-22 2019-03-28 Simon Bermudez System and method for platform to securely distribute compute workload to web capable devices
CN112199150A (en) * 2020-08-13 2021-01-08 北京航空航天大学 Online application dynamic capacity expansion and contraction method based on micro-service calling dependency perception
CN114968563A (en) * 2022-05-16 2022-08-30 杭州电子科技大学 Micro-service resource allocation method based on combined neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG YANG, et al., "Application of Distributed Storage Technology for Financial Management and Control System in Electric Power System", 2017 16th International Symposium on Distributed Computing and Applications to Business, Engineering and Science (DCABES), 11 January 2018 (2018-01-11)
PENG Liping; LÜ Xiaodan; JIANG Chaohui; PENG Chenghui, "Elastic scheduling strategy for cloud resources based on Docker", Journal of Computer Applications, No. 02, 10 February 2018 (2018-02-10)

Also Published As

Publication number Publication date
CN118138632B (en) 2024-09-03

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant