CN117492904A - Micro-service gray level publishing method, system, device and medium - Google Patents

Micro-service gray level publishing method, system, device and medium

Info

Publication number
CN117492904A
Authority
CN
China
Prior art keywords
application container
service
service instance
micro
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311309516.XA
Other languages
Chinese (zh)
Inventor
程虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lan You Technology Co Ltd
Original Assignee
Shenzhen Lan You Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Lan You Technology Co Ltd filed Critical Shenzhen Lan You Technology Co Ltd
Priority to CN202311309516.XA priority Critical patent/CN117492904A/en
Publication of CN117492904A publication Critical patent/CN117492904A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/61Installation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/70Software maintenance or management
    • G06F8/71Version control; Configuration management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45562Creating, deleting, cloning virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a micro-service gray release method, system, device and medium, wherein the method comprises the following steps: acquiring an application container to be processed according to a preset application container engine; preprocessing the application container to be processed to obtain a first application container; installing a load balancing rule into the first application container to obtain a target application container; acquiring a first user request from a front-end application; and performing micro-service gray release through the load balancing rule according to the first user request and the target application container. The invention realizes gray release of micro-services, reduces service routing cost, and improves universality across different containerized platforms. The invention can be widely applied in the technical field of gray release.

Description

Micro-service gray level publishing method, system, device and medium
Technical Field
The present invention relates to the field of gray release technology, and in particular, to a micro-service gray release method, system, apparatus, and medium.
Background
Gray release (also called canary release) is a common progressive delivery strategy in the era of micro-services and cloud native. It allows developers to gradually roll out new versions or functions to certain user groups or certain channels, so that requests are automatically matched and routed and different types of users can use different versions or functions. In the prior art, gray release methods have high service routing cost and low universality.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the invention provides a micro-service gray release method, system, device and medium, which effectively reduce service routing cost and improve universality across different containerized platforms.
In one aspect, the embodiment of the invention provides a method for publishing micro-service gray scale, which comprises the following steps:
acquiring an application container to be processed according to a preset application container engine;
preprocessing the application container to be processed to obtain a first application container;
installing a load balancing rule to the first application container to obtain a target application container;
acquiring a first user request from a front-end application;
and according to the first user request and the target application container, performing micro-service gray level release according to the load balancing rule.
In some embodiments, the preprocessing the application container to be processed to obtain a first application container includes:
setting dynamic parameters in the application container to be processed to obtain a second application container;
installing a registration configuration center to the second application container, and adding the first metadata parameter to the registration configuration center to obtain a third application container;
installing target dependencies to the third application container, resulting in the first application container, the target dependencies including load balancing dependencies or gateway dependencies.
In some embodiments, the installing of the load balancing rule to the first application container to obtain a target application container includes:
acquiring the load balancing rule;
and installing the load balancing rule to the first application container through a SpringCloud framework to obtain a target application container.
In some embodiments, the executing step of the load balancing rule includes:
acquiring a second user request;
acquiring a host number according to the second user request;
and obtaining a target service instance according to the host number and a first service instance set, wherein the first service instance set comprises one or more first service instances.
In some embodiments, the obtaining a target service instance according to the host number and the first service instance set includes:
acquiring a cluster name;
screening the first service instance set according to the host number and the cluster name to obtain a second service instance set, wherein the second service instance set comprises one or more second service instances;
if the second service instance set only comprises one second service instance, the second service instance is taken as the target service instance; otherwise, randomly selecting one second service instance from the second service instance set as the target service instance.
In some embodiments, the filtering the first service instance set according to the host number and the cluster name to obtain a second service instance set includes:
according to the cluster names, performing first filtering operation on each first service instance in the first service instance set to obtain a third service instance set, wherein the third service instance set comprises a plurality of third service instances;
and carrying out second filtering operation on each third service instance in the third service instance set according to the host number to obtain the second service instance set.
In some embodiments, the obtaining the first user request from the front-end application includes:
installing a request interceptor into the front-end application and adding a second metadata parameter into the request interceptor according to Axios;
and acquiring a first user request through the request interceptor.
In another aspect, an embodiment of the present invention provides a micro-service gray scale publishing system, including:
the first module is used for acquiring an application container to be processed according to a preset application container engine;
the second module is used for preprocessing the application container to be processed to obtain a first application container;
the third module is used for installing the load balancing rule to the first application container to obtain a target application container;
a fourth module for obtaining a first user request from the front-end application;
and a fifth module, configured to perform micro-service gray level publishing according to the first user request and the target application container through the load balancing rule.
On the other hand, an embodiment of the present invention provides a micro-service gray scale publishing device, including:
at least one memory for storing a program;
and the at least one processor is used for loading the program to execute the micro-service gray level release method.
In another aspect, an embodiment of the present invention provides a storage medium in which a computer-executable program is stored, where the computer-executable program, when executed by a processor, is used to implement the micro-service gray release method described above.
The invention has the following beneficial effects:
according to the method, firstly, an application container to be processed is obtained according to a preset application container engine, pretreatment is carried out on the application container to be processed to obtain a first application container, then a load balancing rule is installed on the first application container to obtain a target application container, finally, a first user request is obtained from a front-end application, and micro-service gray level release is carried out through the load balancing rule, so that micro-service gray level release is realized, service routing cost is reduced, and universality on different containerized platforms is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for issuing micro-service gray scale according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an overall flow of micro-service gray level distribution according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and specific embodiments. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
In the description of the present invention, "several" means one or more, and "a plurality of" means two or more; "greater than", "less than", "exceeding", etc. are understood to exclude the stated number, while "above", "below", "within", etc. are understood to include it. The terms "first" and "second" are used only to distinguish technical features and should not be construed as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the embodiments of the invention is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before further describing embodiments of the present application in detail, the terms and terminology involved in the embodiments of the present application are described as follows:
dock: is an open-source application container engine, allows developers to package their applications and rely on packages into a portable image, then issue to any popular Linux or Windows operating system machine, and also can implement virtualization. The containers are completely sandboxed without any interface to each other. The components of a complete Docker include: a Docker Client, a Docker Daemon, a Docker Image mirror, and a Docker Container. Dock uses a client-server (C/S) architecture model, using remote APIs to manage and create dock containers. The Docker container is created by Docker mirroring. The relationship of containers to mirrors is similar to objects and classes in object-oriented programming.
SpringCloud: an ordered collection of frameworks. It uses the development convenience of Spring Boot to simplify the development of distributed-system infrastructure, such as service discovery and registration, configuration center, message bus, load balancing, circuit breakers and data monitoring, all of which can be started and deployed with one click in the Spring Boot development style. Spring Cloud does not reinvent the wheel; it combines relatively mature, battle-tested service frameworks developed by various companies and repackages them in the Spring Boot style, shielding the complex configuration and implementation principles, and finally provides developers with a distributed-system development toolkit that is easy to understand, deploy and maintain.
Axios: a promise-based network request library for Node.js and the browser; the same code can run in both environments. On the server it uses the native Node.js http module, while on the client (browser) it uses XMLHttpRequest. The main features of Axios include: creating XMLHttpRequests from the browser; creating http requests from Node.js; supporting the Promise API; intercepting requests and responses; transforming request and response data; cancelling requests; automatically converting JSON data; and client-side protection against XSRF.
Metadata: also called intermediate data or relay data, is data that describes other data (data about data); it mainly describes data attributes (properties) and supports functions such as indicating storage location, historical data, resource searching and file recording. Metadata acts as an electronic catalog: by describing and collecting the content or characteristics of data, it assists data retrieval.
Dockerfile: the text file used for constructing the Docker mirror image is a script composed of a piece of instructions and parameters required for constructing the mirror image. In practice, command operation is written into the Dockerfire, and the set operation command is executed through the Dockerfire, so that the mirror image constructed through the Dockerfire is ensured to be consistent.
DevOps (a combination of Development and Operations): a collective term for a set of processes, methods and systems that facilitate communication, collaboration and integration between development (application/software engineering), technical operations and quality assurance (QA) departments. It is a culture, movement or convention that emphasizes communication and cooperation between software developers (Dev) and IT operations technicians (Ops). Through automated software-delivery and architecture-change processes, software can be built, tested and released more quickly, frequently and reliably. It emerged because the software industry increasingly recognized that, in order to deliver software products and services on time, development and operations must cooperate closely.
Nacos (registration configuration center): a newer open-source project from Alibaba for cloud-native applications; it is a platform for dynamic service discovery, configuration management and service management that makes such systems easier to build. Nacos is dedicated to helping with the discovery, configuration and management of micro-services. It provides a set of simple, easy-to-use features for quickly implementing dynamic service discovery, service configuration, service metadata and traffic management, enabling micro-service platforms to be built, delivered and managed more quickly and easily. Nacos is a service infrastructure for building "service"-centered modern application architectures (e.g., the micro-service paradigm and the cloud-native paradigm).
Embodiments of the present application are specifically explained below with reference to the accompanying drawings:
As shown in fig. 1, an embodiment of the present invention provides a micro-service gray release method, including but not limited to the following steps:
and S11, acquiring an application container to be processed according to a preset application container engine.
In this embodiment, the application container to be processed may be created by a preset application container engine, where the preset application container engine may include Docker.
And step S12, preprocessing the application container to be processed to obtain a first application container.
In this embodiment, preprocessing an application container to be processed to obtain a first application container includes:
and setting dynamic parameters in the application container to be processed to obtain a second application container.
In this embodiment, in the application container to be processed, the dynamic parameter is added to the Dockerfile description file to obtain the second application container. The dynamic parameters may include a third metadata parameter. For example, in the application container to be processed, `spring.cloud.nacos.discovery.metadata.version=1.0.1` may be added as the third metadata parameter to the Dockerfile description file to obtain the second application container.
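The dynamic-parameter step above might look like the following Dockerfile fragment. This is a minimal sketch only: the base image, jar path and variable name are assumptions, not taken from the patent, and the metadata property name follows the Spring Cloud Alibaba Nacos convention.

```dockerfile
# Illustrative sketch -- base image, paths and variable name are assumptions
FROM eclipse-temurin:17-jre
# Dynamic parameter: the gray version, overridable at build/run time
ENV VERSION=1.0.1
COPY app.jar /app.jar
# Pass the version to the service as a Nacos metadata parameter
ENTRYPOINT java -Dspring.cloud.nacos.discovery.metadata.version=$VERSION -jar /app.jar
```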
And installing the registration configuration center to the second application container, and adding the first metadata parameters to the registration configuration center to obtain a third application container.
In this embodiment, the registration configuration center may be installed in the second application container: the original parameter set of the registration configuration center is obtained, the first metadata parameter is added to the original parameter set to generate a new parameter set, and the new parameter set is applied to the registration configuration center to obtain the third application container. For example, the registration configuration center may be enabled by adding `spring.cloud.nacos.discovery.enabled` to the second application container; `spring.cloud.nacos.discovery.metadata.version` may then be added as the first metadata parameter to the original parameter set to generate the new parameter set, which is applied to the registration configuration center to obtain the third application container.
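Assuming the registration configuration center is Nacos (as the terminology section suggests), the step above might correspond to an `application.yml` fragment like the following; the server address and values are illustrative assumptions.

```yaml
# Illustrative sketch -- server address and values are assumptions
spring:
  cloud:
    nacos:
      discovery:
        enabled: true                # enable the registration configuration center
        server-addr: 127.0.0.1:8848
        metadata:
          version: 1.0.1             # first metadata parameter used for gray routing
```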
And installing the target dependency into a third application container to obtain a first application container, wherein the target dependency comprises a load balancing dependency or a gateway dependency.
In this embodiment, the load balancing dependency or the gateway dependency may be installed in the third application container to obtain the first application container. Illustratively, the load balancing dependency may be installed by adding `org.springframework.cloud` and `spring-cloud-loadbalancer` to the third application container, and the gateway dependency may be installed by adding `org.springframework.cloud` and `spring-cloud-starter-gateway` to the third application container, resulting in the first application container.
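The target dependencies above match standard Spring Cloud artifact coordinates; a Maven fragment might look as follows (versions are assumed to be managed by a Spring Cloud BOM, which the patent does not specify).

```xml
<!-- Illustrative sketch: load-balancing and gateway dependencies -->
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-loadbalancer</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
```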
And S13, installing the load balancing rule into the first application container to obtain the target application container.
In this embodiment, installing a load balancing rule to a first application container to obtain a target application container includes:
acquiring a load balancing rule;
and installing the load balancing rule to the first application container through the SpringCloud framework to obtain the target application container.
In this embodiment, a customized load balancing rule may be obtained and installed into the first application container through a `RestTemplate` and a custom `custLoadBalancer` load balancer in the SpringCloud framework, to obtain the target application container.
In this embodiment, the specific implementation process of the load balancing rule includes, but is not limited to, step S201 to step S203:
step S201, obtain a second user request.
Step S202, according to the second user request, the host number is acquired.
In this embodiment, the second user request may be obtained, the request text may be obtained from the second user request, the client request may be obtained from the request text, and the host number set may be obtained from the client request. If the host number set is not empty, the first item is selected from the host number set as the host number.
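The host-number extraction described above can be sketched in plain Java. The header name `host-number` and the class and method names are assumptions for illustration; the patent does not name them.

```java
import java.util.List;
import java.util.Map;

// Sketch of the step above: take the host number set from the client request's
// headers and, if it is not empty, use its first item as the host number.
public class HostNumberExtractor {
    /** Returns the first host number in the request headers, or null if the set is empty. */
    public static String extractHostNumber(Map<String, List<String>> headers) {
        List<String> hostNumbers = headers.get("host-number"); // assumed header name
        if (hostNumbers == null || hostNumbers.isEmpty()) {
            return null; // empty host number set: no gray routing key
        }
        return hostNumbers.get(0); // select the first item as the host number
    }
}
```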
Step 203, obtaining a target service instance according to the host number and a first service instance set, wherein the first service instance set comprises one or more first service instances.
In the present embodiment, the specific implementation procedure of step S203 includes, but is not limited to, step S301 to step S303:
step S301, obtaining a cluster name.
In this embodiment, the cluster name of the cluster in which the currently available service is located may be obtained, where the currently available service is a service that can currently operate normally and provide the required service to the user.
Step S302, screening the first service instance set according to the host number and the cluster name to obtain a second service instance set, wherein the second service instance set comprises one or more second service instances.
In this embodiment, according to the host number and the cluster name, a screening operation is performed on the first service instance set to obtain a second service instance set, including:
according to the cluster name, performing first filtering operation on each first service instance in the first service instance set to obtain a third service instance set, wherein the third service instance set comprises a plurality of third service instances;
and carrying out second filtering operation on each third service instance in the third service instance set according to the host number to obtain a second service instance set.
In this embodiment, a first filtering operation may be performed on each first service instance in the first service instance set according to the cluster name, and only the first service instance belonging to the cluster range corresponding to the cluster name is reserved, so as to obtain a third service instance set. And carrying out second filtering operation on each third service instance in the third service instance set according to the host number, and only reserving the third service instance corresponding to the host number to obtain a second service instance set.
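The two-stage filtering above can be sketched in plain Java. The `ServiceInstance` record and field names are illustrative assumptions; in a real deployment these would come from the registry (e.g., Nacos) metadata.

```java
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

// Sketch of the two filtering operations: first keep instances in the target
// cluster, then keep instances whose metadata matches the request's host number.
public class InstanceFilter {
    /** Minimal stand-in for a registry service instance (assumed shape). */
    public record ServiceInstance(String name, String cluster, String hostNumber) {}

    /** First filter: keep only instances belonging to the given cluster. */
    public static List<ServiceInstance> filterByCluster(List<ServiceInstance> all, String clusterName) {
        return all.stream()
                .filter(i -> Objects.equals(i.cluster(), clusterName))
                .collect(Collectors.toList());
    }

    /** Second filter: keep only instances matching the request's host number. */
    public static List<ServiceInstance> filterByHostNumber(List<ServiceInstance> instances, String hostNumber) {
        return instances.stream()
                .filter(i -> Objects.equals(i.hostNumber(), hostNumber))
                .collect(Collectors.toList());
    }
}
```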
Step S303, if the second service instance set only comprises one second service instance, the second service instance is taken as a target service instance; otherwise, randomly selecting a second service instance from the second service instance set as the target service instance.
In this embodiment, the number of second service instances in the second service instance set may be determined. If the set contains only one second service instance, that instance is taken as the target service instance; otherwise, a polling strategy is executed, and one second service instance is selected from the set as the target service instance by a random-number or remainder method. Illustratively, if the second service instance set contains 10 second service instances and the obtained random number is 88, the remainder of dividing the random number 88 by the instance count 10 is 8, and the 8th second service instance may be selected from the set as the target service instance.
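The selection step can be sketched as follows: a single match is used directly, otherwise the remainder of the random number divided by the instance count is used as the index. The class and method names are illustrative assumptions.

```java
import java.util.List;

// Sketch of the selection step above: one candidate -> take it directly;
// several candidates -> remainder method (randomNumber % size) picks the index.
public class InstanceSelector {
    public static <T> T select(List<T> candidates, int randomNumber) {
        if (candidates.isEmpty()) {
            return null; // no matching service instance available
        }
        if (candidates.size() == 1) {
            return candidates.get(0); // only one match: it is the target instance
        }
        return candidates.get(randomNumber % candidates.size()); // remainder method
    }
}
```

With 10 candidates and random number 88, the remainder is 8, matching the example in the text.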
Step S14, a first user request is obtained from the front-end application.
In this embodiment, acquiring the first user request from the front-end application includes:
installing a request interceptor into the front-end application and adding a second metadata parameter to the request interceptor according to Axios;
the first user request is obtained by a request interceptor.
In this embodiment, a request interceptor from the Axios network request library may be installed in the front-end application, and the second metadata parameter may be added to the request interceptor, where the front-end application may include a client or a browser. For example, the request interceptor may be installed by adding `axios.defaults.baseURL` to the front-end application, and the second metadata parameter may be added to the request interceptor by setting `axios.headers.metadata=${metadata}` in the front-end application. The first user request may then be obtained through the request interceptor.
And S15, according to the first user request and the target application container, performing micro-service gray level release through a load balancing rule.
In this embodiment, the first user request may be passed into the target application container and processed by the load balancing rule, so that different services can be provided to users in different groups, thereby realizing micro-service gray release. For example, the first user request may be passed to the target application container; the load balancing rule obtains a host number from the first user request, screens a service instance that meets the requirements from the currently available services according to the host number and the cluster name, and provides it to the user, thereby realizing micro-service gray release.
In this embodiment, the overall flow of micro-service gray release is shown in fig. 2: a developer publishes the Dockerfile to an API gateway or a Docker cluster through DevOps to initialize the setup; a user request is obtained from the front-end application through the API gateway and passed to a custom routing rule. Metadata can be obtained from Nacos, and service instances are screened by the custom routing rule according to the user request and the metadata, so that corresponding service instances are provided to ordinary users and gray users. The custom routing rule may include the load balancing rule, and the Docker cluster may include the first application container.
The embodiment of the invention has the following beneficial effects: the method first acquires an application container to be processed according to a preset application container engine and preprocesses it to obtain a first application container; a load balancing rule is then installed on the first application container to obtain a target application container; finally, a first user request is obtained from the front-end application, and micro-service gray level release is performed through the load balancing rule. Micro-service gray level release is thus realized, service routing cost is reduced, and universality across different containerized platforms is improved.
In this embodiment, fixed-point delivery of content can be accurately controlled: gray level release serves as a custom-rule release engine, and the APP or web page can be delivered accurately according to custom labels such as region, user group, and time period, so as to meet the fine-grained delivery requirements of enterprises. This embodiment can also improve user experience by providing different functional experiences for different user groups and collecting feedback on new-version usage, so as to optimize the system and test system functions. This embodiment can also reduce the impact of a faulty release: when a problem occurs, a rollback can be performed immediately, preventing the impact from spreading.
The embodiment of the invention also provides a micro-service gray level release system, which comprises:
the first module is used for acquiring an application container to be processed according to a preset application container engine;
the second module is used for preprocessing the application container to be processed to obtain a first application container;
the third module is used for installing the load balancing rule to the first application container to obtain a target application container;
a fourth module for obtaining a first user request from the front-end application;
and a fifth module, configured to perform micro-service gray level publishing according to the first user request and the target application container through the load balancing rule.
The content in the method embodiment is applicable to the system embodiment, the functions specifically realized by the system embodiment are the same as those of the method embodiment, and the achieved beneficial effects are the same as those of the method embodiment.
The embodiment of the invention also provides a micro-service gray level publishing device, which comprises:
at least one memory for storing a program;
at least one processor for loading the program to perform the micro-service gray level release method shown in fig. 1.
The content in the method embodiment is applicable to the embodiment of the device, and the functions specifically realized by the embodiment of the device are the same as those of the method embodiment, and the obtained beneficial effects are the same as those of the method embodiment.
The embodiment of the invention also provides a storage medium storing a computer executable program which, when executed by a processor, implements the micro-service gray level release method shown in fig. 1.
The content in the method embodiment is applicable to the storage medium embodiment, and functions specifically implemented by the storage medium embodiment are the same as those of the method embodiment, and the achieved beneficial effects are the same as those of the method embodiment.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the above embodiment, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present invention, and these equivalent modifications and substitutions are intended to be included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. A micro-service gray level release method, characterized by comprising the following steps:
acquiring an application container to be processed according to a preset application container engine;
preprocessing the application container to be processed to obtain a first application container;
installing a load balancing rule to the first application container to obtain a target application container;
acquiring a first user request from a front-end application;
and according to the first user request and the target application container, performing micro-service gray level release through the load balancing rule.
2. The micro-service gray level release method according to claim 1, wherein preprocessing the application container to be processed to obtain the first application container comprises:
setting dynamic parameters in the application container to be processed to obtain a second application container;
installing a registration configuration center to the second application container, and adding the first metadata parameter to the registration configuration center to obtain a third application container;
installing target dependencies to the third application container, resulting in the first application container, the target dependencies including load balancing dependencies or gateway dependencies.
3. The micro-service gray level release method according to claim 1, wherein installing the load balancing rule to the first application container to obtain the target application container comprises:
acquiring the load balancing rule;
and installing the load balancing rule to the first application container through a SpringCloud framework to obtain the target application container.
4. The micro-service gray level release method according to claim 3, wherein execution of the load balancing rule comprises:
acquiring a second user request;
acquiring a host number according to the second user request;
and obtaining a target service instance according to the host number and a first service instance set, wherein the first service instance set comprises one or more first service instances.
5. The micro-service gray level release method according to claim 4, wherein obtaining the target service instance according to the host number and the first service instance set comprises:
acquiring a cluster name;
screening the first service instance set according to the host number and the cluster name to obtain a second service instance set, wherein the second service instance set comprises one or more second service instances;
if the second service instance set only comprises one second service instance, the second service instance is taken as the target service instance; otherwise, randomly selecting one second service instance from the second service instance set as the target service instance.
6. The micro-service gray level release method according to claim 5, wherein screening the first service instance set according to the host number and the cluster name to obtain the second service instance set comprises:
according to the cluster names, performing first filtering operation on each first service instance in the first service instance set to obtain a third service instance set, wherein the third service instance set comprises a plurality of third service instances;
and carrying out second filtering operation on each third service instance in the third service instance set according to the host number to obtain the second service instance set.
7. The micro-service gray level release method according to claim 1, wherein obtaining the first user request from the front-end application comprises:
installing a request interceptor into the front-end application according to the Axios network request library, and adding a second metadata parameter into the request interceptor;
and acquiring a first user request through the request interceptor.
8. A micro-service gray level release system, comprising:
the first module is used for acquiring an application container to be processed according to a preset application container engine;
the second module is used for preprocessing the application container to be processed to obtain a first application container;
the third module is used for installing a load balancing rule to the first application container to obtain a target application container;
a fourth module for obtaining a first user request from the front-end application;
and a fifth module, configured to perform micro-service gray level release through the load balancing rule according to the first user request and the target application container.
9. A micro-service gray level release device, comprising:
at least one memory for storing a program;
at least one processor for loading the program to perform the micro-service gray level release method according to any one of claims 1-7.
10. A storage medium having stored therein a computer executable program, wherein the computer executable program, when executed by a processor, implements the micro-service gray level release method according to any one of claims 1-7.
CN202311309516.XA 2023-10-10 2023-10-10 Micro-service gray level publishing method, system, device and medium Pending CN117492904A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311309516.XA CN117492904A (en) 2023-10-10 2023-10-10 Micro-service gray level publishing method, system, device and medium

Publications (1)

Publication Number Publication Date
CN117492904A (en) 2024-02-02

Family

ID=89671596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311309516.XA Pending CN117492904A (en) 2023-10-10 2023-10-10 Micro-service gray level publishing method, system, device and medium

Country Status (1)

Country Link
CN (1) CN117492904A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination