CN110944067A - Load balancing method and server - Google Patents

Load balancing method and server

Info

Publication number
CN110944067A
CN110944067A
Authority
CN
China
Prior art keywords
micro
service
load balancing
request message
strategies
Prior art date
Legal status
Granted
Application number
CN201911382060.3A
Other languages
Chinese (zh)
Other versions
CN110944067B (en)
Inventor
李林锋
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201911382060.3A
Publication of CN110944067A
Application granted
Publication of CN110944067B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004: Server selection for load balancing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present application provides a load balancing method and a server. The method includes: a micro-service consumer obtains a first micro-service invocation request message; the micro-service consumer orchestrates a plurality of load balancing policies according to the first micro-service invocation request message to obtain a plurality of orchestrated load balancing policies; the micro-service consumer determines a first target micro-service node from a plurality of micro-service nodes according to the plurality of orchestrated load balancing policies; and the micro-service consumer initiates a micro-service call to the first target micro-service node. The solution of the embodiments of the present application helps meet the micro-service load balancing requirements of increasingly complex services.

Description

Load balancing method and server
Technical Field
The present application relates to the field of cloud computing microservices, and more particularly, to a load balancing method and a server.
Background
In recent years, service-oriented and even micro-service-oriented business technology architectures have become the mainstream trend. As the number of applications grows, interaction between applications is inevitable. Core services are extracted along functional-module boundaries and operated as independent services, gradually forming a stable service center. This allows front-end applications to respond more quickly to changing market demands, allows common back-end capabilities to be shared, and reduces duplicated development cost.
At present, large Internet enterprises at home and abroad (such as Alibaba and JD.com) have adopted service-oriented and even micro-service architectures. For example, AWS uses its self-developed Coral Service framework to build its own services as micro-services, and Netflix uses its service framework to split the foreground and background of its video business and turn it into micro-services. In the operator field, systems such as China Mobile Migu and third-generation CRM have also completed service-oriented transformation, and other domestic and overseas operators are carrying out similar transformations.
With the development of services, and in order to ensure high availability of the system, business micro-services need to be deployed in different machine rooms. Because the increase in network transmission delay between different machine rooms, especially machine rooms far apart (more than 100 km), can increase micro-service invocation latency and degrade user experience, the micro-service load balancing policy is usually required to support machine-room-priority load balancing. From the business perspective, in order to compare the effects of two different recommendation algorithms, two recommendation micro-services of different versions often need to be deployed in the production environment, and during routing, requests from different users need to be distributed to the different recommendation micro-service versions according to user characteristics, that is, a load balancing policy based on user characteristics. Viewed from these different dimensions, the routing policies of business micro-services differ, overlap, and affect one another. A traditional micro-service load balancing policy supports policy settings in a single dimension only; the policies override one another and cannot fully meet business requirements.
Disclosure of Invention
The present application provides a load balancing method and a server, which help meet the increasingly complex micro-service load balancing requirements of services.
In a first aspect, a load balancing method is provided. The method is applied to a micro-service consumer, where the micro-service consumer includes a plurality of load balancing policies, and the method includes: the micro-service consumer obtains a first micro-service invocation request message; the micro-service consumer orchestrates the plurality of load balancing policies according to the first micro-service invocation request message to obtain a plurality of orchestrated load balancing policies; the micro-service consumer determines a first target micro-service node from a plurality of micro-service nodes according to the plurality of orchestrated load balancing policies; and the micro-service consumer initiates a micro-service call to the first target micro-service node.
In this embodiment of the application, a load balancing policy orchestration engine at the micro-service consumer orchestrates the plurality of load balancing policies and, by scheduling and executing them, selects the most suitable target micro-service node from the plurality of micro-service nodes and initiates the micro-service call, thereby meeting increasingly complex micro-service load balancing requirements.
In some possible implementations, each load balancing policy is an atomic load balancing policy.
In some possible implementations, the load balancing policies include a cross-machine-room load balancing policy, a user-feature-rule load balancing policy, a random load balancing policy, and the like.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the micro-service consumer obtains a second micro-service invocation request message; the micro-service consumer inserts a first load balancing policy into the plurality of orchestrated load balancing policies according to the second micro-service invocation request message to obtain a plurality of re-orchestrated load balancing policies; the micro-service consumer determines a second target micro-service node from the plurality of micro-service nodes according to the plurality of re-orchestrated load balancing policies; and the micro-service consumer initiates a micro-service call to the second target micro-service node.
In this embodiment of the application, when the micro-service consumer obtains the second micro-service invocation request message, it can re-orchestrate the plurality of load balancing policies, for example by inserting a new load balancing policy into the policies obtained from the previous orchestration and then executing the re-orchestrated policies. In this way, the load balancing policies do not have to be re-coded whenever a new micro-service invocation request is obtained, avoiding repeated development and additional maintenance cost.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the micro-service consumer obtains a third micro-service invocation request message; the micro-service consumer replaces a second load balancing policy in the plurality of orchestrated load balancing policies with a third load balancing policy according to the third micro-service invocation request message to obtain a plurality of re-orchestrated load balancing policies; the micro-service consumer determines a third target micro-service node from the plurality of micro-service nodes according to the plurality of re-orchestrated load balancing policies; and the micro-service consumer initiates a micro-service call to the third target micro-service node.
In this embodiment of the application, when the micro-service consumer obtains the third micro-service invocation request message, it can re-orchestrate the plurality of load balancing policies, for example by replacing one of the previously orchestrated policies with a new load balancing policy and then executing the re-orchestrated policies, thereby avoiding re-coding all load balancing policies when a new micro-service invocation request is obtained and avoiding repeated development and additional maintenance cost.
With reference to the first aspect, in some implementations of the first aspect, the orchestrating, by the micro-service consumer, of the load balancing policies according to the first micro-service invocation request message includes: the micro-service consumer determines user information according to the first micro-service invocation request message, where the user information includes information about the user's place of registration and the version number of the application; and the micro-service consumer orchestrates the load balancing policies according to the user information.
In this embodiment of the application, the micro-service consumer can determine the user information from the obtained micro-service invocation request message, orchestrate the plurality of load balancing policies according to the user information, and finally select the most suitable target micro-service node from the plurality of micro-service nodes to initiate the micro-service call, thereby meeting increasingly complex micro-service load balancing requirements.
With reference to the first aspect, in some implementations of the first aspect, the obtaining, by the micro-service consumer, of the first micro-service invocation request message includes: the micro-service consumer receives the micro-service invocation request sent by a micro-service client.
In some possible implementations, the micro-service consumer may also generate the micro-service invocation request message through a scheduled task, or obtain it through interface configuration.
In a second aspect, a server is provided, including: one or more processors; a memory; one or more application programs; and one or more computer programs. The one or more computer programs are stored in the memory and include instructions. When the instructions are executed by the server, the server is caused to perform the load balancing method in any one of the possible implementations of the first aspect.
In a third aspect, the present technical solution provides a computer storage medium, which includes computer instructions, and when the computer instructions are run on an electronic device, the electronic device is enabled to execute the load balancing method in any one of the possible implementations of the first aspect.
In a fourth aspect, the present disclosure provides a computer program product, which when run on an electronic device, causes the electronic device to perform the load balancing method in any one of the possible designs of the first aspect.
Drawings
Fig. 1 is a schematic diagram of a system architecture provided in an embodiment of the present application.
Fig. 2 is a schematic diagram of another system architecture provided in the embodiment of the present application.
Fig. 3 is a schematic diagram of the working principle of a responsibility chain for load balancing policy orchestration according to an embodiment of the present application.
Fig. 4 is a schematic diagram of networking across provinces A, B, and C.
Fig. 5 is a schematic flowchart of a load balancing method provided in an embodiment of the present application.
Fig. 6 is a schematic flowchart of a load balancing method provided in an embodiment of the present application.
Fig. 7 is a schematic block diagram of a server provided in an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Before describing the technical solutions of the embodiments of the present application, several terms referred to in the embodiments of the present application are explained first.
Micro-service: a micro-service architecture is a software architecture style. It is based on small building blocks that each focus on a single responsibility and function, and it composes complex, large-scale applications in a modular manner, with the functional blocks communicating with one another through language-independent (language-agnostic) application programming interfaces (APIs).
It should be understood that micro-services in the embodiments of the present application are construed broadly, including service-oriented architecture (SOA) services, services developed on a distributed service framework, and business services/micro-services built on an open-source service framework.
Load balancing: computer technology is used to distribute load among multiple computers (computer clusters), network connections, Central Processing Units (CPUs), disk drives, or other resources to optimize resource usage, maximize throughput, minimize response time, and avoid overloading.
Micro-service load balancing selects a target micro-service node from one or more micro-service clusters and initiates a micro-service call, so as to provide redundancy and reliability of the service and to execute specific business logic, for example invoking different micro-service versions according to user characteristics.
Fig. 1 is a schematic diagram of a system architecture provided in an embodiment of the present application. The system includes a micro-service consumer and micro-service provider nodes. The micro-service consumer takes the micro-service request message and the target micro-service list as input; when a load balancing policy is executed, rule matching is performed according to the micro-service request message, and micro-service nodes that satisfy the rule conditions are selected from the input target micro-service list, after which a micro-service call is initiated. As shown in fig. 1, after the micro-service consumer executes the load balancing policy, micro-service node 1, micro-service node 2, ..., or micro-service node N may be selected for the micro-service call.
Current load balancing policies include single-dimension load balancing policies and custom (self-defined) load balancing policies. Single-dimension load balancing policies include random, polling, cross-machine-room (or machine-room-priority) policies, and the like.
For example, because the increase in network transmission delay between different machine rooms, especially machine rooms far apart (e.g., more than 100 km), can increase micro-service invocation latency and degrade user experience, the micro-service load balancing policy is generally required to support machine-room-priority load balancing.
For example, in order to compare the effects of two different recommendation algorithms, two recommendation micro-services of different versions are often deployed in the production environment; during routing, requests from different users need to be distributed to the different recommendation micro-service versions according to user characteristics, that is, a load balancing policy based on user characteristics.
At present, when a micro-service consumer executes the single-dimension load balancing policy of the micro-service framework, the service can set policies only in a single dimension; the load balancing policies override one another, and a complex load balancing policy cannot be realized.
To work around the mutual override of traditional micro-service load balancing policies, a service can meet its requirements only by completely rewriting the load balancing policy of the micro-service framework.
For example, a customized load balancing policy may consist of two single-dimension load balancing policies: (1) preferentially select the micro-service cluster list of the local machine room; (2) select a specific micro-service node through a random algorithm.
If the service has two recommendation algorithms, the traditional approach is to bring the v1 recommendation micro-service online and collect operational data, then bring the v2 recommendation micro-service online and collect its operational data, and finally compare the operational data of the two versions to select the better algorithm. To speed up this verification, the service may want to bring both recommendation micro-service versions online at the same time, perform load balancing according to user characteristics, and route different user requests to different recommendation micro-service versions (different recommendation algorithms). The previous load balancing policy then needs to be modified as follows: (1) preferentially select the micro-service cluster list of the local machine room; (2) select the recommendation micro-service of the corresponding version according to user characteristics; (3) finally, select a specific micro-service node through a random algorithm.
When the service has a new load balancing requirement, the original load balancing policy has to be modified substantially; and because the load balancing policy changes with the deployment location and reliability of the micro-service and with the service's business requirements, the cost of meeting business requirements by rewriting a custom load balancing policy is too high. A custom load balancing policy has the following drawbacks:
(1) High development cost: if the service does not use the load balancing policy provided by the micro-service framework, the load balancing policy must be rewritten from scratch, which is technically difficult and labor-intensive. In addition, the load balancing policy also carries auxiliary functions such as failure retry and micro-service fault isolation, so a complete rewrite by the service is too costly.
(2) Poor extensibility: if a load balancing policy of a new dimension is added, the service has to modify its existing load balancing code, so extensibility is poor.
Random load balancing policies, cross-machine-room load balancing policies, user-characteristic-based load balancing policies, and the like are all single-dimension, atomic load balancing policies that most micro-service frameworks already support, so services do not need to redevelop them. If these atomic load balancing policies could be orchestrated by an engine, more complex load balancing policies could be composed from them, flexibly handling a variety of complex business load balancing scenarios.
An embodiment of the present application provides a micro-service load balancing solution in which a load balancing policy orchestration engine orchestrates single-dimension micro-service load balancing policies, flexibly combining and arranging multiple atomic load balancing policies, so that load balancing across multiple dimensions is achieved by configuration and the various differentiated load balancing requirements of services are met more flexibly.
The load balancing policy orchestration engine provided in the embodiments of the present application combines and executes multiple load balancing policies by configuring their names and order: a load balancing policy chain (a load balancing policy orchestration responsibility chain) schedules and executes the single-dimension load balancing policies in sequence, realizing the combination of multi-dimensional load balancing policies. In addition, to support service customization and extension, a single-dimension load balancing policy extension point is provided: a single-dimension load balancing policy extended by the user can be added into a designated responsibility chain, so that function customization and extension are achieved while the existing multi-dimensional load balancing policies are reused to the greatest extent.
Fig. 2 is a schematic diagram of a system architecture provided in an embodiment of the present application. The system architecture includes a micro-service consumer, a load balancing policy orchestration engine, and target micro-service nodes. The load balancing policy orchestration engine is integrated with the micro-service and runs in the same process; it is responsible for combining the single-dimension load balancing policies and running them as a responsibility chain, and mainly provides the following functions:
(1) A load balancing policy orchestration responsibility chain (load balance chain), which combines the single-dimension load balancing policies and runs them in responsibility-chain mode.
(2) A configuration mechanism for orchestrating the single-dimension load balancing policies, so that the micro-service can flexibly assemble the required load balancing policy as needed.
The load balancing policy orchestration engine also includes a load balancing policy extension interface. When the atomic load balancing policies already available in the micro-service framework and the service cannot meet business requirements, a new atomic load balancing policy can be extended through this extension interface.
Meanwhile, using the configured load balancing policy extension interface, the newly extended atomic load balancing policy can be added at the required position of the responsibility chain, composing a new load balancing policy.
Illustratively, as shown in fig. 2, the responsibility chain for load balancing policy orchestration includes existing load balancing policy 2, existing load balancing policy 3, and existing load balancing policy 4. If the existing load balancing policies cannot meet the business requirement, extended load balancing policy 1 can be added before existing load balancing policy 2, and extended load balancing policy 5 can be added after existing load balancing policy 4.
For example, the existing load balancing policies may be existing single-dimension load balancing policies, including a user-characteristic load balancing policy, a random load balancing policy, a cross-machine-room load balancing policy, and the like.
It should be understood that the arrangement order of extended load balancing policy 1, existing load balancing policy 2, existing load balancing policy 3, existing load balancing policy 4, and extended load balancing policy 5 is not limited to that shown in fig. 2; the load balancing policy orchestration engine may arrange the policies in other orders.
The working principle of the chain of responsibility for load balancing policy orchestration is described below.
The load balancing policy orchestration responsibility chain combines the single-dimension load balancing policies and runs them in responsibility-chain mode. Fig. 3 shows a schematic diagram of the working principle of the load balancing policy orchestration responsibility chain.
Execution principle of the load balancing policy responsibility chain: the input parameters of each load balancing policy are the micro-service request message context and a target micro-service node list. When the current load balancing policy is executed, rule matching is performed according to the micro-service request parameters, and the micro-service nodes that satisfy the rule conditions are selected from the input target micro-service node list. The next load balancing policy is then invoked; its input parameters are the micro-service request message context and the node list produced by the previous load balancing policy. The target micro-service nodes are filtered in this way, policy by policy, until the last load balancing policy has been executed, a unique target micro-service node has been selected, and the micro-service call is initiated.
It should be understood that, in this embodiment of the application, a micro-service call must send the micro-service request message to the micro-service node at a specific IP address and port, so the finally selected micro-service node is unique.
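To make this execution principle concrete, the following is a minimal, illustrative sketch only (it is not the patent's own code; the RouterPolicy, ServiceNode, and RequestContext names and fields are assumptions made for illustration) of how a responsibility chain of single-dimension policies could successively filter a node list:

import java.util.*;

// Assumed helper types (not from the patent): a candidate node and the request context.
class ServiceNode {
    String ip; int port;
    String machineRoom; String version; double load;
}
class RequestContext {
    Map<String, String> attributes = new HashMap<>();   // e.g. user province, app version
}

// A single-dimension atomic policy: filters the candidate nodes according to the request.
interface RouterPolicy {
    List<ServiceNode> select(RequestContext ctx, List<ServiceNode> candidates);
}

// The orchestration responsibility chain: runs the configured policies in order,
// each policy receiving the nodes kept by the previous one.
class LoadBalanceChain {
    private final List<RouterPolicy> policies;
    LoadBalanceChain(List<RouterPolicy> policies) { this.policies = policies; }

    ServiceNode route(RequestContext ctx, List<ServiceNode> candidates) {
        List<ServiceNode> remaining = candidates;
        for (RouterPolicy policy : policies) {
            remaining = policy.select(ctx, remaining);
        }
        // The last policy in the chain is expected to leave exactly one node.
        return remaining.get(0);
    }
}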
A single-dimension atomic load balancing policy, as one node of the responsibility chain, holds two position pointers, namely the previous load balancing policy and the next load balancing policy, so that the responsibility chain can orchestrate and execute the policies. An example definition is as follows:
private RouterPolicy preNode;    // the previous load balancing policy in the chain
private RouterPolicy nextNode;   // the next load balancing policy in the chain
Multiple load balancing policies are orchestrated by configuration in the micro-service configuration file; an example is as follows:
Micro-service configuration file (microservice.yaml):
Loadbalance Chain:
  RouterPolicy: DarklaunchRouter, RandomRouter, XXXRouter
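As a hedged illustration of how such a configuration could be turned into an executable chain (the registry and the placeholder rules below are assumptions for this sketch, not part of the patent; RouterPolicy and LoadBalanceChain are the types sketched above):

// Sketch: map configured policy names to policy instances and build the chain in order.
// (Inside some initialization code of the micro-service consumer.)
Map<String, RouterPolicy> registry = new HashMap<>();
registry.put("DarklaunchRouter", (ctx, nodes) -> nodes);   // placeholder dark-launch rule (pass-through here)
registry.put("RandomRouter", (ctx, nodes) ->
        List.of(nodes.get(new Random().nextInt(nodes.size()))));   // random final pick

List<String> configuredNames = List.of("DarklaunchRouter", "RandomRouter");   // as read from microservice.yaml
List<RouterPolicy> ordered = new ArrayList<>();
for (String name : configuredNames) {
    ordered.add(registry.get(name));
}
LoadBalanceChain chain = new LoadBalanceChain(ordered);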
The load balancing policy of the embodiments of the present application is described below with reference to a specific example.
Assume that the application market's recommendation micro-service has two different versions (v1 and v2), and that the recommendation micro-services are deployed as clusters at multiple locations across the country. Fig. 4 shows a schematic diagram of the networking in provinces A, B, and C.
The load balancing policy expected by the service is as follows:
(1) The recommendation micro-service in the local machine room is called preferentially; if the micro-service in the local machine room is unavailable, the recommendation micro-service deployed in a remote machine room is called instead.
(2) Routing is performed according to the user's place of registration: users of province A are routed to the v2 version of the recommendation micro-service, and users of other provinces are routed to the v1 version.
(3) Finally, a random load balancing policy is applied: after the target machine room and the target micro-service version list have been determined, the user request is randomly distributed to one of the micro-service nodes.
A single-dimension load balancing policy alone cannot achieve this business requirement. If a fully custom load balancing policy is adopted, the atomic load balancing capabilities already present in the micro-service framework cannot be reused and everything must be rewritten; the simplified example code is as follows:
(The simplified example code appears only as images in the original publication and is not reproduced here.)
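Purely to illustrate the kind of monolithic rewrite this passage describes (the patent's own example cannot be recovered from the images, so every class name, field, and rule below is an assumption), a hand-written custom policy mixing all three dimensions might look like:

// Illustrative sketch only: one monolithic, hard-coded policy instead of reusable atomic policies.
// (Uses java.util.List, ArrayList, Random and the ServiceNode/RequestContext stubs sketched earlier.)
class CustomRecommendRouter {
    private final Random random = new Random();

    ServiceNode select(RequestContext ctx, List<ServiceNode> nodes) {
        // (1) Prefer nodes in the same machine room as the consumer.
        List<ServiceNode> sameRoom = new ArrayList<>();
        for (ServiceNode n : nodes) {
            if (n.machineRoom.equals(ctx.attributes.get("localMachineRoom"))) {
                sameRoom.add(n);
            }
        }
        List<ServiceNode> candidates = sameRoom.isEmpty() ? nodes : sameRoom;

        // (2) Route province-A users to version v2, other users to v1 (business rule baked into code).
        String wantedVersion = "A".equals(ctx.attributes.get("userProvince")) ? "v2" : "v1";
        List<ServiceNode> versioned = new ArrayList<>();
        for (ServiceNode n : candidates) {
            if (wantedVersion.equals(n.version)) {
                versioned.add(n);
            }
        }
        if (!versioned.isEmpty()) {
            candidates = versioned;
        }

        // (3) Random final pick; switching this to polling for another service means editing code again.
        return candidates.get(random.nextInt(candidates.size()));
    }
}

Because the machine-room rule, the version rule, and the random pick are fused into one class, reusing part of it for a service that needs polling instead of random selection is not possible without touching the code, which is exactly the drawback described next.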
After such a custom load balancing policy is adopted, the load balancing business logic has to be rewritten by overriding, and the existing load balancing policies of the micro-service framework cannot be reused. For other micro-services, such as a promotion micro-service, the load balancing policy is similar to that of the recommendation micro-service but differs in detail: the promotion micro-service uses a polling policy instead of a random policy, so the service has to modify code to satisfy this differentiated load balancing policy, which brings a great deal of unproductive repeated development and additional maintenance cost.
With the load balancing policy responsibility chain provided in the embodiments of the present application, the service does not need to develop any additional code; it only needs to orchestrate, by configuration and according to its requirements, the atomic micro-service load balancing policies that already exist, composing a multi-dimensional load balancing policy that meets the business requirement. Fig. 5 shows a schematic flowchart of a load balancing method provided in an embodiment of the present application. The method includes the following steps:
s501, the micro-service consumption end sends a micro-service request message context and a target micro-service node list to the load balancing strategy arranging engine.
And S502, the load balancing strategy engine generates a load balancing strategy responsibility chain according to the sequence according to a load balancing strategy list configured in the micro service request message context.
S503, the load balancing policy engine calls the cross-machine-room load balancing policy provided by the micro-service framework and filters a first micro-service node set out of the target micro-service node list.
For example, the micro-service nodes in the first micro-service node set may be the micro-service nodes deployed in the same machine room as the micro-service consumer.
S504, the load balancing policy engine calls the user-characteristic load balancing policy provided by the micro-service framework and further filters the micro-service nodes selected in S503 to obtain a second micro-service node set.
For example, if the request comes from a user of province A, the second micro-service node set may be the nodes in the first micro-service node set whose micro-service version is v2.
S505, the load balancing policy engine calls the random load balancing policy provided by the micro-service framework and further filters the micro-service nodes selected in S504 to obtain the target micro-service node.
For example, the load balancing policy engine may use a random algorithm to select one or more micro-service nodes from the second micro-service node set to form a third micro-service node set.
It should be understood that S503 to S505 can be understood as the execution process of the load balancing policies.
S506, the load balancing policy engine initiates a micro-service call to the target micro-service node.
In the load balancing method of this embodiment, a load balancing policy orchestration engine orchestrates multiple atomic load balancing policies: the user-configured load balancing policies are scheduled and executed through the load balancing policy responsibility chain, the most suitable target micro-service node is finally selected in the cluster environment, and the micro-service call is initiated, so that the increasingly complex micro-service load balancing requirements of the service are met.
The method of this embodiment can also reuse the atomic load balancing policies that the micro-service framework already provides and, through scheduling by the load balancing policy orchestration engine, flexibly assemble more complex load balancing policies that meet differentiated service load balancing requirements. Compared with the traditional approach of writing code, the configured load balancing orchestration policy is simpler and more flexible.
For the promotion micro-service, if a polling load balancing policy is to be used, a new micro-service load balancing policy can be orchestrated simply by modifying the configuration, without modifying code logic again as in the prior art, for example:
Router Chain:
  RouterPolicy: DataCenterRouter, UserRuleRouter, PollingRouter
In the embodiments of the present application, to support service customization and extension, a single-dimension atomic load balancing policy extension interface is provided; a single-dimension load balancing policy extended by the user can be added into a designated responsibility chain, so that function customization and extension are achieved while the existing multi-dimensional load balancing policies are reused to the greatest extent.
The service inherits and implements the single-dimension load balancing policy extension point interface to extend a new single-dimension load balancing policy, and orchestrates the extended load balancing policy together with the platform's existing load balancing policies according to the business scenario, realizing a more complex load balancing policy.
An example of the single-dimension atomic load balancing policy extension point interface definition is as follows:
(The interface definition appears only as an image in the original publication and is not reproduced here.)
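Since the interface itself cannot be recovered from the image, the following is only a plausible sketch of what such an extension point could look like (the method names and the name() convention are assumptions, kept consistent with the RouterPolicy style sketched earlier):

// Sketch of the single-dimension atomic load balancing policy extension point.
// A business-defined policy implements this interface and is then placed into the
// responsibility chain purely through configuration.
public interface RouterPolicy {
    // Filter the candidate micro-service nodes according to the request context;
    // the returned list is handed to the next policy in the chain.
    List<ServiceNode> select(RequestContext ctx, List<ServiceNode> candidates);

    // The name under which this policy is referenced in the configuration file.
    String name();
}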
The newly extended load balancing policy based on load weight is combined and orchestrated with the other load balancing policies to form a new micro-service policy. A configuration example is as follows:
Router Chain:
  RouterPolicy: DataCenterRouter, UserRuleRouter, LoadRouter
The load balancing policy finally realized has the following effects:
(1) The recommendation micro-service in the local machine room is called preferentially; if the micro-service in the local machine room is unavailable, the recommendation micro-service deployed in a remote machine room is called instead.
(2) Routing is performed according to the user's place of registration: users of province A are routed to the v2 version of the recommendation micro-service, and users of other provinces are routed to the v1 version.
(3) Finally, routing is performed according to the load of the micro-service: the 100 most lightly loaded micro-service nodes are selected, and the user requests from province A are distributed among these 100 micro-service nodes. (An illustrative sketch of such a load-based policy is given below.)
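As a hedged illustration of the load-weight policy configured above as LoadRouter (its implementation is not given in the patent; the lighter-load-first ordering and the limit of 100 nodes follow the effect description, while the load field and everything else are assumptions):

// Sketch: keep the 100 most lightly loaded nodes from the candidates kept by the previous policies.
class LoadRouter implements RouterPolicy {
    public List<ServiceNode> select(RequestContext ctx, List<ServiceNode> candidates) {
        List<ServiceNode> sorted = new ArrayList<>(candidates);
        sorted.sort(Comparator.comparingDouble((ServiceNode n) -> n.load));   // lighter load first
        return sorted.subList(0, Math.min(100, sorted.size()));
    }
    public String name() { return "LoadRouter"; }
}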
In the embodiments of the present application, when the existing single-dimension atomic load balancing policies cannot meet a requirement, functionality is extended through the load balancing policy extension interface. The newly extended load balancing policy can be combined with the existing load balancing policies by configuration, forming a new, more complex load balancing policy that meets the business requirement.
Compared with the existing approach of overriding and rewriting, extending the load balancing policy by adding and combining in this way is simpler and more flexible.
The embodiments of the present application provide a technique for orchestrating and scheduling multiple atomic micro-service load balancing policies: a load balancing policy orchestration engine orchestrates the multiple atomic micro-service policies configured by the user and assembles a more complex, multi-dimensional load balancing policy. This solves the problems that traditional single-dimension load balancing policies override one another and that a custom load balancing policy requires rewriting the load balancing algorithm completely, which entails a heavy workload and poor flexibility and extensibility.
The load balancing policy orchestration engine assembles the single-dimension load balancing policies through a load balancing policy orchestration responsibility chain and schedules and executes them in responsibility-chain mode. The user adjusts the single-dimension load balancing policies by configuration and can flexibly assemble more complex, multi-dimensional (reliability, user experience, business dimension, and so on) load balancing policies.
The extensible load balancing policy technique extends a new single-dimension load balancing policy by inheriting and implementing the single-dimension load balancing policy interface. The newly extended load balancing policy is added into the load balancing policy orchestration responsibility chain by configuration, extending a new service-customized load balancing policy.
Fig. 6 shows a schematic flowchart of a load balancing method 600 provided in an embodiment of the present application. As shown in fig. 6, the method 600 may be performed by a micro-service consumer that includes a plurality of atomic micro-service load balancing policies. The method 600 includes:
s601, the micro service consumption end obtains a first micro service calling request message.
In an embodiment, the micro service consumer may obtain the first micro service invocation request message from the micro service client, or the micro service consumer may also obtain the first micro service invocation request message from another place, for example, the micro service consumer may generate the micro service invocation request message through a timing task or obtain the micro service invocation request message through interface configuration, which is not limited in this embodiment of the present application.
In an embodiment, the micro service consumer may further obtain a micro service node list, where the micro service node list includes information of a plurality of micro service nodes.
S602, the micro-service consumer orchestrates the plurality of load balancing policies according to the first micro-service invocation request message to obtain a plurality of orchestrated load balancing policies.
In an embodiment, the micro-service invocation request message includes user information; for example, the user information may include one or more of the user's place of registration, the application version number, the device model of the micro-service client, the user's age, the user's gender, and the tail number of the user's mobile phone number.
For example, if the user information includes the user's place of registration and the application version number, the micro-service consumer may select, from the plurality of load balancing policies, the user-registration-place load balancing policy, the application-version-number load balancing policy, and the random load balancing policy, and execute them in that order.
S603, the micro-service consumer determines a target micro-service node from the micro-service node list according to the plurality of orchestrated load balancing policies.
Illustratively, the micro-service invocation request message includes the user's place of registration (for example, Shaanxi province) and the application version number (for example, version 1). Having determined, from the micro-service invocation request, the order in which the plurality of load balancing policies are orchestrated, the micro-service consumer can first use the place of registration to filter, from the micro-service node list (for example, a list of 100 micro-service nodes), the micro-service nodes that serve users of Shaanxi province (for example, 50 of the 100 nodes); the micro-service consumer can then use the application version number to filter, from the 50 nodes kept in the previous step, the micro-service nodes whose version number is higher than version 1 (for example, 30 of those 50 nodes); and finally the micro-service consumer selects the target micro-service node from the remaining 30 nodes according to the random load balancing policy.
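As a hedged illustration of this example (the class and field names below are assumptions rather than the patent's code, and ServiceNode is assumed here to also carry servedProvince and versionNumber fields; the interface follows the RouterPolicy extension point sketched earlier), the registration-place rule and the version-number rule could each be written as one single-dimension policy in the chain:

// Sketch: the two single-dimension filters used in the example above.
class UserRegionRouter implements RouterPolicy {
    public List<ServiceNode> select(RequestContext ctx, List<ServiceNode> candidates) {
        String province = ctx.attributes.get("userRegisteredProvince");   // e.g. "Shaanxi"
        List<ServiceNode> kept = new ArrayList<>();
        for (ServiceNode n : candidates) {
            if (province.equals(n.servedProvince)) {    // keep nodes serving that province
                kept.add(n);
            }
        }
        return kept;
    }
    public String name() { return "UserRegionRouter"; }
}

class AppVersionRouter implements RouterPolicy {
    public List<ServiceNode> select(RequestContext ctx, List<ServiceNode> candidates) {
        int requestVersion = Integer.parseInt(ctx.attributes.get("appVersion"));   // e.g. 1
        List<ServiceNode> kept = new ArrayList<>();
        for (ServiceNode n : candidates) {
            if (n.versionNumber > requestVersion) {     // keep nodes newer than the requested version
                kept.add(n);
            }
        }
        return kept;
    }
    public String name() { return "AppVersionRouter"; }
}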
S604, the micro-service consumer initiates a micro-service call to the target micro-service node.
In this embodiment of the application, after the micro-service consumer initiates the micro-service call to the target micro-service node, the micro-service consumer may also receive response information sent by the target micro-service node, where the response information may carry the result corresponding to the micro-service invocation request, for example a list of comments for an application.
In one embodiment, the method 600 further includes: the micro-service consumer obtains a second micro-service invocation request message; the micro-service consumer inserts a first load balancing policy into the plurality of load balancing policies according to the second micro-service invocation request message to obtain a plurality of re-orchestrated load balancing policies; the micro-service consumer determines a target micro-service node from the micro-service node list according to the plurality of re-orchestrated load balancing policies; and the micro-service consumer initiates a micro-service call to the target micro-service node.
For example, the second micro-service invocation request message may also include user information. Compared with the first micro-service invocation request message, the user information in the second micro-service invocation request message includes the user's place of registration (for example, Shaanxi province), the application version number (for example, version 2), and additionally the tail number of the user's mobile phone number (for example, an even tail number).
In this case, the micro-service consumer can re-orchestrate the load balancing policies by adding a mobile-phone-tail-number load balancing policy to the policies obtained from the previous orchestration, yielding the user-registration-place load balancing policy, the application-version-number load balancing policy, the mobile-phone-tail-number load balancing policy, and the random load balancing policy, to be executed in that order.
For example, having determined, from the micro-service invocation request message, the order in which the plurality of load balancing policies are orchestrated, the micro-service consumer can first use the place of registration to filter, from the micro-service node list (for example, a list of 100 micro-service nodes), the micro-service nodes that serve users of Shaanxi province (for example, 30 of the 100 nodes); then use the application version number to filter, from those 30 nodes, the micro-service nodes whose version number is higher than version 2 (for example, 20 of those 30 nodes); then use the mobile phone tail number to filter, from those 20 nodes, the micro-service nodes corresponding to even tail numbers (for example, 10 of those 20 nodes); and finally select the target micro-service node from the remaining 10 nodes according to the random load balancing policy.
It should be understood that, in this embodiment of the application, inserting the first load balancing policy into the plurality of load balancing policies may mean inserting it between any two adjacent load balancing policies, before the first of the plurality of load balancing policies, or after the last of the plurality of load balancing policies.
In one embodiment, the method 600 further includes: the micro-service consumer obtains a third micro-service invocation request message; the micro-service consumer replaces a second load balancing policy in the plurality of load balancing policies with a third load balancing policy according to the third micro-service invocation request message to obtain a plurality of re-orchestrated load balancing policies; the micro-service consumer determines a target micro-service node from the micro-service node list according to the plurality of re-orchestrated load balancing policies; and the micro-service consumer initiates a micro-service call to the target micro-service node.
For example, the third micro-service invocation request message may also include user information. Compared with the first micro-service invocation request message, the user information in the third micro-service invocation request message includes the user's place of registration (for example, Shaanxi province), the application version number (for example, version 2), and an indication of the load condition of the micro-service nodes, which indicates that a more lightly loaded micro-service node should be selected.
Illustratively, the micro-service invocation request message includes the user's place of registration (for example, Shaanxi province) and the application version number (for example, version 1). Having determined, from the micro-service invocation request, the order in which the plurality of load balancing policies are orchestrated, the micro-service consumer can first use the place of registration to filter, from the micro-service node list (for example, a list of 100 micro-service nodes), the micro-service nodes that serve users of Shaanxi province (for example, 50 of the 100 nodes); then use the application version number to filter, from those 50 nodes, the micro-service nodes whose version number is higher than version 1 (for example, 30 of those 50 nodes); and finally select, from the remaining 30 nodes and according to the load condition of the micro-services, the most lightly loaded micro-service node as the target micro-service node.
The load balancing method provided in the embodiments of the present application has been described above with reference to fig. 1 to fig. 6; the server provided in the embodiments of the present application is described below.
Fig. 7 is a schematic block diagram of a server provided in an embodiment of the present application. It can be understood that, to implement the functions of the micro-service consumer described above, the server includes corresponding hardware and/or software modules for performing the respective functions. In combination with the example algorithm steps described for the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. A person skilled in the art may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as going beyond the scope of the present application.
In this embodiment, the electronic device may be divided into functional modules according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
When each functional module is divided according to its corresponding function, fig. 7 shows a possible composition of the server 700 involved in the above embodiments. As shown in fig. 7, the server 700 may include: an acquisition unit 701, an orchestration unit 702, a determination unit 703, and a micro-service invocation unit 704.
The acquisition unit 701 may be used to support the server 700 in performing the above step S601, and/or other processes of the techniques described herein.
The orchestration unit 702 may be used to support the server 700 in performing the above step S602, and/or other processes of the techniques described herein. The orchestration engine involved in the embodiments of the present application may be located in the orchestration unit 702.
The determination unit 703 may be used to support the server 700 in performing the above step S603, and/or other processes of the techniques described herein.
The micro-service invocation unit 704 may be used to support the server 700 in performing the above step S604, and/or other processes of the techniques described herein.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The server provided in this embodiment is configured to perform the above load balancing method, and therefore can achieve the same effects as the above implementations.
In case of an integrated unit, the server may comprise a processing module, a storage module and a communication module. The processing module may be configured to control and manage actions of the server, and for example, may be configured to support the server to execute steps performed by the above units. The storage module may be used to support server execution, storage of program code and data (e.g., load balancing policies), and the like. And the communication module can be used for supporting the communication between the server and the target micro service node.
The processing module may be a processor or a controller, which may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination of devices implementing computing functions, for example a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory.
The present embodiment also provides a computer storage medium, where a computer instruction is stored in the computer storage medium, and when the computer instruction runs on an electronic device, the electronic device executes the above related method steps to implement the load balancing method in the above embodiments.
The present embodiment also provides a computer program product, which when running on a computer, causes the computer to execute the relevant steps described above, so as to implement the load balancing method in the foregoing embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, and the scope of the present application is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A load balancing method, applied to a micro-service consumer, wherein the micro-service consumer comprises a plurality of load balancing strategies, and the method comprises the following steps:
the micro-service consumer acquires a first micro-service call request message;
the micro-service consumer arranges the plurality of load balancing strategies according to the first micro-service call request message to obtain a plurality of arranged load balancing strategies;
the micro-service consumer determines a first target micro-service node from a plurality of micro-service nodes according to the plurality of arranged load balancing strategies;
and the micro-service consumer initiates a micro-service call to the first target micro-service node.
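By way of non-limiting illustration only, the following minimal Java sketch shows one possible behaviour of the method of claim 1: a set of load balancing strategies is arranged per call request and then applied in turn to choose a target node. All class names, strategy names, and the request format are hypothetical assumptions introduced for the example and do not appear in this application.

```java
// Hypothetical sketch of per-request arrangement of load balancing strategies
// at the micro-service consumer. Names and request format are illustrative
// assumptions, not identifiers from this application.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PolicyChainDemo {

    /** A candidate micro-service node (address, region, current load). */
    record Node(String address, String region, int activeRequests) {}

    /** A load balancing strategy filters or reorders the candidate list. */
    interface LbStrategy {
        List<Node> apply(String request, List<Node> candidates);
    }

    /** Keeps only nodes whose region appears in the request, if any match. */
    static final LbStrategy SAME_REGION = (request, nodes) -> {
        List<Node> local = nodes.stream()
                .filter(n -> request.contains(n.region()))
                .toList();
        return local.isEmpty() ? nodes : local;
    };

    /** Orders nodes by ascending load, so the least busy node comes first. */
    static final LbStrategy LEAST_ACTIVE = (request, nodes) -> nodes.stream()
            .sorted(Comparator.comparingInt(Node::activeRequests))
            .toList();

    /** Arranges the strategies for this request: region first, then load. */
    static List<LbStrategy> arrange(String request) {
        List<LbStrategy> chain = new ArrayList<>();
        if (request.contains("region=")) {
            chain.add(SAME_REGION);
        }
        chain.add(LEAST_ACTIVE);
        return chain;
    }

    /** Applies the arranged strategies in turn and picks the first survivor. */
    static Node selectTarget(String request, List<Node> nodes) {
        List<Node> candidates = nodes;
        for (LbStrategy strategy : arrange(request)) {
            candidates = strategy.apply(request, candidates);
        }
        return candidates.get(0);
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
                new Node("10.0.0.1:8080", "cn-south", 12),
                new Node("10.0.0.2:8080", "cn-north", 3),
                new Node("10.0.0.3:8080", "cn-south", 5));
        String firstRequest = "GET /order?region=cn-south";
        System.out.println("Target node: " + selectTarget(firstRequest, nodes).address());
        // Prints 10.0.0.3:8080: same-region nodes are kept, then the least loaded wins.
    }
}
```

In this sketch the arrangement step only decides which strategies participate and in what order; a production framework would typically derive that ordering from richer routing rules.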
2. The method of claim 1, further comprising:
the micro-service consumer acquires a second micro-service call request message;
the micro-service consumer inserts a first load balancing strategy into the plurality of arranged load balancing strategies according to the second micro-service call request message to obtain a plurality of rearranged load balancing strategies;
the micro-service consumer determines a second target micro-service node from the plurality of micro-service nodes according to the plurality of rearranged load balancing strategies;
and the micro-service consumer initiates a micro-service call to the second target micro-service node.
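As a rough sketch of the insertion described in claim 2, the fragment below inserts an additional strategy into an already arranged chain when a later call request carries a particular flag. The strategy names and the canary-flag trigger are assumptions made for illustration only.

```java
// Hypothetical sketch of claim 2: a strategy is inserted into an already
// arranged chain when a second call request asks for it.
import java.util.ArrayList;
import java.util.List;

public class StrategyInsertDemo {

    public static void main(String[] args) {
        // Chain obtained by arranging the strategies for the first request.
        List<String> chain = new ArrayList<>(List.of("sameRegion", "leastActive"));

        // A second request carries a canary flag, so a version-matching
        // strategy is inserted ahead of the load-based strategy.
        String secondRequest = "GET /order?canary=true";
        if (secondRequest.contains("canary=true")) {
            chain.add(1, "matchCanaryVersion");
        }

        System.out.println("Rearranged chain: " + chain);
        // Prints [sameRegion, matchCanaryVersion, leastActive]
    }
}
```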
3. The method of claim 1, further comprising:
the micro-service consumer acquires a third micro-service call request message;
the micro-service consumer replaces a second load balancing strategy in the plurality of arranged load balancing strategies with a third load balancing strategy according to the third micro-service call request message to obtain a plurality of rearranged load balancing strategies;
the micro-service consumer determines a third target micro-service node from the plurality of micro-service nodes according to the plurality of rearranged load balancing strategies;
and the micro-service consumer initiates a micro-service call to the third target micro-service node.
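The replacement described in claim 3 can be pictured the same way: one strategy in the arranged chain is swapped for another when a later call request asks for it. Again, the strategy names and the stickiness trigger below are illustrative assumptions.

```java
// Hypothetical sketch of claim 3: one strategy in the arranged chain is
// replaced by another when a third call request asks for it.
import java.util.ArrayList;
import java.util.List;

public class StrategyReplaceDemo {

    public static void main(String[] args) {
        // Chain obtained by arranging the strategies for the first request.
        List<String> chain = new ArrayList<>(List.of("sameRegion", "leastActive"));

        // A third request asks for session stickiness, so the load-based
        // strategy is swapped for a consistent-hash strategy.
        String thirdRequest = "GET /cart?sticky=true&userId=42";
        int index = chain.indexOf("leastActive");
        if (thirdRequest.contains("sticky=true") && index >= 0) {
            chain.set(index, "consistentHash");
        }

        System.out.println("Rearranged chain: " + chain);
        // Prints [sameRegion, consistentHash]
    }
}
```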
4. The method according to any one of claims 1 to 3, wherein the arranging, by the micro-service consumer, of the plurality of load balancing strategies according to the first micro-service call request message comprises:
the micro-service consumer determines user information according to the first micro-service call request message, wherein the user information comprises information of a user registration place and version number information of an application;
and the micro-service consumer arranges the plurality of load balancing strategies according to the user information.
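For claim 4, the sketch below assumes that the user's registration region and application version have already been parsed from the call request message (the key names shown are hypothetical) and uses them to decide which strategies are arranged and in what order.

```java
// Hypothetical sketch of claim 4: user information (registration region and
// application version) derived from the call request message decides which
// strategies are arranged and in what order.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class UserInfoArrangementDemo {

    static List<String> arrange(Map<String, String> userInfo) {
        List<String> chain = new ArrayList<>();
        // Route the user to nodes in their registration region first.
        if (userInfo.containsKey("registrationRegion")) {
            chain.add("sameRegion:" + userInfo.get("registrationRegion"));
        }
        // Pin beta application versions to matching canary nodes.
        if ("2.0-beta".equals(userInfo.get("appVersion"))) {
            chain.add("matchVersion:2.0-beta");
        }
        // Fall back to a load-based strategy for the final choice.
        chain.add("leastActive");
        return chain;
    }

    public static void main(String[] args) {
        // User information assumed to be parsed from the first call request message.
        Map<String, String> userInfo = Map.of(
                "registrationRegion", "cn-south",
                "appVersion", "2.0-beta");
        System.out.println("Arranged chain: " + arrange(userInfo));
        // Prints [sameRegion:cn-south, matchVersion:2.0-beta, leastActive]
    }
}
```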
5. The method according to any one of claims 1 to 4, wherein the acquiring, by the micro-service consumer, of the first micro-service call request message comprises:
the micro-service consumer receives the micro-service call request sent by a micro-service client.
6. A server, comprising:
one or more processors;
one or more memories;
the one or more memories store one or more computer programs comprising instructions which, when executed by the one or more processors, cause the server to perform the following steps:
acquiring a first micro-service call request message;
arranging a plurality of load balancing strategies according to the first micro-service call request message to obtain a plurality of arranged load balancing strategies;
determining a first target micro-service node from a plurality of micro-service nodes according to the plurality of arranged load balancing strategies;
and initiating a micro-service call to the first target micro-service node.
7. The server of claim 6, wherein the instructions, when executed by the one or more processors, cause the server to perform the steps of:
acquiring a second micro-service call request message;
inserting a first load balancing strategy into the plurality of arranged load balancing strategies according to the second micro-service call request message to obtain a plurality of rearranged load balancing strategies;
determining a second target micro-service node from the plurality of micro-service nodes according to the plurality of rearranged load balancing strategies;
and initiating a micro-service call to the second target micro-service node.
8. The server of claim 6, wherein the instructions, when executed by the one or more processors, cause the server to perform the steps of:
acquiring a third micro-service call request message;
replacing a second load balancing strategy in the plurality of arranged load balancing strategies with a third load balancing strategy according to the third micro-service call request message to obtain a plurality of rearranged load balancing strategies;
determining a third target micro-service node from the plurality of micro-service nodes according to the plurality of rearranged load balancing strategies;
and initiating a micro-service call to the third target micro-service node.
9. The server according to any one of claims 6 to 8, wherein the instructions, when executed by the one or more processors, cause the server to perform the steps of:
determining user information according to the first micro-service call request message, wherein the user information comprises information of a user registration place and version number information of an application;
and arranging the plurality of load balancing strategies according to the user information.
10. The server according to any one of claims 6 to 9, wherein the instructions, when executed by the one or more processors, cause the server to perform the steps of:
and receiving the micro-service call request sent by a micro-service client.
CN201911382060.3A 2019-12-27 2019-12-27 Load balancing method and server Active CN110944067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911382060.3A CN110944067B (en) 2019-12-27 2019-12-27 Load balancing method and server

Publications (2)

Publication Number Publication Date
CN110944067A (en) 2020-03-31
CN110944067B CN110944067B (en) 2021-07-16

Family

ID=69913020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911382060.3A Active CN110944067B (en) 2019-12-27 2019-12-27 Load balancing method and server

Country Status (1)

Country Link
CN (1) CN110944067B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105282195A (en) * 2014-06-27 2016-01-27 中兴通讯股份有限公司 Network service providing, strategy rule evaluating and service component selecting method and device
US20160119379A1 (en) * 2014-10-26 2016-04-28 Mcafee, Inc. Security orchestration framework
CN109981716A (en) * 2017-12-28 2019-07-05 北京奇虎科技有限公司 A kind of micro services call method and device
CN108768688A (en) * 2018-04-11 2018-11-06 无锡华云数据技术服务有限公司 Visual mixing cloud resource method of combination and device
CN110011928A (en) * 2019-04-19 2019-07-12 平安科技(深圳)有限公司 Flow equalization carrying method, device, computer equipment and storage medium
CN110149397A (en) * 2019-05-20 2019-08-20 湖北亿咖通科技有限公司 A kind of micro services integration method and device
CN110149396A (en) * 2019-05-20 2019-08-20 华南理工大学 A kind of platform of internet of things construction method based on micro services framework

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111770176A (en) * 2020-06-29 2020-10-13 北京百度网讯科技有限公司 Traffic scheduling method and device
CN112671882A (en) * 2020-12-18 2021-04-16 上海安畅网络科技股份有限公司 Same-city double-activity system and method based on micro-service
CN112671882B (en) * 2020-12-18 2022-03-01 上海安畅网络科技股份有限公司 Same-city double-activity system and method based on micro-service
CN113157352A (en) * 2021-04-15 2021-07-23 山东浪潮通软信息科技有限公司 Method, device, equipment and medium for realizing programmable front-end controller
CN113157352B (en) * 2021-04-15 2024-01-26 浪潮通用软件有限公司 Programmable front-end controller implementation method, programmable front-end controller implementation device, programmable front-end controller implementation equipment and programmable front-end controller implementation medium

Also Published As

Publication number Publication date
CN110944067B (en) 2021-07-16

Similar Documents

Publication Publication Date Title
EP3637733B1 (en) Load balancing engine, client, distributed computing system, and load balancing method
US20180102945A1 (en) Graceful scaling in software driven networks
CN110944067B (en) Load balancing method and server
US9661071B2 (en) Apparatus, systems and methods for deployment and management of distributed computing systems and applications
EP3170071B1 (en) Self-extending cloud
US9703610B2 (en) Extensible centralized dynamic resource distribution in a clustered data grid
JP6514241B2 (en) Service orchestration method and apparatus in software defined network, storage medium
US20190109756A1 (en) Orchestrator for a virtual network platform as a service (vnpaas)
US20080263553A1 (en) Dynamic Service Level Manager for Image Pools
US20040230670A1 (en) Method and system for representing, configuring and deploying distributed applications
CN105786603B (en) Distributed high-concurrency service processing system and method
CN111274033B (en) Resource deployment method, device, server and storage medium
US10761869B2 (en) Cloud platform construction method and cloud platform storing image files in storage backend cluster according to image file type
CN112463535A (en) Multi-cluster exception handling method and device
CN112905338B (en) Automatic computing resource allocation method and device
CN116800825A (en) Calling method, device, equipment and medium based on micro-service splitting
Tseng et al. An MEC-based VNF placement and scheduling scheme for AR application topology
US20200007819A1 (en) Automatic deployment of distributed video conference system
EP3528112A1 (en) Management ecosystem of superdistributed hashes
Makris et al. Streamlining XR application deployment with a localized docker registry at the edge
CN115361280B (en) Method, device, equipment and storage medium for invoking calculation power network
CN115102999B (en) DevOps system, service providing method, storage medium and electronic device
CN114338763B (en) Micro-service calling method, micro-service calling device, server and computer readable storage medium
CN118381822B (en) Service migration method, device, system, electronic equipment and storage medium
CN109379405A (en) Virtual disk construction method, virtual disk system and Dropbox

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant