CN114422371A - Elastic micro-service system based on distributed and container virtualization and implementation method - Google Patents

Elastic micro-service system based on distributed and container virtualization and implementation method

Info

Publication number
CN114422371A
Authority
CN
China
Prior art keywords
service
micro
layer
message
mirror image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210066486.3A
Other languages
Chinese (zh)
Inventor
汪建
林锦文
陈庆生
汪金宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202210066486.3A
Publication of CN114422371A
Current legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 Discovery or management of network topologies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5041 Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/4557 Distribution of virtual machine instances; Migration and load balancing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to an elastic micro-service system based on distribution and container virtualization and an implementation method thereof, belonging to the technical field of communication. The system comprises a service logic layer, a network topology layer and a virtual container layer. The service logic layer consists of a logic processing module, a data storage module and a message processing module; the network topology layer consists of an asynchronous message queue communication mechanism, a load balancing mechanism, a service registration mechanism, a request/response model, a publish/subscribe model, a pipeline model and a composite model; the virtual container layer consists of a container generation module, an image construction module, an image management module and a container virtualization engine. The invention overcomes the performance bottleneck and the scaling drawbacks of the traditional development mode; by combining a distributed network topology structure and a service registration mechanism with multiple topology models, it solves the problem of deploying distributed complex network topologies that may arise at edge computing nodes; and it adopts container virtualization technology to provide heterogeneous node operation support for the micro-services.

Description

Elastic micro-service system based on distributed and container virtualization and implementation method
Technical Field
The invention belongs to the technical field of communication, relates to the technical field of distributed and container virtualization micro-services, and particularly relates to an elastic micro-service system based on distributed and container virtualization and an implementation method thereof.
Background
Modern water conservancy construction takes digitalization, networking and intelligence as its main line, takes virtual simulation, accurate prediction and intelligent decision-making as its paths, and achieves digital empowerment by comprehensively advancing the construction of data, algorithms and computing power. This places higher requirements on the advancement, concurrency, security and real-time performance of intelligent water conservancy systems.
Under the traditional monolithic application architecture, developers can quickly complete application development and deployment while business volume and requirements remain small. Over time, however, business expansion and changing requirements take their toll, and the cost of adjusting a monolithic application grows ever higher. Meanwhile, because of the performance bottlenecks a monolithic application encounters, linear performance growth cannot be achieved merely through cluster deployment and a linear increase in the number of application instance nodes. How to adapt to business expansion and changing requirements over time, and how to overcome the performance bottlenecks brought about by business growth, are therefore problems that developers must confront at the design and architecture stage.
In addition, virtualization technologies in the computing field continue to evolve; in particular, the emergence of container virtualization has brought new opportunities and challenges to the software and hardware industries. As edge computing is gradually and widely applied in traditional industries (such as the water conservancy industry), developers must not only implement business logic but also ensure that the programming language and its dependencies are compatible with the heterogeneous characteristics of the edge nodes on which they run. Container virtualization makes it possible to rapidly produce and deploy edge computing nodes and to provide efficient, stable and flexible heterogeneous node support.
Therefore, an elastic micro-service system based on distribution and container virtualization is needed to solve the problems encountered in the digital construction of traditional industries (such as the water conservancy industry).
Disclosure of Invention
In view of this, the present invention provides an elastic micro-service system based on distribution and container virtualization. It adopts a micro-service development mode with a finer granularity than the traditional monolithic application development mode, overcoming the performance bottleneck and scaling drawbacks of the traditional mode; it combines a distributed network topology structure and a service registration mechanism with multiple topology models, solving the problem of deploying distributed complex network topologies that may arise at edge computing nodes; and it adopts container virtualization technology to provide heterogeneous node operation support for the micro-services. The micro-service framework realized by integrating the micro-service development mode, the distributed network topology and container virtualization technology gives the system stronger portability and higher stability and lets developers focus on implementing core services, thereby expanding the fields in which services can be applied and bringing new application development and deployment modes to more traditional fields (such as the water conservancy construction industry).
In order to achieve the purpose, the invention provides the following technical scheme:
Technical scheme 1: an elastic micro-service system based on distribution and container virtualization, comprising a service logic layer, a network topology layer and a virtual container layer. The service logic layer, as the highest layer of the system, is responsible for micro-service message receiving and sending, logic processing and data storage; it establishes a connection with the network topology layer through the TCP/IP protocol and becomes a network node that exchanges data with other nodes;
the network topology layer, as the middle layer of the system, is integrated into the micro-service system and is responsible for providing support for constructing the distributed complex network topology; it receives data fed in by the virtual container layer through the externally open port, forwards the data to the service logic layer after service registry lookup, load balancing and network routing, and routes the processing result of the service logic layer back to the virtual container layer, completing the full flow of message request and response;
the virtual container layer, as the bottom layer of the system, is responsible for providing operation support for micro-services on heterogeneous nodes; it opens a port to the outside and connects the network topology layer inside the virtual container with the outside of the container, establishing a communication link with the external world.
Further, the service logic layer comprises a logic processing module, a data storage module and a message processing module;
the logic processing module implements the business processing work of the service logic layer and provides custom programming language support, so that a suitable programming language can be chosen to handle the business logic according to actual requirements; the data storage module stores the temporary or permanent data generated by the logic processing module and provides custom database support, so that a suitable database can be chosen to store the business data according to actual requirements; the message processing module receives, sends, encodes and decodes messages, and establishes a connection with the network topology layer through the TCP/IP protocol to exchange data.
Further, the network topology layer includes: an asynchronous message queue communication mechanism, a load balancing mechanism, a service registration mechanism, a request/response model, a publish/subscribe model, a pipeline model and a composite model;
the asynchronous message queue communication mechanism weakens the high-concurrency peaks that may occur in the communication network through queue buffering, reducing their impact on the whole communication network; a message queue supporting asynchronous communication is designed using technologies such as epoll and select, so that the network topology layer communicates with other micro-services asynchronously, decoupling the data interaction between micro-services and keeping the micro-services independent of and non-interfering with each other;
the load balancing mechanism centrally receives the messages that other nodes of the network topology layer send to this node and, combined with the evaluation of the micro-service performance state indexes, forwards them evenly to different instances of the same micro-service, so that the nodes of the network topology layer maintain good performance;
the service registration mechanism monitors the performance state indexes of the micro-services at regular intervals and elastically adjusts the number of micro-service instances; when the system elastically adjusts the number of instances according to the usage scenario, routing information such as the node address corresponding to the micro-service that provides services externally is actively registered in a service registration table, and when the network topology layer needs to send data to a target node, the routing information of the target node's micro-service is looked up through the service registration mechanism;
the request/response model establishes the constraints that nodes in the same network topology must follow when sending and receiving messages: the two connected parties are divided into a client and a server, where the party sending a message is the client and the party receiving, processing and responding to the message is the server; after the client sends a message, it cannot send another message before it receives the server's response, and the server must first receive a message and cannot receive another before it has responded to the current one, which takes the concrete form of a question-and-answer exchange;
the publish/subscribe model establishes the constraints that nodes in the same network topology must follow when sending and receiving messages: among all connected nodes, a node responsible for sending information is called a publishing node and a node responsible for receiving information is called a subscribing node; under the rules of this model, messages can only be published by publishers, all subscribers can receive the messages published by the publishers, but a publisher cannot receive messages from subscribers and a subscriber cannot publish messages to publishers, so a one-way communication link is established from publisher to subscriber;
the pipeline model establishes the constraints that nodes in the same network topology must follow when sending and receiving messages: all connected nodes are divided into production nodes, which are responsible for generating tasks, and consumption nodes, which are responsible for receiving and processing them; a pipeline is established between production nodes and consumption nodes, and the consumption node maintains a message pipeline: a production node connects to the consumption node's message pipeline and pushes tasks to the far end of that pipeline through a PUSH operation, while the consumption node pulls the latest task from the near end of its message pipeline through a PULL operation, starts processing, and pulls the next task from the pipeline after processing is finished;
the composite model combines the basic request/response, publish/subscribe and pipeline models according to actual requirements to construct a distributed complex network topology structure.
Further, the micro-service performance state indexes include: the number of messages processed per unit time, the idle time ratio, the memory occupancy rate, the network channel packet loss rate, the CPU occupancy rate and the network connection delay.
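The patent lists these indicators but does not specify how they are combined into a single health judgement; the sketch below shows one hypothetical way to collapse a subset of them into a 0-1 score, where the weights, field names and threshold are illustrative assumptions rather than part of the invention.
```python
def health_score(m: dict) -> float:
    """Collapse a subset of the monitored indicators into a 0-1 score (1 = healthy)."""
    return (
        0.25 * (1.0 - m["cpu_usage"])          # CPU occupancy rate
        + 0.25 * (1.0 - m["mem_usage"])        # memory occupancy rate
        + 0.20 * m["idle_ratio"]               # idle time ratio
        + 0.15 * (1.0 - m["packet_loss"])      # network channel packet loss rate
        + 0.15 * (1.0 - min(m["latency_ms"] / 1000.0, 1.0))   # connection delay
    )

sample = {"cpu_usage": 0.92, "mem_usage": 0.81, "idle_ratio": 0.05,
          "packet_loss": 0.02, "latency_ms": 120}
overloaded = health_score(sample) < 0.4   # the threshold is illustrative as well
```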
Further, the virtual container layer comprises a container generation module, an image construction module, an image management module and a container virtualization engine;
the container generation module instantiates an image file, converting the static image into a dynamically running virtual container; the network topology layer and the service logic layer run inside the container generated by this module, forming an independent operating environment;
the image construction module generates an image file according to standard image construction instructions; by executing these instructions it handles operations such as referencing base images and copying files, and builds a static environment that is independent of the host node;
the image management module manages the images generated by all image construction modules, including operations such as image update, image retrieval and image deletion;
the container virtualization engine, as the underlying support of the virtual container layer, abstracts the operating system and hardware platform towards the outside and abstracts the programming language and the other tools realizing the core functions towards the inside; it plays an important role in isolating the effect that deploying different micro-services on the same node has on the global environment, and in supporting micro-services on heterogeneous node software and hardware platforms.
Further, when performing an image update, image retrieval or image deletion, the image management module must initiate the request with a token carrying authentication information; on receiving an image operation request, the image management module first checks whether the request is legal, and if the request carries no token, or the token is forged, expired or lacks the authority to perform the operation, the module refuses to execute the request and returns an error message; if the request passes the validity check, the image management module executes the specified operation according to the request's message protocol.
Technical scheme 2: a method for implementing an elastic micro-service system based on distribution and container virtualization, which specifically comprises the following steps:
S1: analyse the core business requirements of the micro-service, determine a suitable programming language to implement the business processing of the service logic layer, determine a suitable data storage tool to store important temporary or permanent data, and construct the message processing logic in that programming language;
S2: the message processing logic constructed in step S1, which covers the concrete implementation of message receiving, sending, encoding and decoding, joins the network topology through the TCP/IP protocol and exchanges data with other network nodes;
S3: test the completeness of the micro-service's core business functions: after the core business logic is initially implemented, add the micro-service to the topology network and carry out a network connectivity test, a core business function test, a database connection test and a message concurrency test with a test node;
S4: after the completeness test of the core business functions in step S3, actively register the micro-service node with the network topology layer through the service registration mechanism so that it becomes a formal micro-service node, ready to receive and process externally incoming messages;
S5: after the micro-service node has been registered as a formal micro-service node, build its image, push the image update to the image management module, and instantiate and deploy the built image through the container generation module;
S6: the service registration mechanism periodically collects, for each registered micro-service, the number of messages processed per unit time, the idle time ratio, the memory occupancy rate, the network channel packet loss rate, the CPU occupancy rate and the network connection delay, and generates the corresponding micro-service performance state index;
S7: according to the micro-service performance state index from step S6, the micro-service system elastically adjusts the number of instances of a single micro-service or dynamically adjusts to a suitable network topology.
Further, in step S7, the elastic adjustment of the micro-service system specifically comprises the following steps (sketched in code after the list):
S71: the service registration mechanism, together with the load balancing mechanism, judges from the micro-service performance state indexes whether the performance of each individual instantiated object of a micro-service is healthy;
S72: when the micro-service performance state index indicates high load, the service registration mechanism actively sends a request to the underlying virtual container layer; the container generation module is asked to look up, in the image management module, the image file built for the corresponding micro-service and to instantiate it as a concrete micro-service container instance;
S73: after the image has been instantiated as a concrete micro-service container instance, the service registration mechanism registers the new instance; the message processing pressure is shared among the different instances of the same micro-service through the load balancing mechanism, and this elastic scaling improves the overall message processing capacity and performance of the micro-service system.
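A minimal sketch of the S71-S73 loop follows, assuming a hypothetical container-engine and registry interface (find_image, run and register are illustrative names; the patent does not define a concrete API):
```python
def adjust_replicas(service: str, scores: list[float],
                    registry, container_engine, low: float = 0.4) -> None:
    """Scale out one micro-service when all of its live replicas look overloaded."""
    if scores and all(s < low for s in scores):        # S71: every replica unhealthy
        image = container_engine.find_image(service)   # S72: locate the built image
        address = container_engine.run(image)          #      instantiate a new container
        registry.register(service, address)            # S73: register it so the load
                                                       #      balancer shares traffic
```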
The invention has the following beneficial effects:
(1) In the development mode of the micro-service system, developers can respond to changes in system requirements more calmly. The addition or removal of an individual requirement can be abstracted as the addition or removal of a micro-service, and thanks to the independence and low coupling of micro-services, developers can untangle changes in system business and requirements step by step. At the same time, completing the development of a large system with complex business volume does not require every participant to master the entire system structure: core functions are realized through modular development, and each team or developer implements a single core function of the decomposed system and interacts with other micro-services through agreed specifications or standards.
(2) A micro-service in the system of the present invention is not equivalent to a component: a micro-service is a concrete, assembled implementation of a single core function that can be used directly in a production environment, whereas a component is an element that still requires further processing before it can form a complete system or a single link. A container-packaged micro-service can be deployed to various heterogeneous nodes quickly, conveniently, stably and across platforms. Once a new business chain appears, if most or all of the core functions it decomposes into already have complete, concrete micro-service implementations, the independent deployability of micro-services combined with container virtualization technology allows them to be rapidly deployed on new nodes and put into the production environment.
(3) The distributed elastic micro-services based on the elastic network structure have good dynamic scalability: one micro-service can flexibly deploy one or more instances according to the actual usage scenario; combined with the service registration mechanism, the performance state indexes of registered services are checked periodically, and the number of edge computing nodes or micro-service instances is dynamically expanded or reduced, improving the overall stability and efficiency of the system.
(4) The elastic micro-service system based on distribution and container virtualization effectively serves the construction of a modern water conservancy industry that takes digitalization, networking and intelligence as its main line. The convenient micro-service development mode, together with the heterogeneous software and hardware support obtained by abstracting the programming language and its dependent tools through container virtualization, provides a basic guarantee for comprehensively advancing the construction of data, algorithms and computing power in the modern water conservancy industry and ultimately achieving digital empowerment of the industry; the elastic distributed network foundation provided by the various network topology models meets the requirements for advancement, concurrency, security and real-time performance of the intelligent water conservancy system, one of the cores of modern water conservancy construction.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a block diagram of the elastic microservice system based on distributed and container virtualization according to the present invention;
FIG. 2 is a block flow diagram of a message processing module of the present invention;
FIG. 3 is a schematic block diagram of the load balancing mechanism of the present invention;
FIG. 4 is a functional block diagram of a service registration mechanism of the present invention;
FIG. 5 is a block diagram of a request/response model flow of the present invention;
FIG. 6 is a block diagram of a publish/subscribe model flow of the present invention;
FIG. 7 is a block flow diagram of a pipeline model of the present invention;
FIG. 8 is a block diagram illustrating an authentication process of a mirror management module according to the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
Referring to fig. 1 to 8, fig. 1 shows the elastic micro-service system based on distribution and container virtualization according to the present invention, which specifically comprises a service logic layer, a network topology layer and a virtual container layer.
The network topology layer, as the middle layer of the framework, is integrated into the micro-service framework and is responsible for providing support for constructing the distributed complex network topology. It receives data fed in by the virtual container layer through the externally open port, forwards the data to the service logic layer after service registry lookup, load balancing and network routing, and routes the processing result of the service logic layer back to the virtual container layer, completing the full flow of message request and response.
The virtual container layer, as the bottom layer of the framework, is responsible for providing operation support for micro-services on heterogeneous nodes; it opens a port to the outside and connects the network topology layer inside the virtual container with the outside of the container, establishing a communication link with the external world.
The service logic layer, as the highest layer of the framework, provides micro-service message receiving and sending, logic processing and data storage; it establishes a connection with the network topology layer through the TCP/IP protocol and becomes a network node that exchanges data with other nodes.
Specifically, the service logic layer includes a message processing module, a logic processing module, and a data storage module.
As shown in fig. 2, the message processing module, as the basic module for data exchange between the service logic layer and the middle network topology layer, is responsible for receiving data from the network through the TCP/IP communication protocol and decoding the received data. As a preferred embodiment, JSON-formatted strings are used for network communication, so the message processing module decodes the binary data received from the network topology layer into JSON strings, and, before data is sent to the network topology layer, encodes the service logic layer's JSON strings into binary data that can be transmitted through the network topology layer.
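A minimal sketch of such a codec in Python, assuming UTF-8 encoded JSON frames (the field names in the example message are purely illustrative):
```python
import json

def encode(message: dict) -> bytes:
    """Encode a service-logic-layer dict as a UTF-8 JSON frame for the topology layer."""
    return json.dumps(message, ensure_ascii=False).encode("utf-8")

def decode(frame: bytes) -> dict:
    """Decode a binary frame received from the topology layer back into a dict."""
    return json.loads(frame.decode("utf-8"))

# Round trip with an illustrative message
frame = encode({"service": "water-level-query", "payload": {"station": "S01"}})
assert decode(frame)["service"] == "water-level-query"
```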
As a preferred embodiment, the logic processing module implements the business processing logic in a suitable programming language such as Python, C/C++, Go or Java, providing the convenient, efficient and stable micro-services of the invention.
As a preferred embodiment, the data storage module implements data storage with a suitable database such as MySQL, Redis, MongoDB or PostgreSQL, and provides the micro-service with writing, querying, reading and modification of important temporary and permanent data.
The network topology layer comprises an asynchronous message queue communication mechanism, a load balancing mechanism, a service registration mechanism, a request/response model, a publish/subscribe model and a pipeline model.
As a preferred embodiment, the asynchronous message queue communication mechanism is supported by technologies such as epoll and select; queue buffering weakens the high-concurrency peaks that may occur in the communication network and reduces their impact on the whole network. Communicating with other micro-services asynchronously decouples the communication between micro-services and keeps them independent of and uninfluenced by each other.
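The patent does not name a concrete implementation; the following sketch uses Python's asyncio (whose default Linux event loop is built on epoll via the selectors module) to buffer incoming frames in a queue before business processing, as an illustration of the buffering idea (the port number, worker count and queue size are assumptions):
```python
import asyncio
import json

async def worker(queue: asyncio.Queue) -> None:
    # Consume buffered frames one at a time; the queue absorbs traffic bursts.
    while True:
        raw = await queue.get()
        message = json.loads(raw)        # decode, then hand over to business logic
        # ... business processing of `message` would happen here ...
        queue.task_done()

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=1000)   # cap smooths concurrency peaks

    async def on_client(reader, writer):
        # Each newline-delimited frame from a peer is queued instead of processed inline.
        while line := await reader.readline():
            await queue.put(line)
        writer.close()

    server = await asyncio.start_server(on_client, "0.0.0.0", 7000)
    workers = [asyncio.create_task(worker(queue)) for _ in range(4)]
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```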
As a preferred embodiment, the load balancing mechanism centrally receives the messages that other nodes of the network topology layer send to this node and, combined with the evaluation of the micro-service performance state indexes, forwards them evenly to different instances of the same micro-service, so that the nodes of the network topology layer maintain good performance. A schematic block diagram of the load balancing mechanism is shown in fig. 3.
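The patent does not fix a balancing policy; a simple "least-loaded replica" choice driven by the monitored indicators could look like the sketch below (the Instance type, load_score field and addresses are illustrative assumptions):
```python
from dataclasses import dataclass

@dataclass
class Instance:
    address: str
    load_score: float   # higher = busier, derived from the monitored state indexes

def pick_instance(instances: list[Instance]) -> Instance:
    # Forward the next message to the least-loaded instance of the target micro-service.
    return min(instances, key=lambda inst: inst.load_score)

replicas = [Instance("tcp://10.0.0.11:7000", 0.42),
            Instance("tcp://10.0.0.12:7000", 0.17)]
target = pick_instance(replicas)   # -> the replica at 10.0.0.12
```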
The service registration mechanism periodically checks the performance state indexes of the micro-services and elastically adjusts the number of micro-service instances. When the system elastically adjusts the number of instances according to the usage scenario, routing information such as the node address corresponding to the micro-service that provides services externally is actively registered in the service registration table; when the network topology layer needs to send data to a target node, it looks up the routing information of the target node's micro-service through the service registration mechanism. The micro-service performance state indexes include the number of messages processed per unit time, the idle time ratio, the memory occupancy rate, the network channel packet loss rate, the CPU occupancy rate and the network connection delay. A schematic block diagram of the service registration mechanism is shown in fig. 4.
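A minimal in-memory sketch of such a registration table (class and method names are illustrative; a real deployment would also track the state indexes and expire stale entries):
```python
import time

class ServiceRegistry:
    """Maps a micro-service name to the routing entries of its registered instances."""

    def __init__(self) -> None:
        self._table: dict[str, list[dict]] = {}

    def register(self, name: str, address: str) -> None:
        self._table.setdefault(name, []).append(
            {"address": address, "registered_at": time.time()}
        )

    def lookup(self, name: str) -> list[dict]:
        # The topology layer queries here before routing a message to a target node.
        return self._table.get(name, [])

registry = ServiceRegistry()
registry.register("water-level-ingest", "tcp://10.0.0.11:7000")
print(registry.lookup("water-level-ingest"))
```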
The request/response model establishes the constraints that nodes in the same network topology must follow when sending and receiving messages: the two connected parties are divided into a client and a server, where the party sending a message is the client and the party receiving, processing and responding to the message is the server. After the client sends a message, it cannot send another message before it receives the server's response, and the server must first receive a message and cannot receive another before it has responded to the current one; the exchange therefore takes the form of question and answer. A flow diagram of the request/response model is shown in fig. 5.
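The patent does not name a messaging library, but the constraint described above matches the REQ/REP socket pair of ZeroMQ; a single-process sketch with pyzmq (the port and message fields are illustrative) follows:
```python
import zmq

ctx = zmq.Context()

rep = ctx.socket(zmq.REP)                 # server: must receive, then answer
rep.bind("tcp://*:5555")

req = ctx.socket(zmq.REQ)                 # client: must send, then wait for the answer
req.connect("tcp://localhost:5555")

req.send_json({"op": "query", "station": "S01"})   # client may not send again yet
request = rep.recv_json()                           # server receives ...
rep.send_json({"station": request["station"], "level": 3.2})   # ... and responds
print(req.recv_json())                              # client may now send the next request
```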
The publish/subscribe model establishes the constraints that nodes in the same network topology must follow when sending and receiving messages: among all connected nodes, a node responsible for sending information is called a publishing node and a node responsible for receiving information is called a subscribing node. Under the rules of this model, messages can only be published by publishers; all subscribers can receive the messages published by the publishers, but a publisher cannot receive messages from subscribers and a subscriber cannot publish messages to publishers, so a one-way communication link is established from publisher to subscriber. A flow diagram of the publish/subscribe model is shown in fig. 6.
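Under the same pyzmq assumption, a one-way PUB/SUB link could be sketched as follows (topic name, port and payload are illustrative; the short sleep works around ZeroMQ's slow-joiner behaviour in a single-process demo):
```python
import time
import zmq

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)                     # publishing node: outbound only
pub.bind("tcp://*:5556")

sub = ctx.socket(zmq.SUB)                     # subscribing node: inbound only
sub.connect("tcp://localhost:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "water")

time.sleep(0.2)                               # let the subscription propagate
pub.send_string('water {"station": "S01", "level": 3.2}')
topic, payload = sub.recv_string().split(" ", 1)
print(topic, payload)
```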
The pipeline model establishes the constraints that nodes in the same network topology must follow when sending and receiving messages: all connected nodes are divided into production nodes, which are responsible for generating tasks, and consumption nodes, which are responsible for receiving and processing them. A pipeline is established between production nodes and consumption nodes, and the consumption node maintains a message pipeline: a production node connects to the consumption node's message pipeline and pushes tasks to the far end of that pipeline through a PUSH operation, while the consumption node pulls the latest task from the near end of its message pipeline through a PULL operation, starts processing, and pulls the next task from the pipeline after processing is finished. A flow diagram of the pipeline model is shown in fig. 7.
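Again assuming pyzmq, the PUSH/PULL pair mirrors this description, with the consumer binding the pipeline and the producer connecting to it (the port and task fields are illustrative):
```python
import zmq

ctx = zmq.Context()

pull = ctx.socket(zmq.PULL)                  # consumption node maintains the pipeline
pull.bind("tcp://*:5557")

push = ctx.socket(zmq.PUSH)                  # production node connects and pushes tasks
push.connect("tcp://localhost:5557")

push.send_json({"task": "aggregate-flow", "window": "1h"})   # PUSH operation
task = pull.recv_json()                                       # PULL the next task
# ... process `task`, then loop back to pull the following one ...
print(task)
```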
The virtual container layer comprises a container generation module, an image construction module, an image management module and a container virtualization engine.
The container generation module instantiates an image file, converting the static image into a dynamically running virtual container; the network topology layer and the service logic layer of the micro-service framework run inside the container generated by this module, forming an independent operating environment. As a preferred embodiment, the container generation module adopts the podman container virtualization tool, providing the elastic micro-service system based on distribution and container virtualization with a mature, simple and stable container deployment scheme.
The image construction module generates an image file according to standard image construction instructions; by executing these instructions it handles operations such as referencing base images and copying files, and builds a static environment that is independent of the host node. As a preferred embodiment, the image construction module uses a dedicated image building tool to parse and execute the image construction commands in the virtual container layer of the invention.
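As an illustration of what driving such a build could look like with the podman toolchain mentioned elsewhere in this embodiment (the context directory, tag and Containerfile name are assumptions, not part of the patent):
```python
import subprocess

def build_image(context_dir: str, tag: str) -> None:
    """Build a container image from a Containerfile in context_dir using podman."""
    subprocess.run(
        ["podman", "build", "--tag", tag, "--file", "Containerfile", context_dir],
        check=True,
    )

# Illustrative usage: build an image for a hypothetical flow-forecast micro-service
build_image("./flow-forecast", "registry.example.com/flow-forecast:1.0")
```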
The image management module manages the images generated by all image construction modules, including operations such as image update, image retrieval and image deletion. When performing an image update, retrieval or deletion, the image management module must initiate the request with a token carrying authentication information. On receiving an image operation request, the image management module first checks whether the request is legal; if the request carries no token, or the token is forged, expired or lacks the authority to perform the operation, the module refuses to execute the request and returns an error message. If the request passes the validity check, the image management module executes the specified operation according to the request's message protocol. A block diagram of the image management module's authentication flow is shown in fig. 8.
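A minimal sketch of that validity check (the request/token dictionary layout, scope names and return values are assumptions made for illustration):
```python
import time
from typing import Optional

def check_token(request: dict, known_tokens: dict) -> Optional[str]:
    """Return a rejection reason for an image-operation request, or None if it is legal."""
    token = request.get("token")
    if token is None:
        return "no token supplied"
    record = known_tokens.get(token)
    if record is None:
        return "token not recognised (possibly forged)"
    if record["expires_at"] < time.time():
        return "token expired"
    if request["operation"] not in record["scopes"]:
        return "token lacks permission for this operation"
    return None   # legal: execute the operation per the request's message protocol

tokens = {"abc123": {"expires_at": time.time() + 3600, "scopes": {"update", "get"}}}
print(check_token({"token": "abc123", "operation": "delete"}, tokens))
```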
The container virtualization engine, as the underlying support of the virtual container layer, abstracts the operating system and hardware platform towards the outside and abstracts the programming language and the other tools realizing the core functions towards the inside; it plays an important role in isolating the effect that deploying different micro-services on the same node has on the global environment, and in supporting micro-services on heterogeneous node software and hardware platforms. In a preferred embodiment, podman is used as the container virtualization engine, providing the elastic micro-service system based on distribution and container virtualization with mature underlying container virtualization support.
The invention provides a method for realizing a distributed and container virtualization-based elastic microservice system, which specifically comprises the following steps:
Step 1: analyse the core business requirements of the micro-service, determine a suitable programming language to implement the business processing of the service logic layer, determine a suitable data storage tool to store important temporary or permanent data, and construct the message processing logic in that programming language;
Step 2: the message processing logic from step 1, which covers the concrete implementation of message receiving, sending, encoding and decoding, joins the network topology structure through the TCP/IP protocol and exchanges data with other network nodes;
Step 3: test the completeness of the micro-service's core business functions: after the core business logic is initially implemented, add the micro-service to the topology network and carry out a network connectivity test, a core business function test, a database connection test and a message concurrency test with a test node;
Step 4: after the completeness test of the core business functions in step 3, actively register the micro-service node with the network topology layer through the service registration mechanism so that it becomes a formal micro-service node, ready to receive and process externally incoming messages;
Step 5: after the micro-service from step 4 has been registered as a formal micro-service node, build its image, push the image update to the image management module, and instantiate and deploy the built image through the container generation module;
Step 6: the service registration mechanism periodically collects, for each registered micro-service, the number of messages processed per unit time, the idle time ratio, the memory occupancy rate, the network channel packet loss rate, the CPU occupancy rate and the network connection delay, and generates the corresponding performance state index;
Step 7: according to the performance state index from step 6, and in combination with the service registration mechanism, the container generation module and the image management module, elastically adjust the number of instances of a single micro-service and dynamically adjust to a suitable network topology.
Specifically, the elastic adjustment of the micro-service architecture in step 7 comprises the following steps:
1) the service registration mechanism, together with the load balancing mechanism, judges from the micro-service performance state indexes whether the performance of an individual instantiated object of a micro-service is healthy;
2) when the micro-service performance state index indicates high load, the service registration mechanism actively sends a request to the underlying virtual container layer; the container generation module is asked to look up, in the image management module, the image file built for the corresponding micro-service and to instantiate the image as a concrete micro-service container instance;
3) after the image has been instantiated as a concrete micro-service container instance, the service registration mechanism registers the new instance; the message processing pressure is shared among the different instances of the same micro-service through the load balancing mechanism, and this elastic scaling improves the overall message processing capacity and performance of the micro-service architecture.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (8)

1. An elastic micro-service system based on distribution and container virtualization, characterized by comprising a service logic layer, a network topology layer and a virtual container layer;
the service logic layer, as the highest layer of the system, is responsible for micro-service message receiving and sending, logic processing and data storage, and establishes a connection with the network topology layer through the TCP/IP protocol, becoming a network node that exchanges data with other nodes;
the network topology layer, as the middle layer of the system, is responsible for providing support for constructing the distributed complex network topology; it receives data fed in by the virtual container layer through the externally open port, forwards the data to the service logic layer after service registry lookup, load balancing and network routing, and routes the processing result of the service logic layer back to the virtual container layer, completing the full flow of message request and response;
the virtual container layer, as the bottom layer of the system, is responsible for providing operation support for micro-services on heterogeneous nodes; it opens a port to the outside and connects the network topology layer inside the virtual container with the outside of the container, establishing a communication link with the external world.
2. The elastic micro-service system of claim 1, wherein the service logic layer comprises: a logic processing module, a data storage module and a message processing module;
the logic processing module implements the business processing work of the service logic layer and provides custom programming language support, so that a suitable programming language can be chosen to handle the business logic according to actual requirements; the data storage module stores the temporary or permanent data generated by the logic processing module and provides custom database support, so that a suitable database can be chosen to store the business data according to actual requirements; the message processing module receives, sends, encodes and decodes messages, and establishes a connection with the network topology layer through the TCP/IP protocol to exchange data.
3. The elastic micro-service system of claim 1, wherein the network topology layer comprises: an asynchronous message queue communication mechanism, a load balancing mechanism, a service registration mechanism, a request/response model, a publish/subscribe model, a pipeline model and a composite model;
the asynchronous message queue communication mechanism weakens the high-concurrency peaks that may occur in the communication network through queue buffering, and a message queue supporting asynchronous communication is designed so that the network topology layer communicates with other micro-services asynchronously;
the load balancing mechanism centrally receives the messages that other nodes of the network topology layer send to this node and, combined with the evaluation of the micro-service performance state indexes, forwards them evenly to different instances of the same micro-service;
the service registration mechanism monitors the performance state indexes of the micro-services at regular intervals and elastically adjusts the number of micro-service instances; when the system elastically adjusts the number of instances according to the usage scenario, the routing information of the node address corresponding to the micro-service that provides services externally is actively registered in the service registration table, and when the network topology layer needs to send data to a target node, the routing information of the target node's micro-service is looked up through the service registration mechanism;
the request/response model establishes the constraints that nodes in the same network topology must follow when sending and receiving messages: the two connected parties are divided into a client and a server, where the party sending a message is the client and the party receiving, processing and responding to the message is the server; after the client sends a message, it cannot send another message before it receives the server's response, and the server must first receive a message and cannot receive another before it has responded to the current one, which takes the concrete form of a question-and-answer exchange;
the publish/subscribe model establishes the constraints that nodes in the same network topology must follow when sending and receiving messages: among all connected nodes, a node responsible for sending information is called a publishing node and a node responsible for receiving information is called a subscribing node; under the rules of this model, messages can only be published by publishers, all subscribers can receive the messages published by the publishers, but a publisher cannot receive messages from subscribers and a subscriber cannot publish messages to publishers, so a one-way communication link is established from publisher to subscriber;
the pipeline model establishes the constraints that nodes in the same network topology must follow when sending and receiving messages: all connected nodes are divided into production nodes, which are responsible for generating tasks, and consumption nodes, which are responsible for receiving and processing them; a pipeline is established between production nodes and consumption nodes, and the consumption node maintains a message pipeline: a production node connects to the consumption node's message pipeline and pushes tasks to the far end of that pipeline through a PUSH operation, while the consumption node pulls the latest task from the near end of its message pipeline through a PULL operation, starts processing, and pulls the next task from the pipeline after processing is finished;
the composite model combines the basic request/response, publish/subscribe and pipeline models according to actual requirements to construct a distributed complex network topology structure.
4. The elastic micro-service system of claim 3, wherein the micro-service performance state indexes comprise: the number of messages processed per unit time, the idle time ratio, the memory occupancy rate, the network channel packet loss rate, the CPU occupancy rate and the network connection delay.
5. The elastic micro-service system of claim 1, wherein the virtual container layer comprises: a container generation module, an image construction module, an image management module and a container virtualization engine;
the container generation module instantiates an image file, converting the static image into a dynamically running virtual container; the network topology layer and the service logic layer run inside the container generated by the container generation module, forming an independent operating environment;
the image construction module generates an image file according to standard image construction instructions; by executing these instructions it handles referencing base images and copying files, and builds a static environment that is independent of the host node;
the image management module manages the images generated by all image construction modules, including image update, image retrieval and image deletion;
the container virtualization engine, as the underlying support of the virtual container layer, abstracts the operating system and hardware platform towards the outside and abstracts the programming language and the other tools realizing the core functions towards the inside.
6. The elastic micro-service system of claim 5, wherein the image management module must initiate a request with a token carrying authentication information when performing an image update, image retrieval or image deletion; on receiving an image operation request, the image management module first checks whether the request is legal, and if the request carries no token, or the token is forged, expired or lacks the authority to perform the operation, the image management module refuses to execute the request and returns an error message; if the request passes the validity check, the image management module executes the specified operation according to the request's message protocol.
7. A method for implementing an elastic micro-service system based on distribution and container virtualization, characterized by specifically comprising the following steps:
S1: analysing the core business requirements of the micro-service, determining a suitable programming language to implement the business processing of the service logic layer, determining a suitable data storage tool to store important temporary or permanent data, and constructing the message processing logic in that programming language;
S2: the message processing logic constructed in step S1, which covers the concrete implementation of message receiving, sending, encoding and decoding, joining the network topology through the TCP/IP protocol and exchanging data with other network nodes;
S3: testing the completeness of the micro-service's core business functions: after the core business logic is initially implemented, adding the micro-service to the topology network and carrying out a network connectivity test, a core business function test, a database connection test and a message concurrency test with a test node;
S4: after the completeness test of the core business functions in step S3, actively registering the micro-service node with the network topology layer through the service registration mechanism so that it becomes a formal micro-service node, ready to receive and process externally incoming messages;
S5: after the micro-service node has been registered as a formal micro-service node, building its image, pushing the image update to the image management module, and instantiating and deploying the built image through the container generation module;
S6: the service registration mechanism periodically collecting, for each registered micro-service, the number of messages processed per unit time, the idle time ratio, the memory occupancy rate, the network channel packet loss rate, the CPU occupancy rate and the network connection delay, and generating the corresponding micro-service performance state index;
S7: according to the micro-service performance state index from step S6, the micro-service system elastically adjusting the number of instances of a single micro-service or dynamically adjusting to a suitable network topology.
8. The method for implementing an elastic micro-service system based on distribution and container virtualization of claim 7, wherein in step S7 the elastic adjustment of the micro-service system specifically comprises the following steps:
S71: the service registration mechanism, together with the load balancing mechanism, judging from the micro-service performance state indexes whether the performance of an individual instantiated object of a micro-service is healthy;
S72: when the micro-service performance state index indicates high load, the service registration mechanism actively sending a request to the underlying virtual container layer, the container generation module being asked to look up, in the image management module, the image file built for the corresponding micro-service and to instantiate it as a concrete micro-service container instance;
S73: after the image has been instantiated as a concrete micro-service container instance, the service registration mechanism registering the new micro-service container instance and sharing the message processing pressure among the different instances of the same micro-service through the load balancing mechanism.
CN202210066486.3A 2022-01-20 2022-01-20 Elastic micro-service system based on distributed and container virtualization and implementation method Pending CN114422371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210066486.3A CN114422371A (en) 2022-01-20 2022-01-20 Elastic micro-service system based on distributed and container virtualization and implementation method

Publications (1)

Publication Number Publication Date
CN114422371A (en) 2022-04-29

Family

ID=81275955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210066486.3A Pending CN114422371A (en) 2022-01-20 2022-01-20 Elastic micro-service system based on distributed and container virtualization and implementation method

Country Status (1)

Country Link
CN (1) CN114422371A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160205518A1 (en) * 2015-01-14 2016-07-14 Kodiak Networks Inc. System and Method for Elastic Scaling using a Container-Based Platform
CN109144485A (en) * 2018-09-10 2019-01-04 南方电网科学研究院有限责任公司 A kind of dispositions method of micro services, device, equipment and readable storage medium storing program for executing
US20200366752A1 (en) * 2019-05-17 2020-11-19 Citrix Systems, Inc. Systems and methods for recording metadata about microservices for requests to the microservices
CN110149396A (en) * 2019-05-20 2019-08-20 华南理工大学 A kind of platform of internet of things construction method based on micro services framework
CN110737653A (en) * 2019-10-17 2020-01-31 天津锐网科技股份有限公司 micro-service-based enterprise data processing system and method
CN110716744A (en) * 2019-10-21 2020-01-21 中国科学院空间应用工程与技术中心 Data stream processing method, system and computer readable storage medium
CN111314455A (en) * 2020-02-12 2020-06-19 深圳供电局有限公司 Container micro-service IT monitoring system and method
CN112000448A (en) * 2020-07-17 2020-11-27 北京计算机技术及应用研究所 Micro-service architecture-based application management method
CN111831269A (en) * 2020-07-21 2020-10-27 腾讯科技(深圳)有限公司 Application development system, operation method, equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TAKASHI SHIRAISHI et al.: "Real-time Monitoring System for Container Networks in the Era of Microservices", 2020 21st Asia-Pacific Network Operations and Management Symposium (APNOMS), 23 October 2020 (2020-10-23) *
徐天桥: "Micro-service transformation and design of traditional applications (微服务对传统应用的改造及设计)", 电子制作, no. 08, 15 April 2020 (2020-04-15) *
谢伟: "Design and implementation of an open platform message system based on micro-service architecture (基于微服务架构的开放平台消息系统的设计与实现)", 万方, 3 June 2019 (2019-06-03) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114726901A (en) * 2022-05-23 2022-07-08 安徽华云安科技有限公司 Container access method, device, equipment and storage medium
CN114726901B (en) * 2022-05-23 2022-09-02 安徽华云安科技有限公司 Container access method, device, equipment and storage medium
CN115720236A (en) * 2022-11-18 2023-02-28 北京航天发射技术研究所 Lightweight communication middleware based on heterogeneous network
CN115720236B (en) * 2022-11-18 2024-04-19 北京航天发射技术研究所 Lightweight communication middleware based on heterogeneous network
CN115996229A (en) * 2022-11-30 2023-04-21 中国电子科技集团公司第十研究所 Micro-service data synchronization method for cloud measurement and control baseband pool
CN115996229B (en) * 2022-11-30 2024-04-05 中国电子科技集团公司第十研究所 Micro-service data synchronization method for cloud measurement and control baseband pool
CN117075919A (en) * 2023-10-17 2023-11-17 江苏鑫业智慧技术有限公司 Heterogeneous information system integration method and system based on ROSO model
CN117075919B (en) * 2023-10-17 2023-12-26 江苏鑫业智慧技术有限公司 Heterogeneous information system integration method and system based on ROSO model
CN117560415A (en) * 2024-01-11 2024-02-13 德阳城市智慧之心信息技术有限公司 Asynchronous and synchronous communication method and system for smart community micro-service architecture (MQTT)
CN117560415B (en) * 2024-01-11 2024-03-22 德阳城市智慧之心信息技术有限公司 Asynchronous and synchronous communication method and system for smart community micro-service architecture (MQTT)
CN118017564A (en) * 2024-04-09 2024-05-10 深圳触觉智能科技有限公司 Energy storage method based on open source hong Meng system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination