CN114301783A - Optimization method and device for micro service, storage medium and electronic device - Google Patents

Optimization method and device for micro service, storage medium and electronic device

Info

Publication number
CN114301783A
Authority
CN
China
Prior art keywords
service
micro
processing
request
mainline
Prior art date
Legal status
Granted
Application number
CN202111672094.3A
Other languages
Chinese (zh)
Other versions
CN114301783B (en)
Inventor
姜勇
杨雷
石京豪
Current Assignee
Zhongqi Scc Beijing Finance Information Service Co ltd
Original Assignee
Zhongqi Scc Beijing Finance Information Service Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongqi Scc Beijing Finance Information Service Co ltd filed Critical Zhongqi Scc Beijing Finance Information Service Co ltd
Priority to CN202111672094.3A priority Critical patent/CN114301783B/en
Publication of CN114301783A publication Critical patent/CN114301783A/en
Application granted granted Critical
Publication of CN114301783B publication Critical patent/CN114301783B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Computer And Data Communications (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application discloses a micro-service optimization method and apparatus, a storage medium and an electronic device. A mainline service is determined, wherein the mainline service is decoupled from its business domain; a service request is received and forwarded to the corresponding micro service through a preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes; and after the service request has been pre-processed and recorded by at least one micro service, the micro service sets its own state to processing. The application solves the technical problem that high availability cannot be achieved on a micro-service architecture, and supports horizontal scaling of application micro services and decoupled service processing.

Description

Optimization method and device for micro service, storage medium and electronic device
Technical Field
The present application relates to the field of microservices, and in particular, to an optimization method and apparatus, a storage medium, and an electronic apparatus for microservices.
Background
A micro-service architecture splits a single application into a group of small services that cooperate with one another to deliver final value to users. Each service runs in its own process, and the services communicate with one another through a lightweight communication mechanism.
In related micro-service architectures, services such as the opening service, the payment service and the accounting service are usually not decoupled from the mainline service and its business domain, which causes inefficiency under a large number of concurrent requests. In addition, such architectures are mostly deployed on a single node and cannot scale horizontally.
No effective solution has yet been proposed for the problem in the related art that high availability cannot be achieved on a micro-service architecture.
Disclosure of Invention
The present application mainly aims to provide a micro-service optimization method and apparatus, a storage medium and an electronic apparatus, so as to solve the problem that high availability cannot be achieved on a micro-service architecture.
To achieve the above object, according to one aspect of the present application, there is provided an optimization method for a microservice.
The micro-service optimization method according to the application comprises the following steps: determining a mainline service, wherein the mainline service is decoupled from its business domain; receiving a service request and forwarding it to the corresponding micro service through a preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes; and after the service request has been pre-processed and recorded by at least one micro service, setting the state of that micro service to processing.
Further, after the service request has been pre-processed and recorded by at least one micro service and the state of the micro service has been set to processing, the method further comprises: starting a sub-thread to process the subsequent audit logic of the mainline service; and after the current micro service completes its own logic, sending a message-queue message to a downstream non-mainline micro service as a transactional message, wherein the message queue of the accounting service consumes the message after receiving it.
Further, when the service request is received and forwarded to the corresponding micro service through the preset routing gateway, deploying the mainline service as a micro service comprising at least two nodes further comprises: processing the mainline service on the Spring Cloud framework and performing load balancing through Ribbon; and performing service decoupling or peak shaving with RocketMQ.
Further, determining the mainline service, wherein the mainline service is decoupled from its business domain, comprises: performing asynchronous processing on the mainline service on the basis of asynchronously decoupled services.
Further, pre-processing and recording the service request by at least one micro service comprises: pre-processing the service request by at least one micro service and performing queries based on a cache.
Further, decoupling the mainline service from its business domain comprises: decoupling one or more of the opening service, the payment service and the accounting service of the mainline service from the business domain.
Further, receiving the service request and forwarding it to the corresponding micro service through the preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes, comprises: deploying the mainline service as a micro service comprising at least two nodes, receiving the service request, and forwarding it to the corresponding micro service through the preset routing gateway. After the service request has been pre-processed and recorded by at least one micro service and the state of the micro service has been set to processing, the method further comprises: sending the message-queue message to the accounting service through the RocketMQ cluster.
To achieve the above object, according to another aspect of the present application, there is provided an optimization system for microservices.
The optimization system for micro services according to the application comprises: a decoupling module, configured to determine a mainline service, wherein the mainline service is decoupled from its business domain; a routing module, configured to receive a service request and forward it to the corresponding micro service through a preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes; and a service processing module, configured to set the state of the micro service to processing after the service request has been pre-processed and recorded by at least one micro service.
In the micro-service optimization method and apparatus, the storage medium and the electronic device of the embodiments of the present application, a mainline service is determined; a service request is received and forwarded to the corresponding micro service through a preset routing gateway; the mainline service is deployed as a micro service comprising at least two nodes; and after the service request has been pre-processed and recorded by at least one micro service, the micro service sets its own state to processing. In this way, the micro-service architecture can scale elastically with traffic volume, the user experience is improved, and the technical problem that high availability cannot be achieved on a micro-service architecture is solved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
FIG. 1 is a hardware architecture diagram of an optimization method for microservices according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of an optimization method for microservices according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an optimization apparatus for microservices according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating an optimization method for microservices according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a schematic diagram of the hardware architecture of a micro-service optimization method according to an embodiment of the present application. The architecture includes: a user terminal 100, an Nginx reverse proxy service 200, an API gateway 300, an opening center 400, an MQ cluster 500, an accounting cluster 600, and a Redis cluster 700. Deploying the opening center 400 on two nodes yields higher processing efficiency and concurrency than the original single-node deployment; load testing of the opening audit shows that at least 200 concurrent requests can be handled, a marked improvement over the single-node case.
The specific flow comprises the following steps: a user initiates an approval request; the request reaches the API gateway layer through the Nginx public-network proxy; the gateway forwards the request to the corresponding service according to the API routing rules and address; after receiving the approval request, the service performs idempotent processing on the request, records it, and sets its own service state to processing. A sub-thread is then started to process the subsequent audit logic; after that logic completes, an MQ message is sent to the downstream accounting service through the RocketMQ cluster as a transactional message, which guarantees that the processing of the local transaction and the delivery of the message form one transaction. The accounting service consumes the message once its MQ receives it.
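To make the transactional-message step concrete, the following sketch shows how the opening service could send such a message with the Apache RocketMQ Java client. It is a minimal illustration, not the patented implementation: the producer group, name-server address and message body are assumptions, while the topic name is the accounting topic defined later in this description.

import java.nio.charset.StandardCharsets;
import org.apache.rocketmq.client.producer.LocalTransactionState;
import org.apache.rocketmq.client.producer.TransactionListener;
import org.apache.rocketmq.client.producer.TransactionMQProducer;
import org.apache.rocketmq.client.producer.TransactionSendResult;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.common.message.MessageExt;

public class ApprovalTransactionProducer {
    public static void main(String[] args) throws Exception {
        // Producer group name and name-server address are illustrative assumptions.
        TransactionMQProducer producer = new TransactionMQProducer("opening-producer-group");
        producer.setNamesrvAddr("127.0.0.1:9876");
        producer.setTransactionListener(new TransactionListener() {
            @Override
            public LocalTransactionState executeLocalTransaction(Message msg, Object arg) {
                // Run the local database transaction (record the approval result) here;
                // commit the half message only if the local transaction succeeds.
                boolean localTransactionOk = true; // placeholder for real persistence logic
                return localTransactionOk ? LocalTransactionState.COMMIT_MESSAGE
                                          : LocalTransactionState.ROLLBACK_MESSAGE;
            }
            @Override
            public LocalTransactionState checkLocalTransaction(MessageExt msg) {
                // The broker calls back here when a half message is in doubt.
                return LocalTransactionState.COMMIT_MESSAGE;
            }
        });
        producer.start();

        // Topic name taken from the description; the payload is a placeholder.
        Message msg = new Message("crcl-core-tally-topic",
                "approval-request-123".getBytes(StandardCharsets.UTF_8));
        TransactionSendResult result = producer.sendMessageInTransaction(msg, null);
        System.out.println("local transaction state: " + result.getLocalTransactionState());
        producer.shutdown();
    }
}

Because the message is committed only when the local transaction commits, the local processing and the message delivery behave as one transaction, which is the guarantee described above.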
As shown in fig. 2, the method includes steps S201 to S203 as follows:
Step S201, determining a mainline service, wherein the mainline service is decoupled from its business domain;
Step S202, receiving a service request and forwarding it to the corresponding micro service through a preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes;
Step S203, after the service request has been pre-processed and recorded by at least one micro service, setting the state of that micro service to processing.
From the above description, it can be seen that the following technical effects are achieved by the present application:
By determining the mainline service, deploying it as a micro service comprising at least two nodes, receiving service requests and forwarding them to the corresponding micro service through a preset routing gateway, and setting the state of a micro service to processing after the request has been pre-processed and recorded by at least one micro service, the application supports horizontal scaling of application micro services and decoupled service processing, thereby solving the technical problem that high availability cannot be achieved on a micro-service architecture.
In step S201, the mainline service is determined. Generally speaking, for a given business domain, mainline and non-mainline services can be identified: for example, the opening service and the payment service are mainline services, while the accounting service may be a non-mainline service.
As a preferred implementation, the mainline service is decoupled from its business domain. Because of this decoupling, each service can concentrate on the logic of its own domain: services are split by business domain, the opening service concentrates on right confirmation, and the payment service concentrates on circulation logic. The original architecture, being a monolith, had no layering, so its logic processing was comparatively complex.
As a preferred embodiment, asynchronous service processing is supported after decoupling.
In step S202, the service request is received by the micro-service deployment comprising at least two nodes in the opening-center architecture and is forwarded to the corresponding micro service through the preset routing gateway. That is, after the at-least-two-node micro-service deployment receives a user's service request, the gateway forwards it to the corresponding (downstream) micro service.
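The patent does not name a specific gateway product, only a preset routing gateway in front of the two-node micro-service deployment. As one hedged illustration, Spring Cloud Gateway could express such a routing rule as below; the service names and paths are assumptions, not taken from the patent.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class ApiGatewayApplication {

    // Forward /opening/** to the opening-center service and /accounting/** to the
    // accounting service; the lb:// scheme resolves instances through the service
    // registry, so requests are spread across the (at least) two deployed nodes.
    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("opening-route", r -> r.path("/opening/**").uri("lb://opening-center"))
                .route("accounting-route", r -> r.path("/accounting/**").uri("lb://accounting-service"))
                .build();
    }

    public static void main(String[] args) {
        SpringApplication.run(ApiGatewayApplication.class, args);
    }
}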
In step S203, after the service request has been pre-processed and recorded by at least one micro service, the micro service sets its own state to processing.
In a preferred embodiment, the preset processing includes, but is not limited to, idempotent verification of the request.
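Since the architecture in Fig. 1 includes a Redis cluster, one plausible way to implement the idempotent check is an atomic SETNX-style write. The sketch below is an assumption for illustration only; the key format, TTL and class name are not from the patent.

import java.time.Duration;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class IdempotencyGuard {

    private final StringRedisTemplate redis;

    public IdempotencyGuard(StringRedisTemplate redis) {
        this.redis = redis;
    }

    // Returns true only the first time a request id is seen. setIfAbsent maps to
    // Redis SET ... NX with a TTL, so the check is atomic and the two nodes of the
    // opening center cannot both accept the same request.
    public boolean tryAccept(String requestId) {
        Boolean first = redis.opsForValue()
                .setIfAbsent("approval:req:" + requestId, "PROCESSING", Duration.ofHours(24));
        return Boolean.TRUE.equals(first);
    }
}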
As a preferred embodiment, setting the micro service's own state to processing is synchronized to the message queue.
As a preferred option in this embodiment, after the service request has been pre-processed and recorded by at least one micro service and the state of the micro service has been set to processing, the method further comprises: starting a sub-thread to process the subsequent audit logic of the mainline service; and after the current micro service completes its own logic, sending a message-queue message to a downstream non-mainline micro service as a transactional message, wherein the message queue of the accounting service consumes the message after receiving it.
In a specific implementation, the service request belongs to the opening service; according to the state of the micro service, a sub-thread is started to process the subsequent audit logic of the mainline service. Then, after the current micro service completes its own logic, the message-queue message is sent to the downstream non-mainline micro service as a transactional message. Preferably, the message queue of the accounting service receives the message and consumes it.
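The sub-thread handling described here could look like the following sketch, where the request thread records the state and returns while a worker thread runs the audit logic. The class name, pool size and method names are assumptions for illustration only.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ApprovalHandler {

    // Dedicated pool for the audit logic so the request thread can return as soon
    // as the request has been recorded with state "processing".
    private final ExecutorService auditPool = Executors.newFixedThreadPool(8);

    public String handleApproval(String requestId) {
        // Idempotent check and state recording happen before this point.
        auditPool.submit(() -> {
            runAuditLogic(requestId);
            // After the local logic completes, the transactional MQ message to the
            // downstream accounting service is sent (see the producer sketch above).
        });
        return "PROCESSING"; // respond to the caller without waiting for the audit
    }

    private void runAuditLogic(String requestId) {
        // Placeholder for the real audit steps of the mainline service.
    }
}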
As a further preference in this embodiment, receiving the service request and forwarding it to the corresponding micro service through the preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes, comprises: receiving the service request by the at-least-two-node micro-service deployment and forwarding it to the corresponding micro service through the preset routing gateway. After the service request has been pre-processed and recorded by at least one micro service and the state of the micro service has been set to processing, the method further comprises: sending the message-queue message to the accounting service through the RocketMQ cluster.
As between the opening service and the accounting service, the opening service acts as the mainline service and the accounting service as a non-mainline service.
As a preferred option in this embodiment, when the service request is received and forwarded to the corresponding micro service through the preset routing gateway, deploying the mainline service as a micro service comprising at least two nodes further comprises: processing the mainline service on the Spring Cloud framework and performing load balancing through Ribbon; and performing service decoupling or peak shaving with RocketMQ.
In a specific implementation, the entire opening-center architecture is built on the Spring Cloud framework, with soft load balancing implemented through Ribbon; the opening service uses RocketMQ for decoupling and peak shaving, so that high traffic at the entry level can be absorbed and the instantaneous pressure of service peaks is avoided.
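For the Ribbon-based soft load balancing mentioned here, a typical Spring Cloud setup is a @LoadBalanced RestTemplate. The bean below is a standard pattern rather than the patent's own code, and the service name in the commented call is illustrative.

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class LoadBalancedClientConfig {

    // With @LoadBalanced, the logical service name in the URL is resolved to one of
    // the registered instances, so calls to the two-node opening center are spread
    // across both nodes by the client-side load balancer.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    // Example call site (service name and path are illustrative):
    // restTemplate.getForObject("http://opening-center/approval/{id}", String.class, id);
}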
Specifically, RocketMQ is chosen because it supports the following characteristics: (1) strict message ordering can be guaranteed; (2) rich message pull modes are provided; (3) subscribers can be scaled horizontally efficiently; (4) a real-time message subscription mechanism; (5) the ability to accumulate billions of messages.
Furthermore, these characteristics of RocketMQ address the following problems:
for sequential consumption and transaction message problems: the message queue (RocktMQ) can strictly ensure the message ordering by adopting a first-in first-out (FIFO) mechanism, and can ensure the ordering during the consumption of cloud credit accounting. The service system sends the message by using the transaction message, and the delivery of the transaction is ensured to be successful.
For the multiple-service-type problem: to reduce program complexity and development effort, several service types (such as pool entry and accounting) share one accounting program, isolated from one another by message-queue topic and client group. The pool-entry topic is defined as crcl-core-hole-topic with group crcl-core-hole-group, and the accounting topic is defined as crcl-core-tally-topic with group crcl-core-tally-group. A client consumes a given type of message simply by starting its consumer with the designated topic and group, as in the sketch below.
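A consumer isolated by the topic and group named above could be started as in the following sketch with the RocketMQ Java client; the name-server address is an assumption, and the handler simply prints the payload.

import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;

public class TallyConsumer {
    public static void main(String[] args) throws Exception {
        // Group and topic names are the ones defined in this description.
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("crcl-core-tally-group");
        consumer.setNamesrvAddr("127.0.0.1:9876"); // assumption: local name server
        consumer.subscribe("crcl-core-tally-topic", "*");
        consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
            // Only accounting messages arrive here; pool-entry messages stay on
            // crcl-core-hole-topic / crcl-core-hole-group and go to their own consumer.
            msgs.forEach(m -> System.out.println(new String(m.getBody())));
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
        });
        consumer.start();
    }
}

Where the strict FIFO ordering of the previous paragraph is required, the same consumer would register a MessageListenerOrderly instead of the concurrent listener.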
For reliability: data in an MQ can be lost under sudden conditions such as program interruption or server downtime, causing irreversible damage. All messages sent to a broker in the message queue (RocketMQ) use a synchronous disk-flush mechanism: success is returned only after the message has been written to a physical file, which makes delivery highly reliable.
For alert notification and failure retry: handling is added for the case where accounting for the opening and payment queues does not return a processing result.
In the embodiment of the application, a preset send-record table is used; once a message has gone unanswered for more than 30 minutes, or its processing has failed, the table is updated, manual intervention is notified, and the message can be retried.
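The send-record table and the 30-minute alert could be monitored by a scheduled scan such as the sketch below. The table name, column names and SQL dialect are assumptions, since the patent only states that the table is updated and manual intervention is notified.

import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class PendingMessageMonitor {

    private final JdbcTemplate jdbc;

    public PendingMessageMonitor(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Runs every minute (requires @EnableScheduling on a configuration class).
    // Entries that have gone unanswered for more than 30 minutes, or that failed,
    // are flagged so that operators are notified and the message can be retried.
    @Scheduled(fixedDelay = 60_000)
    public void checkPendingMessages() {
        List<Map<String, Object>> stale = jdbc.queryForList(
                "SELECT id FROM mq_send_record "
                + "WHERE status IN ('SENT', 'FAILED') "
                + "AND updated_at < NOW() - INTERVAL 30 MINUTE");
        for (Map<String, Object> row : stale) {
            jdbc.update("UPDATE mq_send_record SET status = 'NEED_ATTENTION' WHERE id = ?",
                    row.get("id"));
            // An alert (e-mail, IM, on-call page) would be raised here so that the
            // message can be retried after manual intervention.
        }
    }
}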
As a further preference in this embodiment, determining the mainline service, wherein the mainline service is decoupled from its business domain, comprises: performing asynchronous processing on the mainline service on the basis of asynchronously decoupled services.
In a specific implementation, asynchronous processing allows more requests to be accepted under heavy traffic and removes the instantaneous pressure that business peaks place on the application and the database.
As a preferred embodiment, pre-processing and recording the service request by at least one micro service comprises: pre-processing the service request by at least one micro service and performing queries based on a cache.
In particular embodiments, cache-based handling of large query volumes is supported, using, but not limited to, Redis or a data warehouse.
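A cache-aside lookup against the Redis cluster from Fig. 1 is one way to serve such high-volume queries. The sketch below is illustrative only, with the key format, TTL and fallback method invented for the example.

import java.time.Duration;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class ApprovalStatusQuery {

    private final StringRedisTemplate redis;

    public ApprovalStatusQuery(StringRedisTemplate redis) {
        this.redis = redis;
    }

    // Cache-aside: read Redis first and fall back to the database only on a miss,
    // so repeated status queries do not reach the database under heavy traffic.
    public String queryStatus(String requestId) {
        String key = "approval:status:" + requestId;
        String cached = redis.opsForValue().get(key);
        if (cached != null) {
            return cached;
        }
        String fromDb = loadStatusFromDatabase(requestId);
        if (fromDb != null) {
            redis.opsForValue().set(key, fromDb, Duration.ofMinutes(10));
        }
        return fromDb;
    }

    private String loadStatusFromDatabase(String requestId) {
        return "PROCESSING"; // placeholder for the real repository call
    }
}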
As a preferred option in this embodiment, decoupling the mainline service from its business domain comprises: decoupling one or more of the opening service, the payment service and the accounting service of the mainline service from the business domain.
In a specific implementation, services are split by business domain: the opening service concentrates on right confirmation and the payment service concentrates on circulation logic. The architecture in the related art is a monolith without layering, so its logic processing is comparatively complex.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
There is also provided, in accordance with an embodiment of the present application, a high availability system for implementing the above method, as shown in fig. 3, the system including:
a decoupling module 301, configured to determine a mainline service, wherein the mainline service is decoupled from its business domain;
a routing module 302, configured to receive a service request and forward it to the corresponding micro service through a preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes;
and a service processing module 303, configured to set the state of the micro service to processing after the service request has been pre-processed and recorded by at least one micro service.
In the decoupling module 301 of the embodiment of the present application, the mainline service is determined. Generally speaking, for a given business domain, mainline and non-mainline services can be identified: for example, the opening service and the payment service are mainline services, while the accounting service may be a non-mainline service.
As a preferred implementation, the mainline service is decoupled from its business domain. Because of this decoupling, each service can concentrate on the logic of its own domain: services are split by business domain, the opening service concentrates on right confirmation, and the payment service concentrates on circulation logic. The original architecture, being a monolith, had no layering, so its logic processing was comparatively complex.
As a preferred embodiment, asynchronous service processing is supported after decoupling.
In the routing module 302 of the embodiment of the present application, the service request is received by the micro-service deployment comprising at least two nodes in the opening-center architecture and is forwarded to the corresponding micro service through the preset routing gateway. That is, after the at-least-two-node micro-service deployment receives a user's service request, the gateway forwards it to the corresponding (downstream) micro service.
In the service processing module 303 of the embodiment of the present application, after the service request has been pre-processed and recorded by at least one micro service, the micro service sets its own state to processing.
In a preferred embodiment, the preset processing includes, but is not limited to, idempotent verification of the request.
As a preferred embodiment, setting the micro service's own state to processing is synchronized to the message queue.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present application is not limited to any specific combination of hardware and software.
In order to better understand the flow of the above micro-service optimization method, the technical solutions are explained below with reference to preferred embodiments, but the technical solutions of the embodiments of the present invention are not limited thereto.
The micro-service optimization method of the embodiment of the application determines a mainline service; receives a service request and forwards it to the corresponding micro service through a preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes; and, after the service request has been pre-processed and recorded by at least one micro service, sets the state of the micro service to processing. The method supports horizontal scaling of application micro services and both synchronous and asynchronous decoupling of service processing, so that each service can concentrate on its own domain logic. Asynchronous processing allows more requests to be accepted under heavy traffic and shaves the instantaneous pressure that business peaks would otherwise place on the application and the database.
As shown in fig. 4, which is a schematic flow chart of an optimization method for microservices in the embodiment of the present application, a specific implementation process includes the following steps:
and step S401, decoupling and peak eliminating processing are carried out by using a RocktMQ, so that the instant pressure brought by a service peak is avoided.
Because the opening center is deployed on two nodes, processing efficiency and concurrency are higher than under the original single-node deployment; load testing of the opening audit shows that at least 200 concurrent requests can be handled, a marked improvement over single-node deployment. The architecture supports elastic scaling and can be extended horizontally as the traffic volume requires, which the original architecture did not support. It is split by business domain, with the opening service concentrating on right confirmation and the payment service on circulation logic, whereas the original monolithic architecture had no layering and its logic processing was comparatively complex. Decoupling the original services with RocketMQ greatly improves the user interaction experience at concurrency levels higher than the original architecture could handle.
For ordered consumption and transactional messages: the message queue (RocketMQ) strictly guarantees message ordering through a first-in, first-out (FIFO) mechanism, which keeps cloud-credit accounting consumption in order. The service system sends messages as transactional messages, ensuring that delivery succeeds together with the transaction.
For the multiple-service-type problem: to reduce program complexity and development effort, several service types (such as pool entry and accounting) share one accounting program, isolated from one another by message-queue topic and client group. The pool-entry topic is defined as crcl-core-hole-topic with group crcl-core-hole-group, and the accounting topic is defined as crcl-core-tally-topic with group crcl-core-tally-group. A client consumes a given type of message simply by starting its consumer with the designated topic and group.
For reliability: data in an MQ can be lost under sudden conditions such as program interruption or server downtime, causing irreversible damage. All messages sent to a broker in the message queue (RocketMQ) use a synchronous disk-flush mechanism: success is returned only after the message has been written to a physical file, which makes delivery highly reliable.
For alert notification and failure retry: handling is added for the case where accounting for the opening and payment queues does not return a processing result.
Step S402: a user initiates an approval request, which is routed through the gateway to the corresponding service for idempotent processing and recording.
The user initiates an approval request; the request reaches the API gateway layer through the Nginx public-network proxy; the gateway forwards the request to the corresponding service according to the API routing rules and address; after receiving the approval request, the service performs idempotent processing on the request, records it, and sets its own service state to processing.
Step S403: the RocketMQ cluster sends the MQ message to the downstream accounting service.
A sub-thread is started to process the subsequent audit logic; after that logic completes, the MQ message is sent to the downstream accounting service through the RocketMQ cluster as a transactional message, which guarantees that the processing of the local transaction and the delivery of the message form one transaction. The accounting service consumes the message once its MQ receives it.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An optimization method for microservices, comprising:
determining a mainline service, wherein the mainline service is decoupled from a business domain;
receiving a service request, and forwarding the service request to a corresponding micro service through a preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes;
and after the service request is pre-processed and recorded by at least one micro service, setting the state of the micro service itself to processing.
2. The method of claim 1, wherein after the service request is pre-processed and recorded by at least one of the micro services and the state of the micro service itself is set to processing, the method further comprises:
starting a sub-thread to process the subsequent audit logic of the mainline service;
and after the current micro service completes its own logic, sending a message-queue message to a downstream non-mainline micro service as a transactional message, wherein the message queue of the accounting service consumes the message after receiving it.
3. The method of claim 2, wherein receiving the service request and forwarding the service request to the corresponding micro service through the preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes, further comprises:
processing the mainline service on the Spring Cloud framework, and performing load balancing through Ribbon;
and performing service decoupling or peak shaving with RocketMQ.
4. The method of claim 1, wherein determining the mainline service, wherein the mainline service is decoupled from the business domain, comprises:
performing asynchronous processing on the mainline service on the basis of asynchronously decoupled services.
5. The method of claim 1, wherein pre-processing and recording the service request by the at least one micro service comprises:
pre-processing the service request by at least one micro service and performing queries based on a cache.
6. The method of claim 1, wherein decoupling the mainline service from the business domain comprises: decoupling one or more of an opening service, a payment service and an accounting service of the mainline service from the business domain.
7. The method of claim 6, wherein:
receiving the service request and forwarding the service request to the corresponding micro service through the preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes, comprises:
receiving the service request by the micro-service deployment comprising at least two nodes, and forwarding the service request to the corresponding micro service through the preset routing gateway;
and after the service request is pre-processed and recorded by at least one micro service and the state of the micro service is set to processing, the method further comprises:
sending the message of the message queue to the accounting service through the RocketMQ cluster.
8. A highly available system, comprising:
a decoupling module, configured to determine a mainline service, wherein the mainline service is decoupled from a business domain;
a routing module, configured to receive a service request and forward the service request to a corresponding micro service through a preset routing gateway, wherein the mainline service is deployed as a micro service comprising at least two nodes;
and a service processing module, configured to set the state of the micro service itself to processing after the service request is pre-processed and recorded by at least one micro service.
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 7 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 7.
CN202111672094.3A 2021-12-31 2021-12-31 Optimization method and device for micro-service, storage medium and electronic device Active CN114301783B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111672094.3A CN114301783B (en) 2021-12-31 2021-12-31 Optimization method and device for micro-service, storage medium and electronic device


Publications (2)

Publication Number Publication Date
CN114301783A (en) 2022-04-08
CN114301783B CN114301783B (en) 2024-05-28

Family

ID=80976338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111672094.3A Active CN114301783B (en) 2021-12-31 2021-12-31 Optimization method and device for micro-service, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN114301783B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020098075A1 (en) * 2018-11-12 2020-05-22 平安科技(深圳)有限公司 Financial data processing method, apparatus and device, and storage medium
CN111078426A (en) * 2019-12-03 2020-04-28 紫光云(南京)数字技术有限公司 High concurrency solution under back-end micro-service architecture
WO2021179841A1 (en) * 2020-03-12 2021-09-16 华为技术有限公司 Microservice invoking method and apparatus, device and medium
CN112000448A (en) * 2020-07-17 2020-11-27 北京计算机技术及应用研究所 Micro-service architecture-based application management method
CN112968960A (en) * 2021-02-22 2021-06-15 同济大学 Micro-service architecture based on open source component
CN113742043A (en) * 2021-08-31 2021-12-03 中企云链(北京)金融信息服务有限公司 Asynchronous splitting method for server backend task

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hennadii Falatiuk; Mariya Shirokopetleva; Zoia Dudar: "Investigation of Architecture and Technology Stack for e-Archive System", 2019 IEEE International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (PIC S&T), 9 April 2020 *
王世泽: "基于微服务架构的企业服务总线在银行系统集成中的应用" (Application of an Enterprise Service Bus Based on Micro-service Architecture in Bank System Integration), 中国新技术新产品 (China New Technologies and Products), no. 13, 10 July 2020 *
王方旭: "基于Spring Cloud实现业务系统微服务化的设计与实现" (Design and Implementation of Micro-service Transformation of Business Systems Based on Spring Cloud), 电子技术与软件工程 (Electronic Technology & Software Engineering), no. 08, 25 April 2018 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115018325A (en) * 2022-06-10 2022-09-06 中国银行股份有限公司 Service processing method and device
CN115018325B (en) * 2022-06-10 2024-05-24 中国银行股份有限公司 Service processing method and device

Also Published As

Publication number Publication date
CN114301783B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN107590182B (en) Distributed log collection method
US11121945B2 (en) Methods, systems, and computer readable media for request response processing
US20150067135A1 (en) Member-oriented hybrid cloud operating system architecture and communication method thereof
CN109873736A (en) A kind of micro services monitoring method and system
CN102663649B (en) Financial derivative transaction system
CN111930529B (en) Data synchronization method, device and system based on message queue and microservice
EP2838243B1 (en) Capability aggregation and exposure method and system
WO2021088641A1 (en) Data transmission method, data processing method, data reception method and device, and storage medium
CN105959349B (en) A kind of Distributed Services end operating system and method
CN110430275A (en) Data processing method, system, calculates equipment and medium at device
US20190362015A1 (en) Database replication system
US20090080432A1 (en) System and method for message sequencing in a broadband gateway
CN103475566A (en) Real-time message exchange platform and distributed cluster establishment method
KR20160147909A (en) System and method for supporting common transaction identifier (xid) optimization and transaction affinity based on resource manager (rm) instance awareness in a transactional environment
CN108632299A (en) Enhance method, apparatus, electronic equipment and the storage medium of registration center's availability
US8650324B2 (en) System and method for reliable distributed communication with guaranteed service levels
CN113037862B (en) Service request processing method, device, equipment and storage medium
CN113422842B (en) Distributed power utilization information data acquisition system considering network load
CN112288577B (en) Transaction processing method, device, electronic equipment and medium for distributed service
CN106027534A (en) System for implementing financial message processing based on Netty
CN114301783A (en) Optimization method and device for micro service, storage medium and electronic device
CN109451078A (en) Transaction methods and device under a kind of distributed structure/architecture
CN114615096A (en) Telecommunication charging method, system and related equipment based on event-driven architecture
US9515876B2 (en) System and method for network provisioning
US11646956B2 (en) Systems and methods for providing bidirectional forwarding detection with performance routing measurements

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant