CN114327949A - Service processing system and method for using same - Google Patents

Service processing system and method for using same

Info

Publication number
CN114327949A
CN114327949A
Authority
CN
China
Prior art keywords
service
layer
application
access requests
application layer
Prior art date
Legal status
Pending
Application number
CN202111634852.2A
Other languages
Chinese (zh)
Inventor
乐敏睿
何寅华
瞿盛
姚峥嵘
王小松
夏元俊
裴大鹏
Current Assignee
Shopex Software Co ltd
Original Assignee
Shopex Software Co ltd
Priority date
Filing date
Publication date
Application filed by Shopex Software Co ltd
Priority to CN202111634852.2A
Publication of CN114327949A


Abstract

The invention provides a service processing system and a method applicable to the same. The system comprises a request receiving layer, a business application layer, a service application layer and a data storage layer which are sequentially deployed. The request receiving layer is adapted to receive one or more external access requests; the business application layer is adapted to configure, according to the access requests, a plurality of business applications required for processing the one or more access requests; the service application layer is adapted to configure one or more service applications required for completing each business application; and the data storage layer is adapted to store data from the one or more service applications. The service processing system and its applicable method can effectively handle highly concurrent requests, flexibly expand services and offer broad availability.

Description

Service processing system and method for using same
Technical Field
The present invention relates to the field of service application processing, and in particular, to a service processing system and a method for the same.
Background
Existing platform architectures generally adopt a traditional monolithic application architecture of high complexity: all functional modules are packaged together, which blurs module boundaries, obscures dependency relationships and leaves modules stacked in disorder. Operation and maintenance, secondary development, troubleshooting and the like therefore become very difficult for the whole project, and every modification or added function risks introducing hidden defects.
In particular, with a conventional monolithic application architecture, any modification to the monolithic application requires redeploying the entire application, which wastes resources and is prone to errors. In addition, some existing architectural models cannot invoke services flexibly, and they often fail to cope with highly concurrent access requests.
With the rapid development of the internet, business and services become increasingly complex and the volume of concurrent requests keeps growing, so the existing architectures can hardly satisfy such complex demand scenarios.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a service processing system and an applicable method thereof that can effectively handle highly concurrent requests, flexibly expand services and offer broad availability.
In order to solve the above technical problem, the present invention provides a service processing system comprising a request receiving layer, a business application layer, a service application layer and a data storage layer which are sequentially deployed, wherein,
the request receiving layer is adapted to receive one or more external access requests;
the business application layer is adapted to configure, according to the access requests, a plurality of business applications required for processing the one or more access requests;
the service application layer is adapted to configure one or more service applications required for completing each business application; and
the data storage layer is adapted to store data from the one or more service applications.
In an embodiment of the present invention, the system further includes a business application gateway and a service application gateway, wherein the business application gateway is adapted to verify the one or more external access requests, and the service application gateway is adapted to verify interaction requests issued by the business application layer to the service application layer.
In an embodiment of the invention, the system further comprises a cache and a message queue, wherein the cache and the message queue are adapted to be used when multiple access requests arrive simultaneously, so as to reduce the frequency of reading the data storage layer.
In an embodiment of the present invention, the system further includes a service discovery registry, and the service discovery registry is adapted to register a plurality of service applications and to provide a service registration list to each registered service application, so that the registered service applications can invoke one another.
In order to solve the above technical problem, the present invention further provides a service processing method, including the following steps:
receiving, by a request receiving layer, one or more external access requests;
configuring, by a business application layer, a plurality of business applications required for processing the one or more access requests according to the access requests;
configuring, by a service application layer, one or more service applications required to complete each business application;
storing data of the service application layer in a data storage layer; and
completing each business application by means of the one or more service applications, so as to complete the one or more access requests.
In an embodiment of the present invention, the method further includes verifying, by a business application gateway, the one or more external access requests, and/or verifying, by a service application gateway, the interaction requests issued by the business application layer to the service application layer.
In an embodiment of the present invention, the method further includes reducing, by means of a cache and a message queue, the frequency of reading the data storage layer when multiple access requests arrive simultaneously.
In an embodiment of the present invention, when any service application invokes another service application, both service applications are registered with the service discovery registry; the invoking service application obtains a service registration list through the service discovery registry and is allowed to invoke the other service application only when the registration list contains it.
Another aspect of the present invention further provides a service processing system, including:
a memory for storing instructions executable by the processor; and a processor for executing the instructions to implement the service processing method described above.
Another aspect of the present invention also proposes a computer-readable medium storing computer program code which, when executed by a processor, implements the service processing method described above.
Compared with the prior art, the invention has the following advantages:
the service processing system and the applicable method thereof of the invention deploy a plurality of service applications and service applications which can solve the access request on the service application layer and the service application layer through the distributed architecture design. And, business application and service application in business application layer and business application layer can all be modified, adjusted or added flexibly.
With the service processing system and its applicable method, the business application layer does not interact directly with the database, and the interactions of several business applications can be handled by a single service application, so the volume of database access is reduced and, in cooperation with the other modules, the high-concurrency problem is effectively solved.
With the scheme of the invention, each individual service application is simpler to maintain, faster to start, and local modifications are easy to deploy. As described above, under a monolithic application architecture the entire application must be redeployed whenever the monolith is modified; the present scheme effectively solves this problem. The service processing system and its applicable method can scale out or in on demand for different business scenarios and can achieve fine-grained expansion as business requirements change.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the principle of the invention. In the drawings:
FIG. 1 is a diagram illustrating a single application service system in the prior art;
FIG. 2 is a block diagram of a service processing system according to an embodiment of the present invention;
FIG. 3 is a timing diagram illustrating a service authentication verification mechanism in a service processing system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a service discovery mechanism in a service processing system according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a service processing system for processing access requests, in which an embodiment of the invention is employed;
FIG. 6 is a flow chart illustrating a service processing method according to an embodiment of the invention; and
fig. 7 is a system block diagram of a service processing system according to another embodiment of the present invention.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only examples or embodiments of the application, based on which a person skilled in the art can apply the application to other similar scenarios without inventive effort. Unless otherwise apparent from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural as well, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
The relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In the description of the present application, it is to be understood that the orientation or positional relationship indicated by the directional terms such as "front, rear, upper, lower, left, right", "lateral, vertical, horizontal" and "top, bottom", etc., are generally based on the orientation or positional relationship shown in the drawings, and are used for convenience of description and simplicity of description only, and in the case of not making a reverse description, these directional terms do not indicate and imply that the device or element being referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore, should not be considered as limiting the scope of the present application; the terms "inner and outer" refer to the inner and outer relative to the profile of the respective component itself.
Spatially relative terms, such as "above," "over," "on top of," "upper" and the like, may be used herein for ease of description to describe the spatial relationship of one device or feature to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned over, devices described as "above" or "on" other devices or configurations would then be oriented "below" or "under" the other devices or configurations. Thus, the exemplary term "above" can encompass both an orientation of "above" and "below." The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
It should be noted that the terms "first", "second", and the like are used to define the components, and are only used for convenience of distinguishing the corresponding components, and the terms have no special meanings unless otherwise stated, and therefore, the scope of protection of the present application is not to be construed as being limited. Further, although the terms used in the present application are selected from publicly known and used terms, some of the terms mentioned in the specification of the present application may be selected by the applicant at his or her discretion, the detailed meanings of which are described in relevant parts of the description herein. Further, it is required that the present application is understood not only by the actual terms used but also by the meaning of each term lying within.
It will be understood that when an element is referred to as being "on," "connected to," "coupled to" or "contacting" another element, it can be directly on, connected or coupled to, or contacting the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly on," "directly connected to," "directly coupled to" or "directly contacting" another element, there are no intervening elements present. Similarly, when a first component is said to be "in electrical contact with" or "electrically coupled to" a second component, there is an electrical path between the first component and the second component that allows current to flow. The electrical path may include capacitors, coupled inductors, and/or other components that allow current to flow even without direct contact between the conductive components.
Fig. 1 is a schematic diagram of a single application service system 10 in the prior art. The single application service system 10 is composed of a client, a server (Nginx), a single application service and a database (DB). It can be seen that if the single application service needs to be modified or adjusted, the entire service system 10 must be redeployed, and adding a new business application likewise requires redeployment. The procedure is complicated and tedious, and secondary development, troubleshooting and the like become troublesome.
Based on this problem, an embodiment of the present invention proposes a service processing system 20, described with reference to fig. 2. The service processing system 20 can effectively handle highly concurrent requests, flexibly expand services and offers broad availability.
As shown in fig. 2, the service processing system 20 includes a request receiving layer 21, a business application layer 23, a service application layer 25, and a data storage layer 26, which are sequentially deployed. Preferably, in the embodiment shown in fig. 2, the service processing system 20 further comprises a business application gateway 22 and a service application gateway 24.
In particular, the request receiving layer 21 is adapted to receive one or more external access requests. Illustratively, an access request may come from a client terminal. Preferably, a Server Load Balancer (SLB) tool may be used in the request receiving layer 21 to balance the traffic.
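To make the role of the request receiving layer concrete, the short sketch below distributes incoming requests over several application nodes in round-robin fashion. It is only a stand-in written for illustration: the node list, port numbers and the pick_backend function are assumptions, and a real deployment would rely on an off-the-shelf SLB rather than code like this.

```python
# Minimal round-robin balancer sketch for the request receiving layer.
# BACKENDS and the itertools-based rotation are illustrative assumptions,
# not part of the patent; a production SLB is an off-the-shelf component.
import itertools

BACKENDS = ["app-node-1:8080", "app-node-2:8080", "app-node-3:8080"]
_rotation = itertools.cycle(BACKENDS)

def pick_backend() -> str:
    """Return the next business-application node in round-robin order."""
    return next(_rotation)

if __name__ == "__main__":
    for _ in range(5):
        print("forward request to", pick_backend())
```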
The business application gateway 22 is adapted to authenticate the one or more external access requests, for example by login authentication.
The business application layer 23 is adapted to configure, according to the access requests, a plurality of business applications required for processing the one or more access requests. Illustratively, the business applications include a basic-configuration business application, a business application responsible for permissions, and the like.
The service application gateway 24 is adapted to verify the interaction requests issued by the business application layer to the service application layer. It can be seen that the service application gateway 24 and the above-mentioned business application gateway 22 implement different authentication and verification functions, and together they constitute the service authentication and verification mechanism of the service processing system 20 of the present invention. More specifically, fig. 3 is a schematic flow chart illustrating how an external application is authenticated using the service processing system 20 shown in fig. 2, according to an embodiment of the present invention.
As shown in fig. 3, the external application applies for an AppKey through the business application gateway 22; the business application gateway forwards the application to the authorization module in the service application gateway 24, which generates the AppKey and returns it to the external application through the business application gateway 22. At the same time, the authorization module writes the AppKey to the database 26 through the cache 28. During actual authentication, a request from the external application carrying the authorized AppKey passes through the business application gateway 22 and the authorization module of the service application gateway 24 and finally reaches the database 26, which verifies the permission and returns the result. It should be understood that fig. 3 only illustrates one possible authentication verification process by way of example; the present invention is not limited thereto.
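The fig. 3 flow can be pictured with the following minimal sketch, in which a hypothetical AuthorizationModule issues an AppKey, writes it through a cache stand-in to a database stand-in, and later verifies requests against it. All class, method and field names here are invented for illustration and are not taken from the patent.

```python
# Hypothetical sketch of the fig. 3 AppKey flow: the authorization module issues
# a key, writes it to the cache stand-in and the database stand-in, and later
# verifies incoming requests. Names are illustrative only.
import secrets

class AuthorizationModule:
    def __init__(self):
        self._cache = {}      # stands in for cache 28
        self._database = {}   # stands in for data storage layer 26

    def issue_app_key(self, external_app: str) -> str:
        app_key = secrets.token_hex(16)
        self._cache[app_key] = external_app        # write-through cache
        self._database[app_key] = external_app     # persisted copy
        return app_key

    def verify(self, app_key: str) -> bool:
        # Check the cache first; fall back to the database on a miss.
        return app_key in self._cache or app_key in self._database

auth = AuthorizationModule()
key = auth.issue_app_key("external-app")
assert auth.verify(key)
```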
With further reference to fig. 2, the service application layer 25 is adapted to configure one or more service applications required to complete each business application. For example, in some embodiments a certain type of business application may be completed by several service applications working together, or one service application may serve several types of business applications; the present invention is not limited in this respect.
Finally, the data storage layer 26 is adapted to store data from the one or more service applications. As a result, the data storage layer 26 interacts only with the stable service applications, which reduces frequent database access from many business applications and avoids the defects, such as system failures, that such access would cause.
In the embodiment shown in fig. 2, the service processing system 20 preferably further includes a service discovery registry 27 (implemented, for example, with Kubernetes), and the service discovery registry 27 is adapted to register a plurality of service applications and to provide a service registration list to each registered service application, so that the registered service applications can call one another.
To better illustrate the role of the service discovery registry 27, its working principle is explained below with reference to fig. 4. According to fig. 4, both ServiceA and ServiceB are registered in the service discovery registry 27; that is, the service registration list the registry 27 can provide contains ServiceA and ServiceB. When ServiceA obtains the service list through the service discovery registry 27 and finds the trusted ServiceB in that list, it can call ServiceB, thereby expanding the service processing capability of the service processing system 20.
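The registration-then-lookup behaviour described above can be sketched as follows. The in-memory Registry class and its method names are assumptions made for this example; the embodiment itself delegates this role to a tool such as Kubernetes.

```python
# Minimal sketch of the fig. 4 discovery flow: ServiceA may call ServiceB only if
# ServiceB appears in the registration list returned by the registry.
class Registry:
    def __init__(self):
        self._services = {}   # name -> callable endpoint

    def register(self, name, endpoint):
        self._services[name] = endpoint

    def service_list(self):
        return list(self._services)

    def call(self, caller, target, *args):
        # The guard below is the essential point: unregistered services cannot be called.
        if target not in self.service_list():
            raise LookupError(f"{caller} may not call unregistered service {target}")
        return self._services[target](*args)

registry = Registry()
registry.register("ServiceB", lambda order_id: f"inventory for {order_id}")
registry.register("ServiceA", lambda _: None)
print(registry.call("ServiceA", "ServiceB", "order-42"))
```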
Preferably, in the embodiment shown in fig. 2, the service processing system 20 further comprises a cache 28 and a message queue 29, wherein the cache 28 and the message queue 29 are adapted to be used when multiple access requests arrive simultaneously, so as to reduce the frequency of reading the data storage layer. In particular, the high-concurrency handling capability of the service processing system 20 relies on the cooperation of the various modules and functional layers, including the aforementioned cache 28 and message queue 29. To better illustrate how the service processing system 20 of the present invention resolves highly concurrent requests, the flow of processing access requests in the embodiment of fig. 2 is described below with reference to fig. 5.
According to fig. 5, external requests first pass through the request receiving layer 21, where the load balancing tool SLB absorbs them and spreads the request pressure. An inventory query request then enters the inventory center through the business application gateway 22. Illustratively, to handle external requests the inventory center first deploys business applications and service applications according to the architecture shown in fig. 2. After its business logic has run, the inventory center requests the required inventory data from the cache 28; during this process the inventory query request does not directly access the RDS cluster persistence layer (i.e., the data storage layer 26).
More specifically, the data cache 28 is divided into two layers, Redis and ES. Redis stores the routing information of the data: because the data volume is very large and the database is split into sharded tables, Redis caches routing information at the table dimension, and the inventory center uses the obtained routing information to fetch the corresponding data from ES. For external data sources, a synchronization layer is built to synchronize their data into the data storage layer 26 of the system.
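A rough sketch of this two-layer read path follows, with plain dictionaries standing in for Redis and ES; the key format, shard names and field names are all assumptions made for illustration.

```python
# Sketch of the two-layer cache read path: a dict standing in for Redis holds
# table-level routing info for the sharded data; another dict standing in for ES
# holds the inventory documents themselves. Key and index names are hypothetical.
redis_routing = {"sku:1001": "inventory_shard_03"}          # table-dimension routing info
es_documents = {("inventory_shard_03", "sku:1001"): {"sku": "1001", "stock": 57}}

def read_inventory(sku_key: str):
    """Resolve the shard via the routing cache, then fetch the document from ES."""
    shard = redis_routing.get(sku_key)
    if shard is None:
        return None            # routing miss: fall back to the synchronization path
    return es_documents.get((shard, sku_key))

print(read_inventory("sku:1001"))   # the RDS persistence layer is never touched here
```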
An external system calls an interface of the system in which the external request resides (or the external system provides a data acquisition interface for the system to call) to obtain the inventory source data. After certain logic processing, the inventory source data is synchronized to the RDS cluster (data storage layer 26) inside the system by way of asynchronous messages (message queue 29) for data persistence. After the RDS cluster persistence is completed, the data is synchronized to the cache 28 through the internal asynchronous message queue 29 for updating. The entire flow of processing and storing the data involved in the inventory request is thus completed; the cache 28 and the message queue 29, cooperating with the other functional layers and modules, achieve rapid processing of requests in high-concurrency scenarios.
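The write path can be sketched in the same spirit: an update message is published to a queue, and a consumer persists it to the RDS stand-in and then refreshes the cache. Python's queue module replaces the real message middleware here, and every name in the sketch is an assumption rather than part of the patented system.

```python
# Sketch of the write path: source data is published to a message queue; a consumer
# persists it to the RDS stand-in and then refreshes the cache stand-in.
import queue

mq = queue.Queue()                 # stands in for message queue 29
rds = {}                           # stands in for the RDS cluster (data storage layer 26)
cache = {}                         # stands in for cache 28

def publish_inventory_update(sku: str, stock: int):
    mq.put({"sku": sku, "stock": stock})

def drain_queue():
    while not mq.empty():
        msg = mq.get()
        rds[msg["sku"]] = msg["stock"]      # persist first
        cache[msg["sku"]] = msg["stock"]    # then refresh the cache

publish_inventory_update("1001", 57)
drain_queue()
print(rds, cache)
```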
With the above service processing system, highly concurrent requests can be processed effectively, and every business application and service application can be flexibly modified, adjusted or added. Compared with the monolithic application architecture of the prior art, the service processing system can scale on demand for different business scenarios and can be updated at any time as business requirements change, responding to different business needs.
In another aspect of the present invention, referring to fig. 6, a service processing method 60 is further provided, which includes the following steps:
Step 61: receiving, by a request receiving layer, one or more external access requests;
Step 62: configuring, by a business application layer, a plurality of business applications required for processing the one or more access requests according to the access requests;
Step 63: configuring, by a service application layer, one or more service applications required to complete each business application;
Step 64: storing data of the service application layer in a data storage layer; and
Step 65: completing each business application by means of the one or more service applications, so as to complete the one or more access requests.
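As an informal illustration of how steps 61 to 65 fit together, the sketch below wires one hypothetical business application to one hypothetical service application on top of an in-memory data store. Every name in it is invented for this example; it is not the claimed implementation.

```python
# Minimal end-to-end sketch of steps 61-65: layers and application names are
# illustrative assumptions showing how a request flows from the request receiving
# layer down to the data storage layer and back.
data_storage = {"order-42": {"status": "paid"}}                     # step 64 target

def order_query_service(order_id):                                  # service application
    return data_storage.get(order_id)

def order_business_application(order_id):                           # business application
    return {"order": order_id, "detail": order_query_service(order_id)}

def request_receiving_layer(access_request):                        # step 61
    business_apps = [order_business_application]                    # step 62: configure
    # step 63: each business application is backed by its service applications
    results = [app(access_request["order_id"]) for app in business_apps]
    return results                                                  # step 65: request completed

print(request_receiving_layer({"order_id": "order-42"}))
```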
Flow charts are used herein to illustrate the operations performed by systems according to embodiments of the present application. It should be understood that these operations are not necessarily performed exactly in the order shown; various steps may instead be processed in reverse order or simultaneously, and other operations may be added to or removed from these processes.
In some embodiments of the present invention, the service processing method further includes verifying, by the business application gateway, the one or more external access requests, and/or verifying, by the service application gateway, the interaction requests issued by the business application layer to the service application layer.
In some embodiments of the present invention, the service processing method further comprises reducing, by means of the cache and the message queue, the frequency of reading the data storage layer when multiple access requests arrive simultaneously.
In some embodiments of the present invention, the service processing method further includes, when any service application calls another service application, registering both service applications with the service discovery registry; the calling service application obtains the service registration list through the service discovery registry and is allowed to call the other service application only when the registration list contains it.
It is understood that a service processing method of the present invention can be applied to the service processing system described above with reference to fig. 1 to 5. For other details about the service processing method of the present invention, reference may be made to the above description of the service processing system, and further description is omitted here.
Referring to fig. 7, another aspect of the present invention further provides a service processing system 700. According to fig. 7, the service processing system 700 may include an internal communication bus 701, a processor 702, a read-only memory (ROM) 703, a random access memory (RAM) 704, and a communication port 705. When implemented on a personal computer, the service processing system 700 may also include a hard disk 706.
Internal communication bus 701 may enable data communication among the components of service processing system 700. The processor 702 may make the determination and issue the prompt. In some embodiments, the processor 702 may be comprised of one or more processors. The communication port 705 may enable the service processing system 700 to communicate data with the outside. In some embodiments, the service processing system 700 may send and receive information and data from a network through the communication port 705.
Service processing system 700 may also include various forms of program storage units and data storage units such as a hard disk 706, Read Only Memory (ROM)703 and Random Access Memory (RAM)704, capable of storing various data files used in computer processing and/or communications, as well as possible program instructions executed by processor 702. The processor executes these instructions to implement the main parts of the method. The results processed by the processor are communicated to the user device through the communication port and displayed on the user interface.
In addition, another aspect of the present invention also proposes a computer-readable medium storing computer program code, which when executed by a processor implements the service processing method described above.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing disclosure is by way of example only, and is not intended to limit the present application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific language to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." The processor may be one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or a combination thereof. Furthermore, aspects of the present application may be embodied as a computer product, including computer readable program code, on one or more computer readable media. For example, computer-readable media may include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, magnetic tapes), optical disks (e.g., compact disks (CDs), digital versatile disks (DVDs)), smart cards, and flash memory devices (e.g., cards, sticks, key drives).
The computer readable medium may comprise a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. The computer readable medium can be any computer readable medium that can communicate, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, radio frequency signals, or the like, or any combination of the preceding.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, the embodiments may be characterized as having less than all of the features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the number allows a variation of ± 20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
Although the present application has been described with reference to the present specific embodiments, it will be recognized by those skilled in the art that the foregoing embodiments are merely illustrative of the present application and that various changes and substitutions of equivalents may be made without departing from the spirit of the application, and therefore, it is intended that all changes and modifications to the above-described embodiments that come within the spirit of the application fall within the scope of the claims of the application.

Claims (10)

1. A service processing system is characterized by comprising a request receiving layer, a business application layer, a service application layer and a data storage layer which are sequentially deployed, wherein,
the request receiving layer is adapted to receive one or more external access requests;
the business application layer is adapted to configure, according to the access requests, a plurality of business applications required for processing the one or more access requests;
the service application layer is adapted to configure one or more service applications required for completing each business application; and
the data storage layer is adapted to store data from the one or more service applications.
2. The system of claim 1, further comprising a business application gateway and a service application gateway, wherein the business application gateway is adapted to verify the one or more external access requests, and the service application gateway is adapted to verify interaction requests issued by the business application layer to the service application layer.
3. The system of claim 1, further comprising a cache and a message queue, wherein the cache and the message queue are adapted to be used when multiple access requests arrive simultaneously, so as to reduce the frequency of reading the data storage layer.
4. The system according to any one of claims 1 to 3, further comprising a service discovery registry adapted to register a plurality of service applications and to provide a service registration list to each registered service application, so that the registered service applications can call one another.
5. A service processing method, comprising the steps of:
receiving, by a request receiving layer, one or more external access requests;
configuring, by a business application layer, a plurality of business applications required for processing the one or more access requests according to the access requests;
configuring, by a service application layer, one or more service applications required to complete each business application;
storing data of the service application layer in a data storage layer; and
completing each business application by means of the one or more service applications, so as to complete the one or more access requests.
6. The method of claim 5, further comprising verifying, by a business application gateway, the one or more external access requests, and/or verifying, by a service application gateway, the interaction requests issued by the business application layer to the service application layer.
7. The method of claim 5, further comprising reducing, by means of a cache and a message queue, the frequency of reading the data storage layer when multiple access requests arrive simultaneously.
8. The method according to any one of claims 5 to 7, further comprising, when any service application calls another service application, registering both service applications with a service discovery registry, wherein the calling service application obtains a service registration list through the service discovery registry and is allowed to call the other service application only when the registration list contains it.
9. A service processing system, comprising:
a memory for storing instructions executable by the processor; and a processor for executing the instructions to implement the method of any one of claims 5-8.
10. A computer-readable medium having stored thereon computer program code which, when executed by a processor, implements the method of any of claims 5-8.
CN202111634852.2A 2021-12-29 2021-12-29 Service processing system and method for using same Pending CN114327949A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111634852.2A CN114327949A (en) 2021-12-29 2021-12-29 Service processing system and method for using same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111634852.2A CN114327949A (en) 2021-12-29 2021-12-29 Service processing system and method for using same

Publications (1)

Publication Number Publication Date
CN114327949A true CN114327949A (en) 2022-04-12

Family

ID=81017809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111634852.2A Pending CN114327949A (en) 2021-12-29 2021-12-29 Service processing system and method for using same

Country Status (1)

Country Link
CN (1) CN114327949A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107612955A (en) * 2016-07-12 2018-01-19 深圳市远行科技股份有限公司 Micro services provide method, apparatus and system
CN111083199A (en) * 2019-11-23 2020-04-28 上海畅星软件有限公司 High-concurrency, high-availability and service-extensible platform-based processing architecture
CN111814177A (en) * 2020-06-28 2020-10-23 中国建设银行股份有限公司 Multi-tenant data processing method, device, equipment and system based on micro-service
CN113160024A (en) * 2021-04-30 2021-07-23 中国银行股份有限公司 Business management system and method based on micro-service architecture

Similar Documents

Publication Publication Date Title
US11575518B2 (en) Updateable smart contracts
CN106663033B (en) System and method for supporting a wraparound domain and proxy model and updating service information for cross-domain messaging in a transactional middleware machine environment
CN109542611A (en) Database, that is, service system, database dispatching method, equipment and storage medium
CN109063027A (en) A kind of method and device for business processing
US9830333B1 (en) Deterministic data replication with conflict resolution
US20190182341A1 (en) Global provisioning of millions of users with deployment units
EP3376403A1 (en) Method of accessing distributed database and device providing distributed data service
CN109639598A (en) Request processing method, server, storage medium and device based on micro services
CN106921721A (en) A kind of server, conversation managing method and system
CN114971827A (en) Account checking method and device based on block chain, electronic equipment and storage medium
US11636139B2 (en) Centralized database system with geographically partitioned data
CN111427918A (en) Transaction detail data comparison method and device
CN107438067A (en) A kind of multi-tenant construction method and system based on mesos container cloud platforms
CN110399309A (en) A kind of test data generating method and device
US11093477B1 (en) Multiple source database system consolidation
CN108664343A (en) A kind of stateful call method and device of micro services
CN114327949A (en) Service processing system and method for using same
CN111857979A (en) Information management method, system, storage medium and equipment of distributed system
CN105978744A (en) Resource allocation method, device and system
CN113259475B (en) Distributed session processing system and method based on micro-service architecture
US20210365332A1 (en) Rollback for dependency services in cloud native environment
CN103037063A (en) Method, system and assembly manager for mobile phone business dynamic loading
CN107526530A (en) Data processing method and equipment
CN112866351A (en) Data interaction method, device, server and storage medium
CN114730311A (en) Cross-data center read-write consistency in distributed cloud computing platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220412