CN113220481A - Request processing and feedback method and device, computer equipment and readable storage medium - Google Patents

Request processing and feedback method and device, computer equipment and readable storage medium

Info

Publication number
CN113220481A
Authority
CN
China
Prior art keywords
information
module
kernel module
feedback
request message
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110481418.9A
Other languages
Chinese (zh)
Inventor
孙正浩
黄慧
何辉
周敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An E Wallet Electronic Commerce Co Ltd
Original Assignee
Ping An E Wallet Electronic Commerce Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An E Wallet Electronic Commerce Co Ltd filed Critical Ping An E Wallet Electronic Commerce Co Ltd
Priority to CN202110481418.9A priority Critical patent/CN113220481A/en
Publication of CN113220481A publication Critical patent/CN113220481A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/955 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/524 Deadlock detection or avoidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/545 Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space

Abstract

The invention relates to the technical field of cloud services, and discloses a request processing and feedback method, a request processing and feedback device, computer equipment and a readable storage medium, wherein the method comprises the following steps: arranging a preset service model to obtain a kernel module, constructing a URL path of the kernel module and recording the URL path in a preset adaptation module; receiving a request message sent by a client, and performing adaptation processing on the request message to obtain a URL path; invoking the kernel module corresponding to the URL path to calculate the request message and obtain stage information; and summarizing and arranging the stage information to obtain feedback information, and sending the feedback information to the client. The invention also relates to blockchain technology, and the information can be stored in blockchain nodes. Because service arrangement is self-defined, the service models in the kernel module are easy to add, remove and modify; the invention therefore does not need to consume a large amount of development resources and avoids having to write a large amount of code to implement the kernel module.

Description

Request processing and feedback method and device, computer equipment and readable storage medium
Technical Field
The invention relates to the technical field of cloud services, in particular to a request processing and feedback method, a request processing and feedback device, computer equipment and a readable storage medium.
Background
Under the current Internet micro-service architecture, a service gateway (such as a gateway server) acts as the bridge between the back-end micro-service system and the front-end entrance. The service gateway mainly supports business functions such as protocol conversion, aggregation, service arrangement and response cutting, and takes over non-business functions such as rate limiting, circuit breaking, degradation, anti-abuse protection, authority verification and load balancing.
In the construction of a service gateway, the non-business functions, as basic capabilities, can generally be built on the enterprise's internal RPC framework and open-source components, whereas the business functions are currently still implemented by manually writing code.
However, the inventors realized that this approach not only consumes a large amount of development resources (such as manpower, material resources and time) on the business-function development of the service gateway, resulting in low development efficiency, but also leaves the application range of the service gateway very limited because the manually written code is difficult to modify.
Disclosure of Invention
The invention aims to provide a request processing and feedback method, a request processing and feedback device, computer equipment and a readable storage medium, which are used for solving the problems that in the prior art, the development efficiency is low because a large amount of development resources are consumed for the development of service functions of a service gateway, and the application range of the service gateway is very limited because manually written codes are difficult to modify.
In order to achieve the above object, the present invention provides a request processing and feedback method, including:
arranging a preset service model to obtain a kernel module, and constructing a URL path of the kernel module, wherein the URL path is a resource locator reflecting the storage position of the kernel module;
receiving a request message sent by a client, and carrying out adaptation processing on the request message to obtain a URL path;
invoking a kernel module corresponding to the URL path to calculate the request message to obtain stage information;
and summarizing and arranging the stage information to obtain feedback information, and sending the feedback information to the client.
In the above scheme, before the preset service model is arranged to obtain the kernel module, the method further includes:
creating a service model for performing operation processing on the request message;
before receiving the request message sent by the client, the method further comprises:
and associating the URL path with a preset request identifier.
In the above scheme, the step of arranging the preset service model to obtain a kernel module and constructing the URL path of the kernel module includes:
receiving component information and component arrangement information sent by a control terminal;
acquiring a service model according to the component information, setting the service model as a target component, and arranging the target component according to the component arrangement information to form a service module;
receiving kernel arrangement information sent by a control terminal, arranging the service modules according to the kernel arrangement information, and enabling the service modules to be mutually associated through a forwarding path to form a kernel module, wherein the forwarding path refers to a network address of the service module;
and taking a forwarding path of a first service module in the kernel module as a URL path, and recording the URL path in the adaptation module.
In the above scheme, after the URL path of the kernel module is constructed, the method further includes:
creating a direct mapping cache formed by at least one cache block, and setting an index reflecting the memory address of the cache block on the cache block; the cache block is used for storing phase information, and the phase information is a feedback result generated by a service module in the kernel module according to the request message.
In the foregoing solution, the step of performing adaptation processing on the request message to obtain the URL path includes:
carrying out protocol conversion on the request message to obtain standard information;
and acquiring a URL path corresponding to the reference information of the standard information from a preset registration center.
In the foregoing solution, the step of calling the kernel module corresponding to the URL path to calculate the request message to obtain the phase information includes:
extracting the access information in the request message through a kernel module corresponding to the URL path;
calling a service cluster to calculate the parameter input information to obtain parameter output information;
judging whether the parameter output information meets a preset parameter rule; if so, setting the parameter output information as qualified information; if not, setting it as abnormal information;
and performing remodeling treatment on the qualified information to obtain stage information.
In the above scheme, the step of performing summary arrangement processing on the phase information to obtain feedback information includes:
when the kernel module receives the request message, calling a preset timing module to time the kernel module and caching the stage information generated by the kernel module;
judging whether the timing time reaches a preset abnormal time threshold value according to a preset judgment cycle;
if the abnormal time threshold is reached, judging that the kernel module is abnormal; deleting the cached stage information and finishing the stage information, or setting the cached stage information as information to be processed;
if the abnormal time threshold value is not reached, judging whether termination information sent by the kernel module is received or not; if the termination information is received, setting the cached stage information as information to be processed;
summarizing the information to be processed to obtain summarized information, extracting reserved data corresponding to a specified field in the summarized information through a preset arrangement rule, and sequencing the data to obtain cutting recombination information;
rendering the cutting recombination information to obtain feedback information;
after the phase information is summarized and arranged to obtain the feedback information, the method further comprises the following steps:
and uploading the feedback information to a block chain.
In order to achieve the above object, the present invention further provides a request processing and feedback device, including:
the system comprises an arranging and configuring module, a core module and a URL path, wherein the arranging and configuring module is used for arranging a preset service model to obtain the core module and constructing the URL path of the core module, and the URL path is a resource locator reflecting the storage position of the core module;
the message adaptation module is used for receiving a request message sent by a client and carrying out adaptation processing on the request message to obtain a URL path;
the calling operation module is used for calling a kernel module corresponding to the URL path to operate the request message to obtain stage information;
and the summarizing feedback module is used for summarizing and arranging the stage information to obtain feedback information and sending the feedback information to the client.
To achieve the above object, the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor of the computer device implements the steps of the request processing and feedback method when executing the computer program.
In order to achieve the above object, the present invention further provides a computer-readable storage medium, which stores a computer program, and the computer program stored in the computer-readable storage medium realizes the steps of the request processing and feedback method when being executed by a processor.
According to the request processing and feedback method, device, computer equipment and readable storage medium provided by the invention, arranging and configuring the service model to obtain the kernel module achieves self-defined service arrangement of the kernel module. A large amount of development resources therefore does not need to be consumed; because service arrangement is self-defined, the service models in the kernel module are easy to add, remove and modify, and the kernel module does not have to be implemented by writing a large amount of code. This expands the application range of the gateway server, reduces the code volume and development effort, and saves the development resources required for business-function development.
By providing, for each service module of the kernel module, a thread that computes the request message to obtain stage information, and by treating the storing of that stage information into a cache block as the point at which the thread ends, the kernel module is prevented from occupying a single thread that keeps running the service modules until the feedback information is obtained. The running time of each thread provided by the gateway server is shortened, the request message is processed by asynchronously calling the service cluster to obtain the stage information, and thread blocking caused by high concurrency at the gateway server is avoided.
By cutting and recombining the stage information, the stage information is cut in a self-defined manner so that sensitive data in it is not leaked to the outside, and it is recombined in a self-defined manner so that the data in the feedback information can be arranged according to display requirements, which expands the application range of the gateway server. Meanwhile, because the feedback information is obtained by rendering the cut-and-recombined information, it can be displayed directly after being sent to the client without calling the client's rendering component, which greatly improves the display efficiency of the feedback information.
Drawings
FIG. 1 is a flow chart of a first embodiment of a request processing and feedback method according to the present invention;
FIG. 2 is a schematic diagram of an environment application of a request processing and feedback method according to a second embodiment of the present invention;
FIG. 3 is a flowchart of a request processing and feedback method according to a second embodiment of the present invention;
FIG. 4 is a block diagram of a third embodiment of a request processing and feedback device;
fig. 5 is a schematic diagram of a hardware structure of a computer device according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a request processing and feedback method, a request processing and feedback device, computer equipment and a readable storage medium, which are suitable for the technical field of cloud service and provide the request processing and feedback method based on an arrangement configuration module, a message adaptation module, a call operation module and a summary feedback module. According to the invention, a kernel module is obtained by arranging a service model, a URL path of the kernel module is constructed and recorded in an adaptation module; and calling an adaptation module to perform adaptation processing on the request message to obtain a URL path, calling a kernel module corresponding to the URL path to calculate the request message to obtain stage information, and summarizing and arranging the stage information to obtain feedback information.
The first embodiment is as follows:
referring to fig. 1, a request processing and feedback method of the present embodiment includes:
s102: arranging a preset service model to obtain a kernel module, and constructing a URL path of the kernel module, wherein the URL path is a resource locator reflecting the storage position of the kernel module.
S104: and receiving a request message sent by a client, and carrying out adaptation processing on the request message to obtain a URL path.
S105: and calling a kernel module corresponding to the URL path to calculate the request message to obtain stage information.
S106: and summarizing and arranging the stage information to obtain feedback information, and sending the feedback information to the client.
In an exemplary embodiment, the service models are arranged into a kernel module, where the kernel module (URLModel) is configured to process one front-end request of a client and obtain feedback information, and the kernel module may have at least one service module. Arranging and configuring the service models to obtain the kernel module achieves self-defined service arrangement of the kernel module: a large amount of development resources does not need to be consumed, the service models in the kernel module are easy to add, remove and modify because service arrangement is self-defined, and writing a large amount of code to implement the kernel module (namely, the functional module that realizes the business functions described in the background) is avoided, so that the application range of the gateway server is expanded, the code volume and development effort are reduced, and the development resources required for business-function development are saved.
By extracting the access information in the request message and calling the registration center of the adaptation module to identify the URL path corresponding to the access information, the adaptation processing of the request message is realized, that is: and obtaining the kernel module corresponding to the request message, and further realizing the technical effect of routing the request sent by the client to the kernel module required by the client.
And calculating the request message by calling a kernel module corresponding to the URL path to obtain stage information, storing the stage information into the cache block, and performing summary arrangement processing on the stage information in the cache block to obtain feedback information, wherein the summary arrangement processing refers to a process of cutting and recombining the stage information and rendering the cut and recombined information after cutting and recombining to obtain the feedback information. The gateway server can provide threads for computing the request message to obtain the phase information for each service module of the kernel module, and the phase information is stored in the cache block and serves as a node for thread ending, so that the situation that the kernel module occupies one thread to continuously run the service module until the feedback information is obtained is avoided, the running time of each thread provided by the gateway server is shortened, the technical effect that the request message is processed by the asynchronous calling service cluster to obtain the phase information is achieved, and the problem that the gateway server causes thread blocking due to high concurrency is avoided.
By cutting and recombining the stage information, the stage information is cut in a self-defined manner, sensitive data in the stage information are prevented from being leaked to the outside, the stage information is recombined in a self-defined manner, the data in the feedback information can be arranged according to display requirements, and the application range of the gateway server is expanded. Meanwhile, feedback information is obtained by rendering the cutting recombination information, so that the feedback information can be directly displayed after being sent to the client side, a rendering component of the client side is not required to be called to render the feedback information, and the display efficiency of the feedback information is greatly improved.
Example two:
the embodiment is a specific application scenario of the first embodiment, and the method provided by the present invention can be more clearly and specifically explained through the embodiment.
The method provided in this embodiment is specifically described below by taking as an example that the adaptation module is called to obtain a kernel module in a server running a request processing and feedback method, and the request message is calculated by the kernel module to obtain the feedback information. It should be noted that the present embodiment is only exemplary, and does not limit the protection scope of the embodiments of the present invention.
Fig. 2 schematically shows an environment application diagram of a request processing and feedback method according to the second embodiment of the present application.
In an exemplary embodiment, the servers 2 where the request processing and feedback methods are located are respectively connected with the clients 4 through the network 3; the server 2 may provide services through one or more networks 3, which networks 3 may include various network devices, such as routers, switches, multiplexers, hubs, modems, bridges, repeaters, firewalls, proxy devices, and/or the like. The network 3 may include physical links, such as coaxial cable links, twisted pair cable links, fiber optic links, combinations thereof, and/or the like. The network 3 may include wireless links, such as cellular links, satellite links, Wi-Fi links, and/or the like; the client 4 may be a computer device such as a smart phone, a tablet computer, a notebook computer, and a desktop computer.
Fig. 3 is a flowchart of a request processing and feedback method according to an embodiment of the present invention, which includes steps S201 to S206.
S201: and creating a service model for performing operation processing on the request message. In this step, the adaptation module may be constructed by using a Spring controller and is configured to perform adaptation processing on the request message, where adaptation processing refers to the process of routing the request message to a service model capable of processing it.
It should be noted that the Spring controller is the controller in Spring MVC that is responsible for handling requests dispatched by the DispatcherServlet; after the business processing layer has processed the data requested by the user, the controller encapsulates the data into a Model and returns it to the corresponding View for presentation.
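To make the role of the adaptation module concrete, the sketch below shows one possible Spring MVC entry point that looks up the URL path registered for a request identifier and hands the request message to the corresponding kernel module. It is a minimal illustration only; the Registry and KernelInvoker collaborators, the URL layout and all names are assumptions, not part of the patent.

    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical collaborators: a registry that maps request identifiers to URL paths,
    // and an invoker that runs the kernel module recorded under that path.
    interface Registry { String lookup(String requestId); }
    interface KernelInvoker { String invoke(String urlPath, String requestMessage); }

    @RestController
    public class GatewayAdapterController {
        private final Registry registry;
        private final KernelInvoker kernelInvoker;

        public GatewayAdapterController(Registry registry, KernelInvoker kernelInvoker) {
            this.registry = registry;
            this.kernelInvoker = kernelInvoker;
        }

        // Adaptation: route the incoming request message to the kernel module whose
        // URL path is associated with the request identifier.
        @PostMapping("/gateway/{requestId}")
        public ResponseEntity<String> handle(@PathVariable String requestId,
                                             @RequestBody String requestMessage) {
            String urlPath = registry.lookup(requestId);
            return ResponseEntity.ok(kernelInvoker.invoke(urlPath, requestMessage));
        }
    }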
Illustratively, the business model includes: the system comprises a parameter extraction model, a protocol calling model, a response judgment model and a response remodeling model.
The parameter extraction model (ExtractModel) is used for extracting request parameters in a request message and assembling the request parameters to obtain the parameter entering information;
the protocol calling model (protocol model) is used for executing a protocol, is in communication connection with a service cluster running the protocol, and calls the protocol in the service cluster according to the access information to calculate the access information; the protocol calling model is a calling interface for calling the service cluster;
the model of protocol execution represents one RPC execution. Specific protocols can be realized in an extended mode, such as Dubbo requests, http requests and the like;
the response judgment model (Judggel model) is used for judging whether the result of the protocol execution conforms to the preset standard specification;
a response remolding model (restuctmodel) is used for remolding processing of the results returned by the current protocol. Such as the unified removal of the class attribute returned by the Dubbo return, etc. At this point, a complete Step remote call ends.
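As an illustration of how these four business models could be composed, the following sketch expresses each one as an implementation of a single step-model interface, with each model transforming the output of the previous one. The interface, the class bodies and the string placeholders are assumptions made for readability; they are not the patent's implementation.

    // Hypothetical common contract for the four business models of a Step:
    // each model transforms the output of the previous one.
    interface BusinessModel {
        Object apply(Object input);
    }

    // ExtractModel: pulls the request parameters out of the request message and assembles
    // the entry-parameter information.
    class ExtractModel implements BusinessModel {
        public Object apply(Object requestMessage) {
            return "entryParams(" + requestMessage + ")";
        }
    }

    // ProtocolModel: one RPC execution; calls the service cluster (e.g. Dubbo or HTTP)
    // with the entry parameters and returns the exit-parameter information.
    class ProtocolModel implements BusinessModel {
        public Object apply(Object entryParams) {
            return "exitParams(" + entryParams + ")"; // stub for the remote call
        }
    }

    // JudgeModel: checks whether the exit parameters meet the preset standard specification.
    class JudgeModel implements BusinessModel {
        public Object apply(Object exitParams) {
            if (exitParams == null) {
                throw new IllegalStateException("abnormal information");
            }
            return exitParams; // qualified information
        }
    }

    // RestructModel: reshapes the qualified information (e.g. removing the "class"
    // attribute of a Dubbo result) to yield the stage information of this Step.
    class RestructModel implements BusinessModel {
        public Object apply(Object qualified) {
            return "stageInfo(" + qualified + ")";
        }
    }

Chaining the four apply calls in order would correspond to one complete Step remote call as described above.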
S202: arranging a preset service model to obtain a kernel module, and constructing a URL path of the kernel module, wherein the URL path is a resource locator reflecting the storage position of the kernel module.
In this step, a service model is organized into a kernel module, where the kernel module (URLModel) is configured to perform operation processing on a primary front-end request of a client and obtain feedback information, and the kernel module may have at least one service module.
The adaptation module is provided with a registration center, and the registration center is used for correlating the request identifier of the request message with the URL (uniform resource locator) of the kernel module, routing the received request message to the URL correlated with the request identifier of the request message, and enabling the kernel module of the URL to process the request message to obtain feedback information, so that a large amount of development resources do not need to be consumed, and the service model in the kernel module is convenient to increase, decrease and modify due to the self-definition of service arrangement.
In a preferred embodiment, the step of arranging the preset service model to obtain a kernel module and constructing the URL path of the kernel module includes:
s21: and receiving the component information and the component arrangement information sent by the control terminal.
In this step, the component information reflects a service model required for completing a specified task, and the component arrangement information defines an order in which the service model is arranged for completing the specified task.
S22: and acquiring a service model according to the component information, setting the service model as a target component, and arranging the target component according to the component arrangement information to form a service module.
In this step, the service model specified in the component information is set as a target component, and the target components are arranged in the sequence defined in the component arrangement information, so as to obtain a service module that completes the specified task. The service module (StepModel) is used for performing operation processing on any back-end request link within one front-end request of the client so as to complete the specified task. The front-end request refers to a complete request sent by a client and comprises the step requests of at least one node/link; a back-end request is the step request of any node/link that needs to be completed in the course of fulfilling the front-end request. Illustratively, if the front-end request comprises the four step requests of request-parameter extraction, protocol execution, result judgment and response remodeling, then the service module sequentially includes the parameter extraction model, the protocol calling model, the protocol execution model, the response judgment model and the response remodeling model.
S23: receiving kernel arrangement information sent by a control terminal, arranging the service modules according to the kernel arrangement information, and enabling the service modules to be mutually associated through a forwarding path to form the kernel module, wherein the forwarding path refers to a network address of the service module.
In this step, after a preceding service module generates stage information, the information may be forwarded to the following service module through the forwarding path, so that the service modules in the kernel module are associated with one another and process the request message in the order defined in the kernel arrangement information, finally yielding the feedback information. The kernel arrangement information defines the order in which the service modules are arranged for completing the specified service, and the request field required when the next service module is called is obtained from the stage information generated by the preceding service module. A forwarding path of a service module is constructed from the request field and the storage position of the service module, and the forwarding path reflects the network address of the service module. The forwarding path can be set through a preset forwarding template, such as: storage location/{{body.request field}}/{{header.authentication information}}. In this embodiment, the authentication information may be obtained from the header of the request message.
Illustratively, the service of the kernel module is "inquire the hotel reserved by the user", the request message includes order information, and the feedback information of the service includes user information, hotel information and room information.
The service module Step1 of the kernel module receives the order information (such as the orderID parameter) and the authentication information (such as the Authorization parameter) transmitted by the client. The forwarding path of the service module Step1 is then constructed through the forwarding template as: /getOrderInfo/{{body.orderID}}/{{header.Authorization}}, so that the service module Step1 can be reached through this forwarding path and calls the service cluster to obtain user information (such as userID) according to the order information.
The service module Step2 of the kernel module receives the user information (such as userID) generated by the service module Step1 and the authentication information (such as the Authorization parameter) transmitted by the client. The forwarding path of the service module Step2 is then constructed through the forwarding template as: /getUserInfo/{{body1.userID}}/{{header.Authorization}}, so that the service module Step2 can be reached through this forwarding path and calls the service cluster to obtain hotel information (such as hotelID) according to the user information.
The service module Step3 of the kernel module receives the user information generated by the service module Step1, the hotel information generated by the service module Step2 and the authentication information transmitted by the client. The forwarding path of the service module Step3 constructed through the forwarding template is: /getUserInfo/{{body1.userID.hotelID}}/{{header.Authorization}}, so that the service module Step3 can be reached through this forwarding path and calls the service cluster to obtain room information (such as roomID) according to the user information and the hotel information.
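The worked example above amounts to substituting body and header fields into placeholder templates. A minimal sketch of such a substitution is given below, assuming a {{...}} placeholder syntax and a flat field map; the class name, the regular expression and the example values are illustrative assumptions only.

    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Illustrative rendering of a forwarding template such as
    // "/getOrderInfo/{{body.orderID}}/{{header.Authorization}}".
    public class ForwardingTemplate {
        private static final Pattern PLACEHOLDER = Pattern.compile("\\{\\{(.+?)\\}\\}");

        public static String render(String template, Map<String, String> fields) {
            Matcher m = PLACEHOLDER.matcher(template);
            StringBuilder out = new StringBuilder();
            while (m.find()) {
                // Replace {{body.orderID}} with the value stored under "body.orderID".
                String value = fields.getOrDefault(m.group(1), "");
                m.appendReplacement(out, Matcher.quoteReplacement(value));
            }
            m.appendTail(out);
            return out.toString();
        }

        public static void main(String[] args) {
            String path = render("/getOrderInfo/{{body.orderID}}/{{header.Authorization}}",
                    Map.of("body.orderID", "12345", "header.Authorization", "token-abc"));
            System.out.println(path); // /getOrderInfo/12345/token-abc
        }
    }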
S24: and taking a forwarding path of a first service module in the kernel module as a URL path, and recording the URL path in the adaptation module.
In this step, when a request message is received, the adaptation module extracts the entry information (e.g., orderID) of the request message, identifies the URL path recorded in the adaptation module that corresponds to the entry information (e.g., /getOrderInfo/{{body.orderID}}/{{header.Authorization}}), and uses that URL path as the request path, thereby routing the request message to the kernel module it requires.
In this embodiment, the URL path and the forwarding path of the kernel module are stored in the registration center of the adaptation module, so that the control end can adjust the sequencing of the service modules in the kernel module, the forwarding path of each service module, and the technical effect of the URL path by modifying the URL path in the registration center, without writing a large amount of codes to manage and modify the kernel module and the service modules therein, thereby improving the configuration and modification efficiency of the kernel module and reducing the code amount.
Further, after the recording the URL path in the adaptation module, the method further includes:
s25: and acquiring the kernel module through the registry and testing the health condition of the kernel module.
In this step, a kernel module and a service module therein are obtained according to the URL path and the forwarding path recorded in the registry, the kernel module and the service module therein are tested by a preset test script and a test result is generated, and the test result is set as the health condition.
The test script is recorded with test information, and the test information is a feedback result according with the test expectation of the test script.
S26: and if the health condition is abnormal downtime, setting the kernel module as an abnormal module, and sealing the URL path corresponding to the abnormal module.
In this step, if the test information cannot be obtained through the test script, a test result recording that the kernel module is abnormally down is generated. The URL path corresponding to the abnormal module is sealed, so that the situation that the server is abnormal, stuck or even halted due to the fact that the request message is operated by calling the abnormal module is avoided.
S27: and if the health condition is normal, continuously checking the next kernel module.
In this step, if the test information can be obtained through the test script, a test result that records that the kernel module is normal is generated.
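One way to realize the health check of steps S25 to S27 is to poll every URL path in the registry with a test script and seal the paths whose kernel modules appear to be abnormally down. The sketch below only illustrates that flow; the test-script call is a stub and all class and method names are assumptions.

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative health check over the URL paths recorded in the registry.
    public class KernelHealthChecker {
        private final Map<String, Object> registry;          // URL path -> kernel module
        private final Set<String> sealedPaths = ConcurrentHashMap.newKeySet();

        public KernelHealthChecker(Map<String, Object> registry) {
            this.registry = registry;
        }

        public void checkAll() {
            for (String urlPath : registry.keySet()) {
                if (runTestScript(urlPath)) {
                    continue;                 // health condition normal: check the next module
                }
                sealedPaths.add(urlPath);     // abnormal downtime: seal this URL path
            }
        }

        public boolean isSealed(String urlPath) {
            return sealedPaths.contains(urlPath);
        }

        // Stub: a real implementation would run the preset test script against the
        // kernel module and compare the feedback with the expected test information.
        private boolean runTestScript(String urlPath) {
            return registry.get(urlPath) != null;
        }
    }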
S203: creating a direct mapping cache formed by at least one cache block, and setting an index reflecting the memory address of the cache block on the cache block; the cache block is used for storing phase information, and the phase information is a feedback result generated by a service module in the kernel module according to the request message.
The method comprises the steps that a cache block is built in a kernel module and used for storing phase information, the phase information generated by each service module is stored in the cache block in advance in a staged mode, and after the phase information generated by all the service modules is stored in the cache block, the threads of the gateway server are called to collect and arrange the phase information to obtain feedback information.
The gateway server provides a thread for computing the request message to obtain the phase information for each service module of the kernel module, and the phase information is stored in the cache block as a node for ending the thread, so that the situation that the kernel module occupies one thread to continuously run the service module until the feedback information is obtained is avoided, the running time of each thread provided by the gateway server is shortened, the execution thread of each service module is asynchronously run, and the processing of the request message in a high concurrency state is effectively coped with.
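The thread-per-step behaviour described here can be illustrated with asynchronous tasks that each finish as soon as their stage information has been written into the cache block. The sketch below uses CompletableFuture purely as an example of such asynchronous execution; the class names and the stubbed service-cluster call are assumptions.

    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executor;

    // Illustrative asynchronous execution of one service module (Step): the task ends
    // as soon as the stage information is stored in the cache block, so no thread is
    // held for the whole lifetime of the kernel module.
    public class StepRunner {
        private final Map<String, Object> cacheBlocks = new ConcurrentHashMap<>();

        public CompletableFuture<Void> runStep(String stepId, Object entryParams, Executor pool) {
            return CompletableFuture
                    .supplyAsync(() -> callServiceCluster(entryParams), pool) // compute stage info
                    .thenAccept(stageInfo -> cacheBlocks.put(stepId, stageInfo)); // store and end
        }

        private Object callServiceCluster(Object entryParams) {
            return entryParams; // stub for the real RPC to the service cluster
        }
    }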
In this embodiment, a RenderModel is used as the cache block, where the RenderModel is a computer model that performs clipping and reassembly on the stage information and renders a result obtained after the clipping and reassembly to obtain feedback information.
Structurally, a Direct Mapped (Direct Mapped) Cache is composed of a plurality of Cache blocks (Cache blocks, or Cache lines). Each cache block stores a number of memory locations having contiguous memory addresses. On a 32-bit computer this is typically a double word (dword), i.e. four bytes. Thus, each doubleword has a unique intra-block offset.
Each cache block has an Index (Index) that is typically the low-end portion of the memory address, but does not contain the lowest number of bits occupied by the block offset and the byte offset. A direct-mapped cache with a total data size of 4KB and a cache block size of 16B has a total of 256 cache blocks with an index ranging from 0 to 255. By using a simple shift function, the index of the cache block corresponding to any memory address can be obtained. Since this is a many-to-one mapping, it is necessary to store a piece of data while indicating the exact location of the data in memory. Each cache block is provided with a Tag (Tag). And splicing the tag value and the index of the cache block to obtain the memory address of the cache block. If the offset in the block is added, the corresponding memory address of any block of data can be obtained.
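The shift-and-mask arithmetic described above can be written out explicitly. The sketch below assumes the 4 KB / 16 B geometry used in the text (256 blocks, indices 0 to 255) and shows how the block offset, index and tag are derived from a 32-bit address; it is a generic direct-mapped-cache illustration, not the patent's cache-block implementation.

    // Illustrative index/tag arithmetic for a direct-mapped cache with
    // 4 KB of data and 16-byte cache blocks (256 blocks, indices 0..255).
    public class DirectMappedIndex {
        static final int BLOCK_SIZE = 16;                                         // bytes per block
        static final int BLOCK_COUNT = 4096 / BLOCK_SIZE;                         // 256 blocks
        static final int OFFSET_BITS = Integer.numberOfTrailingZeros(BLOCK_SIZE); // 4
        static final int INDEX_BITS  = Integer.numberOfTrailingZeros(BLOCK_COUNT); // 8

        static int blockOffset(int address) { return address & (BLOCK_SIZE - 1); }
        static int index(int address)       { return (address >>> OFFSET_BITS) & (BLOCK_COUNT - 1); }
        static int tag(int address)         { return address >>> (OFFSET_BITS + INDEX_BITS); }

        public static void main(String[] args) {
            int address = 0x12345678;
            // Tag, index and block offset together reconstruct the memory address of the block.
            System.out.printf("tag=%x index=%d offset=%d%n",
                    tag(address), index(address), blockOffset(address));
        }
    }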
In a preferred embodiment, after creating a direct-mapped cache comprising at least one cache block, and setting an index reflecting a memory address of the cache block on the cache block, the method further includes:
s31: and constructing a feedback rule in the cache block, wherein the feedback rule is used for summarizing and arranging the stage information and unpacking the stage information to obtain feedback information.
In this step, the feedback rule defines a specification for summarizing and arranging the phase information and a field required for unpacking the summarized and arranged phase information.
S32: and constructing a timing thread in the cache block, wherein the timing thread is used for monitoring the time consumed by the kernel module to calculate the request message.
In this step, the time consumed by the service module operation request message is recorded through the timing thread to judge whether the time exceeds a preset time threshold value, so as to judge whether the kernel module is abnormally down in time, and ensure the stability of the gateway server.
S204: and receiving a request message sent by a client, and carrying out adaptation processing on the request message to obtain a URL path.
In order to route a request sent by a client to a kernel module required by the client, the step extracts the access information in the request message and calls a registration center of the adaptation module to identify a URL path corresponding to the access information so as to realize adaptation processing of the request message, namely: and obtaining the kernel module corresponding to the request message.
In a preferred embodiment, before the receiving the request message sent by the client, the method further includes:
and associating the URL path with a preset request identifier.
In this embodiment, the request identifier of the request message and a URL (uniform resource locator) of the kernel module are associated with each other in the registration table of the registration center, and the received request message is routed to the URL associated with its request identifier, so that the kernel module of that URL processes the request message to obtain the feedback information.
In a preferred embodiment, the step of performing adaptation processing on the request message to obtain the URL path includes:
s41: and carrying out protocol conversion on the request message to obtain standard information.
In this step, each service module may be implemented over many different protocols, such as HTTP, Dubbo and gRPC, but many of them are not very friendly to users or are not exposed to the outside at all, such as a Dubbo service; therefore, the request message is subjected to protocol conversion to obtain a standard protocol corresponding to the underlying layer, and in this embodiment the standard protocol is a common protocol in JSON or XML format.
S42: and acquiring a URL path corresponding to the reference information of the standard information from a preset registration center.
In this step, the URL path corresponding to the standard information is acquired from the registry, so as to quickly acquire the kernel module required by the request without constructing a domain name and without writing a large amount of codes for forwarding or configuring the request message.
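A simplified illustration of this adaptation step, combining the protocol conversion into a common key/value form with the registry lookup of the URL path, is given below. The JSON handling is deliberately stubbed out, and every class, field and method name is an assumption.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative adaptation: normalise the incoming request into a common (JSON-style)
    // key/value form, then resolve the URL path registered for its identifier.
    public class AdaptationCenter {
        private final Map<String, String> registry = new ConcurrentHashMap<>();

        public void register(String requestIdentifier, String urlPath) {
            registry.put(requestIdentifier, urlPath);
        }

        public String adapt(Map<String, String> rawRequest) {
            // Protocol conversion stub: a real gateway would translate HTTP/Dubbo/gRPC
            // payloads into one standard format here (e.g. JSON) before the lookup.
            Map<String, String> standardInfo = Map.copyOf(rawRequest);
            String identifier = standardInfo.get("requestIdentifier");
            String urlPath = registry.get(identifier);
            if (urlPath == null) {
                throw new IllegalArgumentException("no kernel module registered for " + identifier);
            }
            return urlPath;
        }
    }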
Further, calling the registration center to poll the health condition of the kernel module;
and if the health condition is abnormal downtime, setting the kernel module as an abnormal module, and sealing a forwarding path corresponding to the abnormal module so as to avoid the occurrence of abnormal, stuck or even dead halt of the server caused by calling the abnormal module to operate the request message.
And if the health condition is normal, continuously checking the next kernel module.
S205: and calling a kernel module corresponding to the URL path to calculate the request message to obtain stage information.
In order to realize that the asynchronous call service cluster processes the request message to obtain the phase information and avoid the problem of thread blocking caused by high concurrency of a gateway server, the step obtains the phase information by calling a kernel module corresponding to the URL path to calculate the request message.
In a preferred embodiment, the step of invoking the kernel module corresponding to the URL path to calculate the request message to obtain the phase information includes:
s51: extracting the access information in the request message through the kernel module corresponding to the URL path;
s52: calling a service cluster to calculate the parameter input information to obtain parameter output information;
s53: judging whether the parameter output information meets a preset parameter rule; if so, setting the parameter output information as qualified information; if not, setting it as abnormal information;
s54: and performing remodeling treatment on the qualified information to obtain stage information.
Specifically, the kernel module corresponding to the forwarding path is set as the execution module; the access information in the request message is extracted through the parameter extraction model in the execution module; the service cluster is called through the protocol calling model in the execution module to calculate the parameter entering information and obtain the parameter exiting information; whether the parameter exiting information meets a preset parameter rule is judged through the response judgment model in the execution module; if so, the parameter exiting information is set as qualified information; if not, it is set as abnormal information; and the qualified information is remodeled through the response remodeling model in the execution module to obtain the stage information.
In this embodiment, a specification for performing remodeling processing on the qualified information is predefined in the response remodeling model, and data deletion, data extraction, and data encapsulation are performed on the qualified information through the specification to obtain stage information, where the specification includes, but is not limited to, a blacklist rule, an unpacking rule, and an encapsulation rule.
Further, the blacklist rule is a computer rule for deleting data corresponding to the blacklist field in the qualified information according to a preset blacklist field, and the computer rule is used for deleting data of the qualified information.
The unpacking rule is a computer rule for extracting data corresponding to the unpacking field in the qualified information according to a preset unpacking field, and is used for extracting the data of the qualified information.
The packaging rule is a computer rule for integrally packaging the qualified information into an integral object, and is used for carrying out data packaging on the qualified information.
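To illustrate how these three kinds of rules could act on the qualified information, the sketch below applies an unpacking field set, a blacklist field set and a final wrapping step in sequence. The field names, the wrapping key "stageInfo" and the class name are illustrative assumptions.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    // Illustrative reshaping of the qualified information into stage information.
    public class ResponseReshaper {
        public static Map<String, Object> reshape(Map<String, Object> qualified,
                                                  Set<String> blacklistFields,
                                                  Set<String> unpackFields) {
            Map<String, Object> kept = new HashMap<>();
            for (String field : unpackFields) {                // unpacking rule: extract fields
                if (qualified.containsKey(field)) {
                    kept.put(field, qualified.get(field));
                }
            }
            kept.keySet().removeAll(blacklistFields);          // blacklist rule: delete fields
            return Map.of("stageInfo", kept);                  // encapsulation rule: wrap as one object
        }

        public static void main(String[] args) {
            Map<String, Object> qualified =
                    Map.of("userID", "u-1", "class", "com.x.Dto", "roomID", "r-9");
            // Prints the stage information without the blacklisted "class" field.
            System.out.println(reshape(qualified, Set.of("class"), Set.of("userID", "roomID", "class")));
        }
    }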
S206: and summarizing and arranging the stage information to obtain feedback information, and sending the feedback information to the client.
In order to achieve the technical effect of asynchronously processing the request message and obtaining the feedback information of the gateway server to reduce the risk of thread blocking of the gateway server, the step stores the stage information into the cache block, and performs summary arrangement processing on the stage information in the cache block to obtain the feedback information, wherein the summary arrangement processing refers to a process of cutting and recombining the stage information and rendering the cut and recombined result to obtain the feedback information, so as to achieve the technical effect of performing customized cutting and recombination rendering on the stage information.
In a preferred embodiment, the step of performing a summary arrangement process on the phase information to obtain feedback information includes:
s61: when the kernel module receives the request message, calling a preset timing module to time the kernel module and caching the stage information generated by the kernel module;
s62: judging whether the timing time reaches a preset abnormal time threshold value according to a preset judgment cycle;
s63: if the abnormal time threshold is reached, judging that the kernel module is abnormal; deleting the cached stage information and finishing the stage information, or setting the cached stage information as information to be processed;
s64: if the abnormal time threshold value is not reached, judging whether termination information sent by the kernel module is received or not; if the termination information is received, setting the cached stage information as information to be processed;
s65: summarizing the information to be processed to obtain summarized information, extracting reserved data corresponding to a specified field in the summarized information through a preset arrangement rule, and sequencing the data to obtain cutting recombination information;
s66: and rendering the cutting recombination information to obtain feedback information.
Specifically, when it is monitored that the kernel module receives the request message, a preset timing module is called to time the kernel module, and phase information generated by the kernel module is stored in the cache block; judging whether the timing time reaches a preset abnormal time threshold value according to a preset judgment cycle; if the abnormal time threshold is reached, judging that the kernel module is abnormal; deleting the stage information stored in the cache block and finishing the deleting, or setting the stage information stored in the cache block as information to be processed; if the abnormal time threshold value is not reached, judging whether termination information sent by the kernel module is received or not; the termination information is an end packet generated by the kernel module after completing processing the request information, for example: end.
And if the termination information is received, setting the stage information stored in the cache block as information to be processed. If the termination information is not received, judging whether the timing time reaches a preset abnormal time threshold value again according to a preset judgment cycle. Summarizing the information to be processed to obtain summarized information, extracting reserved data corresponding to a specified field in the summarized information through a preset arrangement rule, and sequencing the data to obtain cutting recombination information, wherein the arrangement rule comprises a reserved field for reserving the content in the summarized information and sequence information for sequencing the data corresponding to the reserved field.
The cutting-recombination information is rendered to obtain the feedback information, where the feedback information is obtained through a rendering component preset in the cache block (such as a Render component) that renders the cutting-recombination information. Because the feedback information has already been rendered at this point, it can be displayed directly after being sent to the client without calling the client's rendering component, which greatly improves the display efficiency of the feedback information.
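The summary-arrangement flow described above (timing the kernel module, treating receipt of the end packet or expiry of the abnormal-time threshold as the trigger, keeping only the reserved fields in the prescribed order, and rendering the result) could be illustrated as below. The threshold handling, the field-ordering rule and the JSON-style rendering are simplifications, and all names are assumptions.

    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative summary-arrangement step: once all stage information is available
    // (or the abnormal-time threshold is hit), keep only the reserved fields, order them
    // according to the arrangement rule and render the result as the feedback text.
    public class SummaryArranger {
        public static String arrange(Map<String, Object> cachedStageInfo,
                                     List<String> reservedFieldsInOrder,
                                     boolean endPacketReceived,
                                     long elapsedMillis,
                                     long abnormalThresholdMillis) {
            if (elapsedMillis >= abnormalThresholdMillis && !endPacketReceived) {
                return "{\"error\":\"kernel module abnormal\"}";   // timed out: treat as abnormal
            }
            Map<String, Object> cut = new LinkedHashMap<>();
            for (String field : reservedFieldsInOrder) {            // cutting and re-ordering
                if (cachedStageInfo.containsKey(field)) {
                    cut.put(field, cachedStageInfo.get(field));
                }
            }
            return render(cut);                                     // rendering step
        }

        private static String render(Map<String, Object> cut) {
            StringBuilder sb = new StringBuilder("{");
            cut.forEach((k, v) -> sb.append('"').append(k).append("\":\"").append(v).append("\","));
            if (sb.length() > 1) sb.setLength(sb.length() - 1);     // drop trailing comma
            return sb.append('}').toString();
        }
    }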
Preferably, after the step of summarizing and arranging the phase information to obtain the feedback information, the method further includes:
and uploading the feedback information to a block chain.
It should be noted that corresponding digest information is obtained from the feedback information; specifically, the digest information is obtained by hashing the feedback information, for example with the SHA-256 algorithm. Uploading the digest information to the blockchain ensures security, fairness and transparency for the user. The user equipment may download the digest information from the blockchain to verify whether the feedback information has been tampered with. The blockchain referred to in this example is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks associated by cryptographic methods, where each data block contains information on a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
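A minimal sketch of producing such a digest with the JDK's MessageDigest is shown below, assuming SHA-256 and hexadecimal encoding; the class and method names are illustrative and the blockchain upload itself is out of scope.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Illustrative digest of the feedback information before uploading it to the blockchain.
    public class FeedbackDigest {
        public static String sha256Hex(String feedback) throws NoSuchAlgorithmException {
            byte[] hash = MessageDigest.getInstance("SHA-256")
                                       .digest(feedback.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : hash) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws NoSuchAlgorithmException {
            System.out.println(sha256Hex("feedback information"));
        }
    }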
Example three:
referring to fig. 4, a request processing and feedback device 1 of the present embodiment includes:
the arrangement configuration module 12 is configured to arrange a preset service model to obtain a kernel module, construct a URL path of the kernel module, and record the URL path in a preset adaptation module, where the URL path is a resource locator that reflects a storage location of the kernel module;
the message adaptation module 14 is configured to receive a request message sent by a client, and perform adaptation processing on the request message to obtain a URL path;
a calling operation module 15, configured to call a kernel module corresponding to the URL path to operate the request message to obtain phase information;
and the summarizing feedback module 16 is used for summarizing and arranging the stage information to obtain feedback information, and sending the feedback information to the client.
Optionally, the request processing and feedback device 1 further includes:
the creation module 11 is configured to create an adaptation module for performing adaptation processing on a request message sent by a client, and create a service model for performing operation processing on the request message.
Optionally, the request processing and feedback device 1 further includes:
a building module 13, configured to build a cache block for storing phase information in the kernel module, where the phase information is a feedback result generated by a service module in the kernel module according to the request message.
Optionally, the orchestration configuration module 12 further includes:
an arrangement information input unit 121, configured to receive component information and component arrangement information sent by a control end;
the component arranging unit 122 is configured to obtain a service model according to the component information, set the service model as a target component, and arrange the target component according to the component arranging information to form a service module;
the module arranging unit 123 is configured to receive kernel arranging information sent by the control terminal, arrange the service modules according to the kernel arranging information, and enable each of the service modules to be associated with each other through a forwarding path to form a kernel module, where the forwarding path refers to a network address of a service module;
a path recording unit 124, configured to take a forwarding path of a first service module in the kernel module as a URL path, and record the URL path in the adaptation module.
Optionally, the orchestration configuration module 12 further includes:
a module testing unit 125, configured to obtain a kernel module through a registry and test a health status of the kernel module;
an exception sealing unit 126, configured to set the kernel module as an exception module when the health condition is an exception downtime, and seal a URL path corresponding to the exception module;
and a normal jump unit 127, configured to continue checking a next kernel module when the health condition is normal.
Optionally, the building module 13 further includes:
a rule constructing unit 131, configured to construct a feedback rule in the cache block, where the feedback rule is used to summarize and arrange the stage information, and perform unpacking processing on the stage information to obtain feedback information;
a thread constructing unit 132, configured to construct a timing thread in the cache block, and configured to monitor time consumed by the kernel module to calculate the request message.
Optionally, the message adaptation module 14 further includes:
a protocol conversion unit 141, configured to perform protocol conversion on the request message to obtain standard information;
a path obtaining unit 142, configured to obtain, from a preset registry, a URL path corresponding to the reference information of the standard information.
Optionally, the calling operation module 15 further includes:
an identification extraction unit 151, configured to extract the entry information in the request message through a kernel module corresponding to the URL path;
an information operation unit 152, configured to invoke a service cluster to operate the entry parameter information to obtain the exit parameter information;
an information determining unit 153, configured to determine whether the exit parameter information satisfies a preset parameter rule; if so, set the exit parameter information as qualified information; if not, set it as abnormal information;
and an information reshaping unit 154, configured to perform reshaping processing on the qualified information to obtain stage information.
Optionally, the summary feedback module 16 further includes:
a timing cache unit 161, configured to call a preset timing module to time the kernel module when it is monitored that the kernel module receives the request message, and cache the stage information generated by the kernel module;
a time determination unit 162, configured to determine, according to a preset determination cycle, whether the timed duration reaches a preset abnormal time threshold;
a time exception handling unit 163, configured to determine that the kernel module is abnormal when the abnormal time threshold is reached, and either delete the cached stage information and end the process, or set the cached stage information as information to be processed and call the summarizing and arranging unit 167;
a time normal processing unit 164, configured to determine whether termination information sent by the kernel module is received when the abnormal time threshold is not reached;
an information terminating unit 165, configured to set the cached stage information as information to be processed when the termination information is received, and invoke the summarizing and arranging unit 167;
an information continuing unit 166, configured to invoke the time determination unit 162 when the termination information is not received;
the summarizing and arranging unit 167 is configured to summarize the to-be-processed information to obtain summarized information, extract reserved data corresponding to specified fields in the summarized information according to a preset arranging rule, and sort the data to obtain clipping and recombination information;
and an information rendering unit 168, configured to render the clipping and recombination information to obtain the feedback information.
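The summary feedback flow (timing, threshold check, termination handling, summarization, trimming and rendering) is sketched below; the streaming interface, the terminate flag and the sort key are assumptions made for the example, not the claimed design.

import time
from typing import Dict, Iterable, List

def collect_feedback(stage_stream: Iterable[Dict],
                     abnormal_threshold: float,
                     reserved_fields: List[str]) -> Dict:
    cached, start = [], time.monotonic()
    for item in stage_stream:                          # stage information produced by the kernel module
        if time.monotonic() - start > abnormal_threshold:
            break                                      # kernel module judged abnormal
        cached.append(item)
        if item.get("terminate"):                      # termination information received
            break
    # Summarize, keep only the reserved data for the specified fields, then sort.
    summary = [{k: rec[k] for k in reserved_fields if k in rec} for rec in cached]
    clipped = sorted(summary, key=lambda rec: str(rec))
    return {"feedback": clipped}                       # rendered feedback information

# Hypothetical usage
stages = [{"step": "auth", "ok": True},
          {"step": "quote", "ok": True, "terminate": True}]
print(collect_feedback(stages, abnormal_threshold=5.0, reserved_fields=["step", "ok"]))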
The technical scheme utilizes the container arrangement technology of cloud services: a preset service model is arranged to obtain a kernel module, and a URL path of the kernel module is constructed, where the URL path is a resource locator reflecting the storage location of the kernel module; a request message sent by a client is received and adapted to obtain a URL path; the kernel module corresponding to the URL path is invoked to process the request message to obtain stage information; and the stage information is summarized and arranged to obtain feedback information, which is sent to the client.
Embodiment four:
In order to achieve the above object, the present invention further provides a computer device 5. The components of the request processing and feedback device in the third embodiment may be distributed over different computer devices, and the computer device 5 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including a stand-alone server or a server cluster composed of multiple application servers) for executing programs, and the like. The computer device of this embodiment at least includes, but is not limited to, a memory 51 and a processor 52, which may be communicatively connected to each other through a system bus, as shown in fig. 5. It should be noted that fig. 5 only shows a computer device with certain components, but it should be understood that not all of the shown components are required to be implemented; more or fewer components may be implemented instead.
In this embodiment, the memory 51 (i.e., a readable storage medium) includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 51 may be an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. In other embodiments, the memory 51 may be an external storage device of a computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the computer device. Of course, the memory 51 may also include both internal and external storage devices of the computer device. In this embodiment, the memory 51 is generally used for storing an operating system and various application software installed on the computer device, such as program codes of the request processing and feedback device in the third embodiment. Further, the memory 51 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 52 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 52 is typically used to control the overall operation of the computer device. In this embodiment, the processor 52 is configured to run the program codes stored in the memory 51 or process data, for example, run the request processing and feedback device, so as to implement the request processing and feedback method of the first embodiment and the second embodiment.
Embodiment five:
to achieve the above objects, the present invention also provides a computer readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application store, etc., on which a computer program is stored, which when executed by a processor 52, implements corresponding functions. The computer-readable storage medium of this embodiment is used for storing a computer program for implementing the request processing and feedback method, and when executed by the processor 52, implements the request processing and feedback method of the first embodiment and the second embodiment.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A request processing and feedback method is characterized by comprising the following steps:
arranging a preset service model to obtain a kernel module, and constructing a URL path of the kernel module, wherein the URL path is a resource locator reflecting the storage position of the kernel module;
receiving a request message sent by a client, and carrying out adaptation processing on the request message to obtain a URL path;
invoking a kernel module corresponding to the URL path to calculate the request message to obtain stage information;
and summarizing and arranging the stage information to obtain feedback information, and sending the feedback information to the client.
2. The request processing and feedback method of claim 1, wherein before the preset service model is arranged to obtain the kernel module, the method further comprises:
creating a service model for performing operation processing on the request message;
before receiving the request message sent by the client, the method further comprises:
and associating the URL path with a preset request identifier.
3. The request processing and feedback method of claim 1, wherein the step of arranging the preset service model to obtain a kernel module and constructing the URL path of the kernel module comprises:
receiving component information and component arrangement information sent by a control terminal;
acquiring a service model according to the component information, setting the service model as a target component, and arranging the target component according to the component arrangement information to form a service module;
receiving kernel arrangement information sent by a control terminal, arranging the service modules according to the kernel arrangement information, and enabling the service modules to be mutually associated through a forwarding path to form a kernel module, wherein the forwarding path refers to a network address of the service module;
and taking the forwarding path of the first service module in the kernel module as a URL path.
4. The request processing and feedback method of claim 1, wherein after said constructing the URL path of the kernel module, the method further comprises:
creating a direct-mapped cache formed by at least one cache block, and setting, on the cache block, an index reflecting the memory address of the cache block; the cache block is used for storing stage information, and the stage information is a feedback result generated by a service module in the kernel module according to the request message.
5. The request processing and feedback method of claim 1, wherein the step of performing adaptation processing on the request message to obtain the URL path comprises:
carrying out protocol conversion on the request message to obtain standard information;
and acquiring, from a preset registry, the URL path corresponding to the input parameter information in the standard information.
6. The request processing and feedback method of claim 1, wherein the step of invoking the kernel module corresponding to the URL path to calculate the request message to obtain the stage information comprises:
extracting the input parameter information from the request message through the kernel module corresponding to the URL path;
calling a service cluster to perform operations on the input parameter information to obtain output parameter information;
judging whether the output parameter information meets a preset parameter rule; if so, setting the output parameter information as qualified information; if not, setting the output parameter information as abnormal information;
and performing reshaping processing on the qualified information to obtain the stage information.
7. The request processing and feedback method of claim 1, wherein the step of summarizing and arranging the stage information to obtain the feedback information comprises:
when it is monitored that the kernel module receives the request message, calling a preset timing module to time the kernel module and caching the stage information generated by the kernel module;
judging, according to a preset judgment cycle, whether the timed duration reaches a preset abnormal time threshold;
if the abnormal time threshold is reached, judging that the kernel module is abnormal, and deleting the cached stage information and ending the process, or setting the cached stage information as information to be processed;
if the abnormal time threshold is not reached, judging whether termination information sent by the kernel module is received; if the termination information is received, setting the cached stage information as information to be processed;
summarizing the information to be processed to obtain summarized information, extracting reserved data corresponding to specified fields in the summarized information according to a preset arranging rule, and sorting the data to obtain clipping and recombination information;
rendering the clipping and recombination information to obtain the feedback information;
after the stage information is summarized and arranged to obtain the feedback information, the method further comprises:
and uploading the feedback information to a blockchain.
8. A request processing and feedback apparatus, comprising:
the system comprises an arranging and configuring module, a core module and a URL path, wherein the arranging and configuring module is used for arranging a preset service model to obtain the core module and constructing the URL path of the core module, and the URL path is a resource locator reflecting the storage position of the core module;
the message adaptation module is used for receiving a request message sent by a client and carrying out adaptation processing on the request message to obtain a URL path;
the calling operation module is used for calling a kernel module corresponding to the URL path to operate the request message to obtain stage information;
and the summarizing feedback module is used for summarizing and arranging the stage information to obtain feedback information and sending the feedback information to the client.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the request processing and feedback method according to any of claims 1 to 7 are implemented when the computer program is executed by the processor of the computer device.
10. A computer-readable storage medium, on which a computer program is stored, the computer program stored in the computer-readable storage medium, when being executed by a processor, implementing the steps of the request processing and feedback method according to any one of claims 1 to 7.
CN202110481418.9A 2021-04-30 2021-04-30 Request processing and feedback method and device, computer equipment and readable storage medium Pending CN113220481A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110481418.9A CN113220481A (en) 2021-04-30 2021-04-30 Request processing and feedback method and device, computer equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN113220481A true CN113220481A (en) 2021-08-06

Family

ID=77090499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110481418.9A Pending CN113220481A (en) 2021-04-30 2021-04-30 Request processing and feedback method and device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113220481A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110764752A (en) * 2019-11-08 2020-02-07 普元信息技术股份有限公司 System and method for realizing graphical service arrangement of Restful service based on micro-service architecture
CN111078315A (en) * 2019-12-12 2020-04-28 拉扎斯网络科技(上海)有限公司 Microservice arranging and executing method and system, architecture, equipment and storage medium
CN111818182A (en) * 2020-08-31 2020-10-23 四川新网银行股份有限公司 Micro-service arranging and data aggregating method based on Spring closed gateway
US10832191B1 (en) * 2017-11-16 2020-11-10 Amdocs Development Limited System, method, and computer program for metadata driven interface orchestration and mapping
CN112015372A (en) * 2020-07-24 2020-12-01 北京百分点信息科技有限公司 Heterogeneous service arranging method, processing method and device and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113973139A (en) * 2021-10-20 2022-01-25 北京沃东天骏信息技术有限公司 Message processing method and device
CN114221874A (en) * 2021-12-14 2022-03-22 平安壹钱包电子商务有限公司 Traffic analysis and scheduling method and device, computer equipment and readable storage medium
CN114221874B (en) * 2021-12-14 2023-11-14 平安壹钱包电子商务有限公司 Traffic analysis and scheduling method and device, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination