CN112286668A - Method and system for efficiently processing request data

Method and system for efficiently processing request data

Info

Publication number
CN112286668A
Authority
CN
China
Prior art keywords
service
application system
request
client application
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011298134.8A
Other languages
Chinese (zh)
Inventor
常玉涛
柴建勇
王金亮
王月忠
段雯
李锵
邵兵
郭绍恺
石婷婷
张婧溪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong High Speed Information Group Co Ltd
Original Assignee
Shandong High Speed Information Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong High Speed Information Group Co Ltd
Priority to CN202011298134.8A
Publication of CN112286668A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5022 - Mechanisms to release resources

Abstract

The invention discloses a method and a system for efficiently processing request data, and relates to the technical field of computers. The method comprises the following steps: establishing a standard specification for each application system and expressing it as a universal interface; implementing the universal interface on an interface bus according to the standard specification; the client application system sends a service request through an interface call, and the service application system queries the service based on the data and the service connection count and judges whether an identical call to the request occurred within the cache time; if so, it obtains the cached request result data and sends it to the client application system; if not, it feeds back whether the current request connection count has reached the maximum connection count, the client application system then chooses, based on the feedback, whether to continue calling, and if it does, the service logic orchestration mechanism is started. Through a specific cache-based concurrency processing mechanism, the invention solves the problems of waiting delay and request packet loss that client application systems face under concurrent operation.

Description

Method and system for efficiently processing request data
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a method and a system for efficiently processing request data.
Background
When the application systems of a smart campus access one another, limits on concurrent access and non-uniform communication protocols often cause waiting delays and loss of request data packets. When a service application system reaches its maximum number of client application system connections and a client application system continues to send request data, the service application system typically still accepts the connection request and places it in a cache queue for processing. The client application system is thereby forced to passively wait for data: it neither receives response data in time nor learns the current state of the service application system. This processing mode has little impact on client application systems that process asynchronously, but for a synchronously processing client application system it blocks the client's own services and greatly reduces its data processing efficiency.
To address these problems in the prior art, the invention provides a method and a system for efficiently processing request data, which solve the waiting delay and request packet loss that a client application system suffers under concurrent operation through a specific cache-based concurrency processing mechanism.
Disclosure of Invention
The embodiments of the invention provide a method and a system for efficiently processing request data. Through a specific cache-based concurrency processing mechanism, a client application system can judge the service state and decide for itself whether to wait, while a queue mechanism retains requests that arrive after the maximum concurrency limit has been reached, preventing them from being lost.
In order to solve the technical problems, the invention discloses the following technical scheme:
one aspect of the present invention provides a method for efficiently processing request data, including the following steps:
establishing a standard specification for each application system, and expressing the standard specification as a universal interface;
implementing the universal interface on an interface bus according to the standard specification;
the client application system sends a service request through an interface call; the service application system queries the service based on the data and the service connection count, and judges whether an identical call to the request occurred within the cache time;
if so, it obtains the cached request result data and sends it to the client application system;
if not, it feeds back whether the current request connection count has reached the maximum connection count; the client application system then chooses, based on the feedback, whether to continue calling, and if it does, the service logic orchestration mechanism is started.
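The same-call check in the steps above can be sketched as a TTL cache keyed by the call parameters. This is a minimal sketch, not the patent's implementation; the key fields and the cache time are illustrative assumptions:

```python
import time

class RequestCache:
    """TTL cache: returns a stored result when an identical call arrived within cache_time."""
    def __init__(self, cache_time=60.0):
        self.cache_time = cache_time
        self._store = {}  # key -> (timestamp, result)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        ts, result = entry
        if time.monotonic() - ts > self.cache_time:
            del self._store[key]  # expired: drop the stale entry
            return None
        return result

    def put(self, key, result):
        self._store[key] = (time.monotonic(), result)

# Key by the call parameters so identical calls within cache_time hit the cache.
cache = RequestCache(cache_time=60.0)
key = ("meter", "NO-0001", "query_energy")    # hypothetical device/service names
assert cache.get(key) is None                 # first call: cache miss
cache.put(key, {"kwh": 123.4})
assert cache.get(key) == {"kwh": 123.4}       # same call within cache time: cache hit
```

A real deployment would also bound the store's size and evict expired entries in the background; the sketch only shows the hit/miss decision the method describes.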
Based on the scheme, the method is optimized as follows:
preferably, before the client application system sends the service request through the interface call, the method further comprises the following step:
the service application system establishes a queue-based service system for receiving requests containing data and services sent by the client application system, issuing the corresponding events, and providing a query service based on the data and service connection counts.
Further, starting the service logic orchestration mechanism specifically comprises the following steps:
the service application system determines the interface service driver queue based on the device type and device number transmitted by the client application system, and then judges whether the client application systems in that interface service driver queue have reached the maximum connection count;
if so, the request is held by the cache queue mechanism and request polling is performed;
if not, a real-time request is made and the result is returned once obtained.
Further, if the client application systems in the interface service driver queue have not reached the maximum connection count, the real-time request specifically comprises the following steps:
first, the service request is sent to the interface service driver queue cluster and forwarded to the sub-service cluster corresponding to that driver queue cluster;
the sub-service cluster then performs the driver call, and the service requests of the client application systems are processed by the respective hardware devices or application systems;
each hardware device or application system completes its driver call, sends the request result to the corresponding cache queue, where it is kept for the set cache time, and the result is fed back to the client application system.
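The real-time request pipeline above can be sketched with one driver queue per device type standing in for the driver queue cluster and plain functions standing in for sub-service drivers. All names and values here are illustrative assumptions, not taken from the patent:

```python
import queue

# One queue per device type stands in for the "interface service driver queue
# cluster"; the functions stand in for sub-service driver calls.
DRIVER_QUEUES = {"meter": queue.Queue(), "camera": queue.Queue()}
SUB_SERVICES = {
    "meter": lambda device_no: {"device": device_no, "kwh": 123.4},
    "camera": lambda device_no: {"device": device_no, "frames": 30},
}
RESULT_CACHE = {}  # (device_type, device_no) -> result, kept for the cache time

def real_time_request(device_type, device_no):
    DRIVER_QUEUES[device_type].put(device_no)      # 1) into the driver queue cluster
    queued_no = DRIVER_QUEUES[device_type].get()   # 2) sub-service takes the request
    result = SUB_SERVICES[device_type](queued_no)  # 3) driver call against the device
    RESULT_CACHE[(device_type, device_no)] = result  # 4) result into the cache queue
    return result

assert real_time_request("meter", "NO-0001")["kwh"] == 123.4
assert ("meter", "NO-0001") in RESULT_CACHE    # cached for later identical calls
```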
Another aspect of the invention provides a system for efficiently processing request data. A standard specification is first established for each application system and expressed as a universal interface; according to the active or passive role it plays in a data request, each application system is classified as either a client application system or a service application system.
The client application system sends a service request to the service application system through an interface call and then either receives the cached request result data returned by the service application system or, when no identical call occurred within the cache time, receives the current request connection count fed back by the service application system and chooses whether to continue calling.
The service application system receives the service request sent by the client application system, queries the service based on the data and the service connection count, and judges whether an identical call to the request occurred within the cache time; if so, it obtains the cached request result data and sends it to the client application system; if not, it feeds back the current request connection count to the client application system.
Based on this system, when the client application system chooses to continue calling according to the current request connection count, the service application system determines the interface service driver queue from the device type and device number transmitted by the client application system and then judges whether the client application systems in that queue have reached the maximum connection count; if so, it holds the request in the cache queue mechanism and performs request polling; if not, it makes a real-time request and returns the result once obtained.
Further, the service application system comprises an interface service driver queue cluster and a sub-service cluster corresponding to the driver queue cluster. The real-time request proceeds as follows: the service application system sends the service request to the interface service driver queue cluster, which forwards it to the corresponding sub-service cluster; the sub-service cluster performs the driver call and processes the service requests of the client application systems through the respective hardware devices or application systems; each hardware device or application system completes its driver call, sends the request result to the corresponding cache queue, where it is kept for the set cache time, and the result is fed back to the client application system.
Furthermore, the service application system is also used to establish a queue-based service system for receiving requests containing data and services sent by the client application system, issuing the corresponding events, and providing a query service based on the data and service connection counts.
The effects stated in this summary are only those of the embodiments, not all effects of the invention; the above technical solutions have the following advantages or beneficial effects:
The method for efficiently processing request data provided by the embodiments of the application is used to access devices or applications that have a concurrency limit. During a call, the application systems or devices follow a cooperative orchestration mechanism and a concurrency-limited cache processing mode, synchronizing the use and the release of the cache. Requests that still arrive after the maximum concurrency limit has been reached are retained by a queue mechanism, preventing their loss: when the requested device or application is at its access limit and requests continue to arrive concurrently, the completion of any one request changes the request state, its resources are released promptly, and a request from the request queue is obtained according to the orchestration mechanism and added to the cache. Through this specific cache-based concurrency processing mechanism, the invention solves the waiting delay and request data packet loss that devices or application systems with access concurrency limits suffer under concurrent requests. Based on state judgment and queue management, the service application system returns the current queue state to the client application system, handing the client the initiative to keep waiting for request data or to abandon the connection. During high-concurrency access, the caching technique answers identical request actions of a client application system within a limited time window, avoiding repeated data processing and queries, improving processing efficiency and saving energy.
The system for efficiently processing the request data provided by the embodiment of the application can realize the method of the first aspect and obtain the same effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flowchart of an embodiment of a method for efficiently processing request data provided herein;
FIG. 2 is a flow chart illustrating another embodiment of a method for efficiently processing request data according to the present application;
FIG. 3 is a schematic structural diagram of a system for efficiently processing request data according to an embodiment of the present application;
reference numerals:
1-client application system, 2-service application system, 21-interface service driven queue cluster, 22-sub-service cluster.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiment of the present invention will be clearly and completely described below with reference to the drawings in the embodiment of the present invention, and it is obvious that the described embodiment is only a part of the embodiment of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a flow chart illustrating an embodiment of a method for efficiently processing request data provided by the present application.
Referring to fig. 1, the method of this embodiment includes the steps of:
establishing a standard specification for each application system, and expressing the standard specification as a universal interface;
implementing the universal interface on an interface bus according to the standard specification;
the client application system sends a service request through an interface call; the service application system queries the service based on the data and the service connection count, and judges whether an identical call to the request occurred within the cache time;
if so, it obtains the cached request result data and sends it to the client application system;
if not, it feeds back whether the current request connection count has reached the maximum connection count; the client application system then chooses, based on the feedback, whether to continue calling, and if it does, the service logic orchestration mechanism is started.
Specifically, in the above method, starting the service logic orchestration mechanism comprises the following steps:
the service application system determines the interface service driver queue based on the device type and device number transmitted by the client application system, and then judges whether the client application systems in that interface service driver queue have reached the maximum connection count;
if so, the request is held by the cache queue (Redis MQ) mechanism and request polling is performed;
if not, a real-time request is made and the result is returned once obtained.
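The branch above, hold and poll at the connection limit versus a real-time call below it, can be sketched as follows. The function and field names are illustrative; in the patent the hold is backed by Redis MQ, which a plain list stands in for here:

```python
def handle_call(active_connections, max_connections, request, hold_queue):
    """Orchestration branch: at the connection limit the request is held in
    the cache queue for later polling; below the limit a real-time call is
    made and the result returned."""
    if active_connections >= max_connections:
        hold_queue.append(request)         # Redis MQ hold in the patent's terms
        return ("held", None)
    return ("real_time", {"request": request, "result": "ok"})

held = []
assert handle_call(2, 2, "r1", held) == ("held", None) and held == ["r1"]
status, payload = handle_call(1, 2, "r2", held)
assert status == "real_time" and payload["result"] == "ok"
```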
More specifically, if the client application systems in the interface service driver queue have not reached the maximum connection count, the real-time request comprises the following steps:
first, the service request is sent to the interface service driver queue (MQ) cluster and forwarded to the sub-service cluster corresponding to that driver queue cluster;
the sub-service cluster then performs the driver call, and the service requests of the client application systems are processed by the respective hardware devices or application systems;
each hardware device or application system completes its driver call, sends the request result to the corresponding cache queue, where it is kept for the set cache time, and the result is fed back to the client application system.
Before the client application system sends the service request through the interface call, the method for efficiently processing request data further comprises the following step:
the service application system establishes a queue-based service system for receiving requests containing data and services sent by the client application system, issuing the corresponding events, and providing a query service based on the data and service connection counts.
Fig. 2 is a flowchart illustrating another embodiment of a method for efficiently processing request data according to the present application.
Referring to fig. 2, the method of this embodiment is described using the electricity consumption data of smart-campus electricity meters as an example. The specific implementation proceeds as follows:
standard specifications for services, data, events and the like are formulated and expressed as universal interfaces;
the application system acquires the electricity consumption data of the electricity meters every month;
when property staff or a manager queries the meter electricity consumption data through the application system, the application system transmits the device type, device number and related information to the interface bus and sends it a service request; the interface bus queries the service based on the data and the service connection count and judges whether an identical call to the request occurred within the cache time;
if so, it obtains the cached meter electricity consumption result data and sends it to the application system;
if not, it feeds back whether the current request connection count has reached the maximum connection count; the application system then chooses, based on the feedback, whether to continue calling, and if it does, the service logic orchestration mechanism is started.
Specifically, starting the service logic orchestration mechanism comprises the following steps:
the interface bus determines the interface service driver queue based on the device type and device number transmitted by the application system, and then judges whether the devices in that interface service driver queue have reached the maximum connection count;
if so, the request is held by the cache queue (Redis MQ) mechanism and request polling is performed;
if not, a real-time request is made and the result is returned once obtained.
Further, the real-time request comprises the following steps:
first, the service request is sent to the interface service driver queue (MQ) cluster and forwarded to the sub-service cluster corresponding to that driver queue cluster;
the sub-service cluster then performs the driver call, and the smart-campus energy supervision platform provides the interface service for querying the meter electricity consumption data;
the driver call completes, and the meter electricity consumption result is cached and stored for the set cache time.
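The caching behavior of this meter-reading embodiment can be sketched end to end: an identical query within the cache time is answered from the cache instead of reaching the device, which is the stated source of the efficiency gain. Device numbers and the reading value are illustrative assumptions:

```python
import time

CACHE_TIME = 60.0
_meter_cache = {}  # (device_type, device_no) -> (timestamp, kwh)

def query_meter(device_type, device_no, read_device):
    """Meter query: an identical call within the cache time returns the
    cached kWh figure instead of performing another driver call."""
    key = (device_type, device_no)
    hit = _meter_cache.get(key)
    if hit and time.monotonic() - hit[0] <= CACHE_TIME:
        return hit[1], "cache"
    kwh = read_device(device_no)                  # driver call via the sub-service
    _meter_cache[key] = (time.monotonic(), kwh)   # store for later identical calls
    return kwh, "device"

reads = []
def fake_read(no):
    reads.append(no)       # counts how often the device is actually touched
    return 123.4

assert query_meter("meter", "NO-0001", fake_read) == (123.4, "device")
assert query_meter("meter", "NO-0001", fake_read) == (123.4, "cache")
assert len(reads) == 1     # the second identical call never reached the device
```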
Fig. 3 shows a schematic structural diagram of a system for efficiently processing request data according to an embodiment of the present application.
Referring to fig. 3, in the system for efficiently processing request data, a standard specification is established for each application system and expressed as a universal interface; according to the active or passive role it plays in a data request, each application system is classified as either a client application system 1 or a service application system 2.
The client application system 1 sends a service request to the service application system through an interface call and then either receives the cached request result data returned by the service application system or, when no identical call occurred within the cache time, receives the current request connection count fed back by the service application system and chooses whether to continue calling.
The service application system 2 receives the service request sent by the client application system, queries the service based on the data and the service connection count, and judges whether an identical call to the request occurred within the cache time; if so, it obtains the cached request result data and sends it to the client application system; if not, it feeds back the current request connection count to the client application system.
Specifically, in the above system, when the client application system 1 chooses to continue calling according to the current request connection count, the service application system 2 determines the interface service driver queue from the device type and device number transmitted by the client application system and then judges whether the client application systems in that queue have reached the maximum connection count; if so, it holds the request in the cache queue mechanism and performs request polling; if not, it makes a real-time request and returns the result once obtained.
More specifically, the service application system 2 comprises an interface service driver queue cluster 21 and a sub-service cluster 22 corresponding to the driver queue cluster. The real-time request proceeds as follows: the service application system 2 sends the service request to the interface service driver queue cluster 21, which forwards it to the corresponding sub-service cluster 22; the sub-service cluster 22 performs the driver call and processes the service requests of the client application systems 1 through the respective hardware devices or application systems; each hardware device or application system completes its driver call, sends the request result to the corresponding cache queue, where it is kept for the set cache time, and the result is fed back to the client application system 1.
The system for efficiently processing request data is further used to establish a queue-based service system that receives requests containing data and services sent by the client application system, issues the corresponding events, and provides a query service based on the data and service connection counts.
The method and system for efficiently processing request data solve, through a specific cache-based concurrency processing mechanism, the waiting delay and request data packet loss that devices or application systems with access concurrency limits suffer under concurrent requests. When access to the service application system is at its limit and requests arrive concurrently, the completion of any one request changes the request state, its resources are released promptly, and a request from the request queue is obtained according to the orchestration mechanism and added to the cache. During high-concurrency access, the caching technique answers identical request actions of a client application system within a limited time window, avoiding repeated data processing and queries and improving processing efficiency.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method for efficiently processing requested data, comprising the steps of:
establishing a standard specification for each application system, and expressing the standard specification as a universal interface;
implementing the universal interface on an interface bus according to the standard specification;
the client application system sends a service request through an interface call; the service application system queries the service based on the data and the service connection count, and judges whether an identical call to the request occurred within the cache time;
if so, it obtains the cached request result data and sends it to the client application system;
if not, it feeds back whether the current request connection count has reached the maximum connection count; the client application system then chooses, based on the feedback, whether to continue calling, and if it does, the service logic orchestration mechanism is started.
2. The method of claim 1, wherein starting the service logic orchestration mechanism comprises the following steps:
the service application system determines the interface service driver queue based on the device type and device number transmitted by the client application system, and then judges whether the client application systems in that interface service driver queue have reached the maximum connection count;
if so, the request is held by the cache queue mechanism and request polling is performed;
if not, a real-time request is made and the result is returned once obtained.
3. The method of claim 2, wherein if the client application systems in the interface service driver queue have not reached the maximum connection count, the real-time request comprises the following steps:
first, the service request is sent to the interface service driver queue cluster and forwarded to the sub-service cluster corresponding to that driver queue cluster;
the sub-service cluster then performs the driver call, and the service requests of the client application systems are processed by the respective hardware devices or application systems;
each hardware device or application system completes its driver call, sends the request result to the corresponding cache queue, where it is kept for the set cache time, and the result is fed back to the client application system.
4. The method of claim 1, wherein before the client application system sends the service request through the interface call, the method further comprises the following step:
the service application system establishes a queue-based service system for receiving requests containing data and service classes sent by the client application system, issuing the corresponding events, and providing a query service based on the data and service connection counts.
5. A system for efficiently processing request data, characterized in that a standard specification is established for each application system and expressed as a universal interface, and that, according to the active or passive role it plays in a data request, each application system is classified as either a client application system or a service application system;
the client application system sends a service request to the service application system through an interface call and then either receives the cached request result data returned by the service application system or, when no identical call occurred within the cache time, receives the current request connection count fed back by the service application system and chooses whether to continue calling;
the service application system receives the service request sent by the client application system, queries the service based on the data and the service connection count, and judges whether an identical call to the request occurred within the cache time; if so, it obtains the cached request result data and sends it to the client application system; if not, it feeds back the current request connection count to the client application system.
6. The system for efficiently processing request data according to claim 5, wherein, when the client application system chooses to continue calling according to the current request connection count, the service application system determines the interface service driver queue from the device type and device number transmitted by the client application system and then judges whether the client application systems in that queue have reached the maximum connection count; if so, the service application system holds the request in the cache queue mechanism and performs request polling; if not, it makes a real-time request and returns the result once obtained.
7. The system for efficiently processing request data according to claim 6, characterized in that the service application system comprises an interface service-driven queue cluster and a sub-service cluster corresponding to it, and issuing a real-time request specifically comprises: the service application system sends the service request to the interface service-driven queue cluster, which forwards it to the corresponding sub-service cluster; the sub-service cluster performs the driver call and processes the service requests of all client application systems through the respective hardware devices or application systems; each hardware device or application system completes its driver call, writes the request result to the corresponding cache queue within the set cache time, and the result is then fed back to the client application system.
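The fan-out in claim 7, from queue cluster to sub-service worker to cache queue, can be sketched with standard thread-safe queues. This is an assumed, single-worker illustration; `sub_service_worker` is a hypothetical name, and `str.upper` stands in for the real device driver call:

```python
import queue
import threading

def sub_service_worker(driver_call, requests_q, cache_q):
    """Pull requests forwarded by the interface service-driven queue
    cluster, perform the driver call, and write the result to the
    corresponding cache queue for feedback to the client."""
    while True:
        req = requests_q.get()
        if req is None:             # sentinel: shut the worker down
            break
        result = driver_call(req)   # driver call on one device / system
        cache_q.put((req, result))  # result lands in the cache queue

# One worker thread stands in for one sub-service instance.
requests_q, cache_q = queue.Queue(), queue.Queue()
worker = threading.Thread(target=sub_service_worker,
                          args=(str.upper, requests_q, cache_q))
worker.start()
requests_q.put("ping")
requests_q.put(None)
worker.join()
```

In a full cluster there would be one such worker pool per sub-service, all draining from the queue cluster and all writing into cache queues keyed the same way as the interface queues.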
8. The system of claim 5, wherein the service application system is further configured to establish a queue-based service framework, which receives the data- and service-related requests sent by the client application system, publishes the corresponding events, and provides a query service for the connection counts of both data and services.
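The queue-based framework of claim 8, publishing an event per incoming request and answering connection-count queries, can be sketched as a small publish/subscribe registry. All names here (`QueueServiceFramework`, the `"data"`/`"service"` event kinds) are illustrative assumptions:

```python
from collections import defaultdict

class QueueServiceFramework:
    """Sketch of claim 8: publish an event for each data/service
    request and answer connection-count queries."""

    def __init__(self):
        self.handlers = defaultdict(list)   # event kind -> subscribers
        self.connections = defaultdict(int) # event kind -> request count

    def subscribe(self, event, handler):
        self.handlers[event].append(handler)

    def publish(self, event, payload):
        # Each incoming request raises the corresponding event and is
        # counted toward that kind's connection total.
        self.connections[event] += 1
        for handler in self.handlers[event]:
            handler(payload)

    def connection_count(self, kind):
        """Query service for the connection count of data or services."""
        return self.connections[kind]
```

This is the piece that lets claim 5's service application system answer "how many requests of this kind are in flight" without touching the worker clusters themselves.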
CN202011298134.8A 2020-11-18 2020-11-18 Method and system for efficiently processing request data Pending CN112286668A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011298134.8A CN112286668A (en) 2020-11-18 2020-11-18 Method and system for efficiently processing request data


Publications (1)

Publication Number Publication Date
CN112286668A (en) 2021-01-29

Family

ID=74398478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011298134.8A Pending CN112286668A (en) 2020-11-18 2020-11-18 Method and system for efficiently processing request data

Country Status (1)

Country Link
CN (1) CN112286668A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500391A (en) * 2021-12-28 2022-05-13 上海弘积信息科技有限公司 Method for dealing with instantaneous overlarge flow

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999377A (en) * 2012-11-30 2013-03-27 北京东方通科技股份有限公司 Service concurrent access control method and device
CN104980468A (en) * 2014-04-09 2015-10-14 深圳市腾讯计算机系统有限公司 Method, device and system for processing service request
CN105450618A (en) * 2014-09-26 2016-03-30 Tcl集团股份有限公司 Operation method and operation system of big data process through API (Application Programming Interface) server
US9639546B1 (en) * 2014-05-23 2017-05-02 Amazon Technologies, Inc. Object-backed block-based distributed storage
CN111092877A (en) * 2019-12-12 2020-05-01 北京金山云网络技术有限公司 Data processing method and device, electronic equipment and storage medium
CN111756813A (en) * 2020-05-29 2020-10-09 邢台职业技术学院 Communication method for network data



Similar Documents

Publication Publication Date Title
CN111580995B (en) Synchronous communication method and system of distributed cloud platform and Internet of things intelligent terminal based on MQTT asynchronous communication scene
CN108111931B (en) Virtual resource slice management method and device for power optical fiber access network
CN110958281B (en) Data transmission method and communication device based on Internet of things
CN107528891B (en) Websocket-based automatic clustering method and system
CN111083519A (en) VR content distribution system and method based on cloud and edge computing
CN110134534B (en) System and method for optimizing message processing for big data distributed system based on NIO
CN108055311B (en) HTTP asynchronous request method, device, server, terminal and storage medium
WO2011130940A1 (en) Multi-service integration processing method and service integration platform
CN112491675B (en) Data communication method, device, equipment and computer readable storage medium
CN113422842A (en) Distributed power utilization information data acquisition system considering network load
CN110535811B (en) Remote memory management method and system, server, client and storage medium
CN112104679B (en) Method, apparatus, device and medium for processing hypertext transfer protocol request
CN112328362A (en) Method for realizing function calculation service based on container technology
CN114338063A (en) Message queue system, service processing method, and computer-readable storage medium
CN112286668A (en) Method and system for efficiently processing request data
CN111586140A (en) Data interaction method and server
CN111124717A (en) Message delivery method, system and computer storage medium
CN108259605B (en) Data calling system and method based on multiple data centers
CN111131081B (en) Method and device for supporting high-performance one-way transmission of multiple processes
CN111294252B (en) Cluster test system
CN111885171A (en) VR model rapid cloud deployment method
CN108337285B (en) Communication system and communication method
CN116974655A (en) Capability scheduling method and capability scheduling functional entity
CN105516097B (en) Mixed architecture message system and method for message transmission based on Thrift data format
CN109639795B (en) Service management method and device based on AcitveMQ message queue

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210129