CN114327854A - Method for processing service request by coroutine and related equipment - Google Patents

Method for processing service request by coroutine and related equipment

Info

Publication number
CN114327854A
CN114327854A (application CN202111195386.2A)
Authority
CN
China
Prior art keywords
request
service
service requests
groups
request groups
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111195386.2A
Other languages
Chinese (zh)
Inventor
许勇刚
王利斌
李祉岐
冯雅平
尹琴
李宁
罗富财
尚闻博
党倩
余入丽
王秋明
杨阳
任磊
林婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Siji Network Security Beijing Co ltd
State Grid Corp of China SGCC
State Grid Information and Telecommunication Co Ltd
State Grid Gansu Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Original Assignee
State Grid Siji Network Security Beijing Co ltd
State Grid Corp of China SGCC
State Grid Information and Telecommunication Co Ltd
State Grid Gansu Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Siji Network Security Beijing Co ltd, State Grid Corp of China SGCC, State Grid Information and Telecommunication Co Ltd, State Grid Gansu Electric Power Co Ltd, State Grid Fujian Electric Power Co Ltd filed Critical State Grid Siji Network Security Beijing Co ltd
Priority to CN202111195386.2A priority Critical patent/CN114327854A/en
Publication of CN114327854A publication Critical patent/CN114327854A/en
Pending legal-status Critical Current

Landscapes

  • Computer And Data Communications (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present disclosure provides a method and related device for processing service requests by using coroutines, including: acquiring the utilization rate of computing resources of a server; determining the number N of concurrent coroutines according to the utilization rate, wherein N is a positive integer; dividing a plurality of service requests to be processed into a plurality of request groups, so that the number of request groups is less than or equal to N; for each request group of the plurality of request groups, putting all service requests of the plurality of request groups into a service channel; starting N coroutines to monitor the service channel; and, in response to monitoring that service data is input into the service channel, concurrently processing the service requests of the plurality of request groups with the N coroutines. By grouping the service requests to be processed, creating coroutines according to the number of groups, and processing the groups with multiple coroutines simultaneously, the method solves the prior-art problem of server crashes caused by creating too many coroutines.

Description

Method for processing service request by coroutine and related equipment
Technical Field
The present disclosure relates to the field of computer service processing technologies, and in particular, to a method and a related device for processing a service request by using a coroutine.
Background
In the prior art, when service requests are processed, a goroutine coroutine is usually created for each service request, without considering the CPU capacity and memory size of the processing device. When too many goroutine coroutines are created, the resource utilization of the server system keeps rising until the server finally crashes. Moreover, when many coroutines process service requests simultaneously, the pressure on the database increases, the number of connections grows too large, and connections fail, so that the processing of service requests fails.
Disclosure of Invention
In view of the above, an object of the present disclosure is to provide a method and related device for processing a service request by using a coroutine.
Based on the above object, the present disclosure provides a method for processing a service request by using a coroutine, including:
acquiring the utilization rate of computing resources of a server;
determining the number N of concurrent coroutines according to the utilization rate, wherein N is a positive integer;
dividing a plurality of service requests to be processed into a plurality of request groups, so that the number of the request groups is less than or equal to N;
for each request group of the plurality of request groups,
putting all service requests of the plurality of request groups into a service channel;
starting N coroutines to monitor the service channel;
in response to monitoring that service data is input into the service channel, the N coroutines concurrently process the service requests of the plurality of request groups in the service channel.
Optionally, determining the number N of concurrent coroutines according to the usage rate includes:
determining N according to the utilization rate, the throughput of a single coroutine, and the amount of service data to be processed,
wherein the throughput of a single coroutine and the amount of service data to be processed are obtained by simulating the execution of one coroutine in the memory of the server.
Optionally, obtaining the usage rate of the computing resource of the server includes:
and acquiring the utilization rate of the computing resource through a built-in function of the server.
Optionally, the computing resource includes a central processing unit CPU and a memory;
obtaining the usage of the computing resource by a built-in function of the server comprises:
obtaining the utilization rate of the CPU through the cpu.Percent function;
and obtaining the utilization rate of the memory through the mem.VirtualMemory function.
Optionally, the coroutine includes a goroutine.
Based on the same inventive concept, one or more embodiments of the present disclosure further provide an apparatus for processing a service request by using a coroutine, where the apparatus includes:
an acquisition module configured to acquire a usage rate of a computing resource of a server;
a determining module configured to determine a number N of concurrent coroutines according to the usage rate, wherein N is a positive integer;
a processing module configured to divide a plurality of service requests to be processed into a plurality of request groups, so that the number of request groups is less than or equal to N;
for each request group of the plurality of request groups,
putting all service requests of the plurality of request groups into a service channel;
starting N coroutines to monitor the service channel;
in response to monitoring that service data is input into the service channel, the N coroutines concurrently process the service requests of the plurality of request groups in the service channel.
Optionally, the processing module includes:
determining the size of each request group according to the plurality of service requests and the utilization rate;
and grouping the plurality of service requests according to the size of the request group to obtain the number of request groups.
Optionally, the obtaining module includes:
and acquiring the utilization rate of the computing resource through a built-in function of the server.
Based on the same inventive concept, one or more embodiments of the present disclosure also provide an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method as described in any one of the above items when executing the program.
Based on the same inventive concept, one or more embodiments of the present disclosure also provide a non-transitory computer-readable storage medium, characterized in that the non-transitory computer-readable storage medium stores computer instructions for causing a computer to execute the method as described in any one of the above.
As can be seen from the foregoing, in the method, apparatus, electronic device, and storage medium for processing service requests by using coroutines provided in one or more embodiments of the present disclosure, the service requests to be processed are grouped into a plurality of request groups, a corresponding number of coroutines is created according to the number of request groups, and the coroutines then process the service requests of the request groups concurrently. This effectively reduces the number of coroutines created and keeps it within the tolerable range of the processing device, ensures the quality with which the server and the database process data and service requests, facilitates normal operation of the device, and improves processing efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the present disclosure or related technologies, the drawings needed to be used in the description of the embodiments or related technologies are briefly introduced below, and it is obvious that the drawings in the following description are only embodiments of the present disclosure, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart of a method for processing a service request by a coroutine according to an embodiment of the disclosure;
fig. 2 is a schematic structural diagram of an apparatus for processing a service request by using a coroutine according to an embodiment of the disclosure;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
It is to be noted that technical terms or scientific terms used in the embodiments of the present disclosure should have a general meaning as understood by those having ordinary skill in the art to which the present disclosure belongs, unless otherwise defined. The use of "first," "second," and similar terms in the embodiments of the disclosure is not intended to indicate any order, quantity, or importance, but rather to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
As described in the background section, in the prior art a coroutine is created for each service request, and when all of these coroutines process service requests simultaneously, they put great stress on the database and the server, affect the connections between the database and the server, and finally cause the server to crash, so that service requests can no longer be processed. The goroutine is the most basic execution unit in the Go concurrency model. All goroutines run concurrently in the same address space and are multiplexed onto operating-system threads: if one thread blocks, for example while waiting for input/output, other goroutines continue running on other threads. At the same time, a goroutine is lightweight; the cost of creating one is almost entirely the allocation of its stack space, and the stack starts out very small, growing and shrinking through heap allocation and release only as needed. In the prior art, some developers habitually open one goroutine per request, believing that since the goroutines all work on a task concurrently, efficiency will be high. But this causes the resource utilization of the server system to keep rising until the program locks up.
In view of the above situation in the prior art, the embodiments of the present disclosure determine the utilization rate of computing resources before creating coroutines, use the utilization rate to estimate the maximum number of coroutines that can run concurrently under the current configuration, and then group the service requests so that the number of request groups is less than or equal to that maximum. Coroutines are created according to the number of request groups, and the service requests in each request group are then processed one at a time. Creating coroutines according to the number of groups avoids creating and starting unnecessary coroutines, reduces the waste of resources, and keeps coroutine creation within a controllable range, thereby reducing the pressure on the server and on the connections between the database and the server.
One or more embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, one or more embodiments of the present disclosure provide a method for processing a service request using a coroutine, including:
s101, obtaining the utilization rate of computing resources of the server.
In some embodiments, built-in functions of the server are used to obtain the usage of computing resources. The computing resources include parameters of the computer's built-in hardware; specifically, the hardware may be a central processing unit (CPU) and memory, or other hardware related to data processing. The number of coroutines that can be started is determined according to the utilization rate of these resources, so that the number of coroutines stays within the range the computer hardware can bear and pressure on the server is avoided.
In some embodiments, the current utilization rate of the CPU and of the memory may be obtained through built-in functions of the server, from which the current idle rate is derived. From the obtained utilization rate, the maximum number of service requests the computer can process under the current CPU and memory conditions can be calculated. For example, if the current computer has a 1-core CPU and 2 GB of memory, the server may concurrently start 10 coroutines and process 1000 pieces of data, dividing the 1000 pieces of data into 10 groups of 100 pieces each. The coroutine count of 10 is the maximum range the computer hardware can bear in this situation.
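The disclosure does not give an exact formula for deriving N from the utilization rate. A minimal sketch, assuming N scales with the lower of the CPU and memory idle rates against a hypothetical baseline coroutine count (the `maxCoroutines` function and its parameters are illustrative, not from the original):

```go
package main

import (
	"fmt"
	"math"
)

// maxCoroutines estimates the number N of concurrent coroutines from the
// CPU and memory idle rates (as percentages) and a baseline count the
// hardware could sustain when fully idle. The scaling rule is an assumption
// for illustration; the disclosure only states that N is derived from usage.
func maxCoroutines(cpuIdle, memIdle float64, baseline int) int {
	idle := math.Min(cpuIdle, memIdle) / 100 // the scarcer resource dominates
	n := int(float64(baseline) * idle)
	if n < 1 {
		n = 1 // always allow at least one coroutine
	}
	return n
}

func main() {
	// 50% CPU idle, 80% memory idle, baseline of 20 coroutines -> N = 10.
	fmt.Println(maxCoroutines(50, 80, 20))
}
```

With this rule, the worked example above (a small machine that can bear 10 coroutines) corresponds to a half-idle CPU against a baseline of 20.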
In some embodiments, the data processing capability of the channel is further detected and evaluated, and when the processing capability of the channel meets the requirement, data is put into the channel so that the coroutine can obtain the service request when the service is concurrently processed, so as to avoid that the channel is congested due to putting of too much data or the service request, and normal operation of the channel and the computer is affected.
And S102, determining the number N of concurrent coroutines according to the utilization rate, wherein N is a positive integer.
In some embodiments, N is determined based on usage and amount of pending traffic data.
Furthermore, the utilization rate of the CPU can be obtained through the cpu.Percent function, and the utilization rate of the memory through the mem.VirtualMemory function; the specific code is as follows:
// The cpu and mem packages are from the third-party gopsutil library
// (github.com/shirou/gopsutil), which provides cpu.Percent and mem.VirtualMemory.
import (
	"time"

	"github.com/shirou/gopsutil/cpu"
	"github.com/shirou/gopsutil/mem"
)

// GetCpuPercent returns the CPU idle rate as a percentage.
func GetCpuPercent() float64 {
	percent, _ := cpu.Percent(time.Second, false)
	return 100 - percent[0]
}

// GetMemPercent returns the memory idle rate as a percentage.
func GetMemPercent() float64 {
	memInfo, _ := mem.VirtualMemory()
	return 100 - memInfo.UsedPercent
}
S103, dividing a plurality of service requests to be processed into a plurality of request groups, and enabling the number of the request groups to be smaller than or equal to N.
In some embodiments, the size of each request group is calculated from the utilization rate of the computing resources and the number of service requests, and the service requests are then grouped by an array-partition function according to that size, yielding a two-dimensional array whose length is the number of request groups. For example, partitioning the array [1,2,3,4,5,6,7,8,9] with a request-group size of 2 yields [[1,2],[3,4],[5,6],[7,8],[9]], so the number of request groups is 5. A corresponding number of coroutines is then created based on the number of sub-arrays (i.e., request groups).
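The partition step above can be sketched in Go as follows (the `chunk` helper name is illustrative; the original does not name its array-partition function):

```go
package main

import "fmt"

// chunk splits xs into consecutive groups of at most size elements,
// mirroring the array-partition step: [1..9] with size 2 -> 5 groups,
// the last of which holds the single leftover element.
func chunk(xs []int, size int) [][]int {
	var groups [][]int
	for size < len(xs) {
		groups = append(groups, xs[:size])
		xs = xs[size:]
	}
	return append(groups, xs)
}

func main() {
	fmt.Println(chunk([]int{1, 2, 3, 4, 5, 6, 7, 8, 9}, 2))
	// -> [[1 2] [3 4] [5 6] [7 8] [9]]
}
```

The length of the returned two-dimensional slice is the number of request groups, which is what the coroutine count is created from.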
Further, after grouping is completed, the service requests are put into the channel in the form of request groups. Each coroutine processes one service request from a group at a time, and when all service requests in a group have been processed, the synchronization counter is decremented by 1, indicating that the current group is fully processed. For example, suppose there are currently 95 service requests divided into 10 request groups, and the built-in synchronization counter sync is set to 10. Request groups 1 through 9 each contain 10 service requests, while the 10th group contains 5, so the coroutine handling the 10th group finishes first and decrements the counter once. After all 10 request groups have been processed, the counter reaches 0, and waiting on sync is used to determine that all service requests have been processed.
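The per-group countdown described above matches Go's sync.WaitGroup: the counter starts at the number of groups, each coroutine performs the "-1" via Done when its group is finished, and Wait blocks until the counter reaches 0. A minimal sketch under that assumption (the 95-request split into 10 groups follows the example in the text; `processGroups` is an illustrative name):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// processGroups handles each request group in its own coroutine and returns
// the total number of requests processed once every group has finished.
func processGroups(groups [][]int) int64 {
	var processed atomic.Int64
	var wg sync.WaitGroup
	wg.Add(len(groups)) // the synchronization counter starts at the group count
	for _, g := range groups {
		go func(g []int) {
			defer wg.Done() // the "-1" once the whole group is processed
			for range g {
				processed.Add(1) // stand-in for handling one service request
			}
		}(g)
	}
	wg.Wait() // returns only after the counter reaches 0
	return processed.Load()
}

func main() {
	// 95 requests: 9 groups of 10 plus one group of 5, as in the text.
	var groups [][]int
	id := 0
	for i := 0; i < 10; i++ {
		size := 10
		if i == 9 {
			size = 5
		}
		g := make([]int, 0, size)
		for j := 0; j < size; j++ {
			g = append(g, id)
			id++
		}
		groups = append(groups, g)
	}
	fmt.Println(processGroups(groups)) // -> 95
}
```

The smaller 10th group typically finishes first, exactly as in the example, but Wait makes the order irrelevant to correctness.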
In some embodiments, after the coroutine completes processing all service requests in one request group, service requests in other unprocessed request groups are acquired, and the acquired service requests are processed, so as to accelerate the processing speed of the service requests.
In some embodiments, for each request group of the plurality of request groups, all service requests of the plurality of request groups are put into a service channel; N coroutines are started to monitor the service channel; and in response to monitoring that service data is input into the service channel, the N coroutines concurrently process the service requests of the plurality of request groups in the service channel.
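Putting the groups into a channel and having N coroutines monitor it corresponds to the standard Go worker-pool pattern; a minimal sketch under that assumption (the `serve` function and its names are illustrative, not from the original):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// serve places every request group into a service channel, then starts n
// coroutines that all listen on the channel and concurrently drain it.
// It returns the total number of service requests handled.
func serve(groups [][]string, n int) int64 {
	ch := make(chan []string, len(groups)) // the service channel
	for _, g := range groups {
		ch <- g // put each request group into the channel
	}
	close(ch) // no more groups will arrive

	var handled atomic.Int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ { // start N coroutines monitoring the channel
		wg.Add(1)
		go func() {
			defer wg.Done()
			for g := range ch { // wakes up whenever a group is available
				for range g {
					handled.Add(1) // stand-in for processing one request
				}
			}
		}()
	}
	wg.Wait()
	return handled.Load()
}

func main() {
	groups := [][]string{{"a", "b"}, {"c"}, {"d", "e", "f"}}
	fmt.Println(serve(groups, 2)) // -> 6
}
```

Ranging over the closed channel lets an idle coroutine pick up the next unprocessed group, which is also how the speed-up described in the next paragraph falls out naturally.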
In some embodiments, the created coroutine is a goroutine, the user-mode thread provided by the Go language, also called a coroutine.
It can be seen that, in this method of processing service requests with coroutines, the maximum number N of concurrent coroutines is first determined according to the utilization rate that the computer's built-in hardware can sustain; the service requests are then grouped according to the CPU utilization rate, the memory utilization rate, and the number of service requests, yielding a plurality of request groups; coroutines are created according to the number of request groups; and a channel is created, after which the request groups are placed in the channel to wait for the coroutines to process them. Each coroutine takes one service request from a request group at a time, fetches the next one when the current one is finished, and processes the requests of the group in order until all are processed. Because coroutines are created according to the number of groups, the number of coroutines created is effectively reduced, the quality with which the server and the database process data and service requests is ensured, normal operation of the device is facilitated, and processing efficiency is improved.
It should be noted that the method of the embodiments of the present disclosure may be executed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene and completed by the mutual cooperation of a plurality of devices. In such a distributed scenario, one of the devices may only perform one or more steps of the method of the embodiments of the present disclosure, and the devices may interact with each other to complete the method.
It should be noted that the above describes some embodiments of the disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Based on the same inventive concept, corresponding to the method of any embodiment, the disclosure also provides a device for processing the service request by using the coroutine.
Referring to fig. 2, the apparatus for processing a service request by using a coroutine includes:
an acquisition module configured to acquire a usage rate of a computing resource of a server;
a determining module configured to determine a number N of concurrent coroutines according to the usage rate, wherein N is a positive integer;
a processing module configured to divide a plurality of service requests to be processed into a plurality of request groups, so that the number of request groups is less than or equal to N;
for each request group of the plurality of request groups,
putting all service requests of the plurality of request groups into a service channel;
starting N coroutines to monitor the service channel;
in response to monitoring that service data is input into the service channel, the N coroutines concurrently process the service requests of the plurality of request groups in the service channel.
In some embodiments, the processing module comprises:
determining the size of each request group according to the plurality of service requests and the utilization rate;
and grouping the plurality of service requests according to the size of the request group to obtain the number of request groups.
In some embodiments, the obtaining module comprises:
obtaining the utilization rate of the computing resource through a built-in function of the server.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, the functionality of the various modules may be implemented in the same one or more software and/or hardware implementations of the present disclosure.
The apparatus in the foregoing embodiment is used to implement the method for processing a service request by using a coroutine in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same inventive concept, corresponding to the method of any embodiment described above, the present disclosure further provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement the method for processing a service request by using a coroutine according to any embodiment described above.
Fig. 3 is a schematic diagram illustrating a more specific hardware structure of an electronic device according to this embodiment, where the electronic device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein the processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 are communicatively coupled to each other within the device via bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 1020 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 1020 and called to be executed by the processor 1010.
The input/output interface 1030 is used for connecting an input/output module to input and output information. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output devices may include a display, a speaker, a vibrator, an indicator light, etc.
The communication interface 1040 is used for connecting a communication module (not shown in the drawings) to implement communication interaction between the present apparatus and other apparatuses. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 1050 includes a path that transfers information between various components of the device, such as processor 1010, memory 1020, input/output interface 1030, and communication interface 1040.
It should be noted that although the above-mentioned device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in a specific implementation, the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above-described apparatus may also include only those components necessary to implement the embodiments of the present description, and not necessarily all of the components shown in the figures.
The electronic device in the foregoing embodiment is used to implement the method for processing a service request by using a coroutine in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same inventive concept, corresponding to any of the above-mentioned embodiment methods, the present disclosure also provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the method for processing a service request by a coroutine as described in any of the above embodiments.
Computer-readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage, or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
The computer instructions stored in the storage medium of the foregoing embodiment are used to enable the computer to execute the method for processing a service request by using a coroutine according to any of the foregoing embodiments, and have the beneficial effects of corresponding method embodiments, which are not described herein again.
those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to these examples; within the idea of the present disclosure, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the present disclosure as described above, which are not provided in detail for the sake of brevity.
In addition, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown in the provided figures for simplicity of illustration and discussion, and so as not to obscure the embodiments of the disclosure. Furthermore, devices may be shown in block diagram form in order to avoid obscuring embodiments of the present disclosure, and this also takes into account the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the embodiments of the present disclosure are to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
The disclosed embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalents, improvements, and the like that may be made within the spirit and principles of the embodiments of the disclosure are intended to be included within the scope of the disclosure.

Claims (10)

1. A method for processing service requests by using coroutines, the method comprising:
acquiring the utilization rate of computing resources of a server;
determining the number N of concurrent coroutines according to the utilization rate, wherein N is a positive integer;
dividing a plurality of service requests to be processed into a plurality of request groups, so that the number of the request groups is less than or equal to N;
putting each request group of the plurality of request groups into a service channel;
starting N coroutines to monitor the service channel; and
in response to monitoring that service data enters the service channel, concurrently processing, by the N coroutines, the service requests of the plurality of request groups in the service channel.
2. The method of claim 1, wherein the dividing the plurality of service requests to be processed into a plurality of request groups such that the number of request groups is less than or equal to N comprises:
determining a request group size according to the plurality of service requests and the utilization rate; and
grouping the plurality of service requests according to the request group size to obtain the plurality of request groups.
3. The method of claim 1, wherein the obtaining usage of computing resources of a server comprises:
acquiring the utilization rate of the computing resource through a built-in function of the server.
4. The method of claim 3, wherein,
the computing resources comprise a Central Processing Unit (CPU) and a memory;
the obtaining the utilization rate of the computing resource through a built-in function of the server comprises:
obtaining the utilization rate of the CPU through a cpu.percent function; and
acquiring the utilization rate of the memory through a mem. function.
5. The method according to any one of claims 1 to 4, wherein the coroutines comprise goroutines.
6. An apparatus for processing service requests by coroutines, comprising:
an acquisition module configured to acquire a usage rate of a computing resource of a server;
a determining module configured to determine a number N of concurrent coroutines according to the usage rate, wherein N is a positive integer;
a processing module configured to divide a plurality of service requests to be processed into a plurality of request groups, so that the number of the request groups is less than or equal to N;
put each request group of the plurality of request groups into a service channel;
start N coroutines to monitor the service channel; and
in response to monitoring that service data enters the service channel, concurrently process, by the N coroutines, the service requests of the plurality of request groups in the service channel.
7. The apparatus of claim 6, wherein the processing module is further configured to:
determine a request group size according to the plurality of service requests and the utilization rate; and
group the plurality of service requests according to the request group size to obtain the plurality of request groups.
8. The apparatus of claim 6, wherein the acquisition module is further configured to:
acquire the utilization rate of the computing resource through a built-in function of the server.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1 to 5 when executing the program.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 5.
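The flow of claims 1 and 5 — derive a goroutine count N from resource usage, split the pending service requests into at most N groups, put the groups into a service channel, and let N goroutines drain it concurrently — can be sketched in Go. This is an illustrative sketch only, not the patented implementation: the function names (`concurrencyFromUsage`, `splitIntoGroups`), the linear mapping from utilization rate to N, and the use of strings as stand-in service requests are all assumptions; the patent itself leaves the mapping and request types unspecified, and obtains the utilization rate via server built-in functions rather than taking it as a parameter.

```go
package main

import (
	"fmt"
	"sync"
)

// concurrencyFromUsage maps a CPU utilization rate (0-100) to a goroutine
// count N. The patent does not specify the mapping; this linear rule that
// reserves more goroutines when the server is idle is an assumption.
func concurrencyFromUsage(usagePercent float64, maxWorkers int) int {
	n := int(float64(maxWorkers) * (100 - usagePercent) / 100)
	if n < 1 {
		n = 1
	}
	return n
}

// splitIntoGroups divides the pending requests into groups; ceiling division
// on the group size guarantees the number of groups is <= n, as claim 1 requires.
func splitIntoGroups(requests []string, n int) [][]string {
	size := (len(requests) + n - 1) / n
	var groups [][]string
	for start := 0; start < len(requests); start += size {
		end := start + size
		if end > len(requests) {
			end = len(requests)
		}
		groups = append(groups, requests[start:end])
	}
	return groups
}

func main() {
	requests := []string{"r1", "r2", "r3", "r4", "r5", "r6", "r7"}
	n := concurrencyFromUsage(40.0, 5) // e.g. 40% CPU utilization -> N = 3
	groups := splitIntoGroups(requests, n)

	ch := make(chan []string, len(groups)) // the "service channel"
	var wg sync.WaitGroup

	// Start N goroutines that monitor the service channel; each processes
	// whole request groups as they arrive (claim 1, final two steps).
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for group := range ch {
				for _, req := range group {
					fmt.Printf("worker %d handled %s\n", id, req)
				}
			}
		}(i)
	}

	// Put all request groups into the service channel, then signal completion.
	for _, g := range groups {
		ch <- g
	}
	close(ch)
	wg.Wait()
}
```

Closing the channel after all groups are sent lets the `for range ch` loops terminate cleanly, and the buffered channel ensures the producer never blocks; which worker handles which group is nondeterministic, as expected for concurrent consumers.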
CN202111195386.2A 2021-10-13 2021-10-13 Method for processing service request by coroutine and related equipment Pending CN114327854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111195386.2A CN114327854A (en) 2021-10-13 2021-10-13 Method for processing service request by coroutine and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111195386.2A CN114327854A (en) 2021-10-13 2021-10-13 Method for processing service request by coroutine and related equipment

Publications (1)

Publication Number Publication Date
CN114327854A (en) 2022-04-12

Family

ID=81045139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111195386.2A Pending CN114327854A (en) 2021-10-13 2021-10-13 Method for processing service request by coroutine and related equipment

Country Status (1)

Country Link
CN (1) CN114327854A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115987652A (en) * 2022-12-27 2023-04-18 北京深盾科技股份有限公司 Account management method, system, equipment and computer storage medium
CN115987652B (en) * 2022-12-27 2023-11-03 北京深盾科技股份有限公司 Account management method, system, equipment and computer storage medium

Similar Documents

Publication Publication Date Title
JP4456490B2 (en) DMA equipment
WO2015130262A1 (en) Multiple pools in a multi-core system
CN110300959B (en) Method, system, device, apparatus and medium for dynamic runtime task management
CN107479981B (en) Processing method and device for realizing synchronous call based on asynchronous call
CN112491581A (en) Service performance monitoring and management method and device
CN106897299A (en) A kind of data bank access method and device
CN111580974B (en) GPU instance allocation method, device, electronic equipment and computer readable medium
CN116541142A (en) Task scheduling method, device, equipment, storage medium and computer program product
US9753769B2 (en) Apparatus and method for sharing function logic between functional units, and reconfigurable processor thereof
WO2016202153A1 (en) Gpu resource allocation method and system
CN114327854A (en) Method for processing service request by coroutine and related equipment
CN116011562A (en) Operator processing method, operator processing device, electronic device and readable storage medium
CN114741389A (en) Model parameter adjusting method and device, electronic equipment and storage medium
CN112506992B (en) Fuzzy query method and device for Kafka data, electronic equipment and storage medium
CN111813541B (en) Task scheduling method, device, medium and equipment
CN107634978B (en) Resource scheduling method and device
US11301255B2 (en) Method, apparatus, device, and storage medium for performing processing task
CN106293670B (en) Event processing method and device and server
CN111459879A (en) Data processing method and system on chip
CN114942833A (en) Method and related device for dynamically scheduling timing task resources
CN109791534B (en) switchable topology machine
CN115754413A (en) Oscilloscope and data processing method
CN115269331A (en) Service topology monitoring method facing micro service group and related equipment
CN109344630B (en) Block generation method, device, equipment and storage medium
CN111984510B (en) Performance test method and device for dispatching system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination