CN111666154A - Service processing method, device and computer readable storage medium - Google Patents

Service processing method, device and computer readable storage medium

Info

Publication number: CN111666154A
Application number: CN202010483518.0A
Authority: CN (China)
Prior art keywords: service, service requests, service request, server, processing
Priority date / Filing date: 2020-06-01
Publication date: 2020-09-15
Legal status: Withdrawn (application withdrawn after publication)
Other languages: Chinese (zh)
Inventor: 吴巍
Applicant / Current Assignee: Shenzhen Rongyimai Information Technology Co ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005: Allocation of resources to service a request
    • G06F9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/5038: Allocation of resources to service a request, the resource being a machine, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/004: Error avoidance
    • G06F2209/00: Indexing scheme relating to G06F9/00
    • G06F2209/50: Indexing scheme relating to G06F9/50
    • G06F2209/5021: Priority

Abstract

The embodiments of the present application provide a service processing method, a service processing apparatus, and a computer-readable storage medium. The method includes: receiving a first service request; and caching the first service request when a first number is greater than or equal to a first threshold, where the first number is the number of service requests that the server received before the first service request and has not yet processed. The embodiments of the present application can prevent a server from crashing under a large number of service requests.

Description

Service processing method, device and computer readable storage medium
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to a service processing method, a service processing apparatus, and a computer-readable storage medium.
Background
After receiving a service request, a server processes the service logic corresponding to that request. Because a server's processing capacity is limited, it is prone to crashing when the number of received service requests is large. For example, when many users rush to buy train tickets or other goods at the same time, the server may receive 10,000 service requests per second while being able to process only 2,000 service requests at a time, and it crashes because too many requests cannot be processed in time. How to prevent a server from crashing under a large number of service requests is therefore a technical problem that urgently needs to be solved.
Disclosure of Invention
The embodiments of the present application provide a service processing method, a service processing apparatus, and a computer-readable storage medium, which are used to prevent a server from crashing under a large number of service requests.
In a first aspect, an embodiment of the present application provides a service processing method applied to a server, including:
receiving a first service request;
and caching the first service request when a first number is greater than or equal to a first threshold, where the first number is the number of service requests that the server received before the first service request and has not yet processed.
In one possible implementation, the method further includes: processing the first service request if the first number is less than the first threshold.
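Purely for illustration, the threshold check described above can be sketched in Python as follows. The names (PENDING_LIMIT, pending_count, request_cache, process) are hypothetical and do not come from the application; this is a minimal sketch of the gating logic, not the claimed implementation.

    # Hypothetical sketch of the threshold gate described in the first aspect.
    PENDING_LIMIT = 2000          # "first threshold": most requests the server can handle

    pending_count = 0             # "first number": received but not yet processed
    request_cache = []            # overflow buffer for cached requests

    def on_request(request):
        """Cache the request if the server is already at capacity, else process it."""
        global pending_count
        if pending_count >= PENDING_LIMIT:
            request_cache.append(request)   # cache the first service request
        else:
            pending_count += 1
            process(request)                # handle the corresponding service logic
            pending_count -= 1

    def process(request):
        # Placeholder for the service logic corresponding to the request.
        pass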
In a possible implementation, when the first service request includes a plurality of service requests, processing the first service request includes:
processing the first service request when the sum of the first number and a second number is less than or equal to the first threshold, where the second number is the number of service requests included in the first service request;
and selecting a third number of service requests from the plurality of service requests for processing when the sum of the first number and the second number is greater than the first threshold, where the third number is the difference between the first threshold and the first number.
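A concrete illustration of the arithmetic above, with numbers invented only for the example (they do not appear in the application):

    # Hypothetical numbers, chosen only to illustrate the split.
    first_threshold = 2000   # maximum the server can process
    first_number = 1500      # already received and not yet processed
    second_number = 800      # requests contained in the incoming first service request

    if first_number + second_number <= first_threshold:
        to_process = second_number                     # process the whole batch
    else:
        to_process = first_threshold - first_number    # "third number" = 2000 - 1500 = 500

    print(to_process)  # 500 of the 800 requests are selected for processing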
In a possible implementation, selecting a third number of service requests from the plurality of service requests for processing includes: selecting a third number of service requests from the plurality of service requests for processing according to their priorities.
In one possible implementation, the method further includes: determining a priority of the plurality of service requests.
In one possible implementation, determining the priorities of the plurality of service requests includes:
determining the priorities of the plurality of service requests according to one or more of: the priorities of the service requests themselves, the priorities of the service types corresponding to the service requests, the priorities of the user types corresponding to the service requests, the sizes of the services corresponding to the service requests, and the processing time required by the service requests.
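One way such a priority could be computed is a weighted score over the listed factors. The weights, field names, and the priority_score function below are assumptions made for illustration only; the application does not specify concrete values or a scoring formula.

    # Hypothetical weighted priority score over the factors listed above.
    WEIGHTS = {
        "request_priority": 0.35,   # priority carried by the request itself
        "service_type":     0.25,   # e.g. urgency of the service type
        "user_type":        0.20,   # e.g. VIP user vs. ordinary user
        "service_size":     0.10,   # amount of data the service carries (smaller scores higher)
        "processing_time":  0.10,   # estimated processing time (shorter scores higher)
    }

    def priority_score(req: dict) -> float:
        """Combine normalized factor scores (each in [0, 1]) into one priority value."""
        return sum(WEIGHTS[name] * req.get(name, 0.0) for name in WEIGHTS)

    # Example: a VIP user's urgent, small, quick request scores high.
    example = {"request_priority": 0.9, "service_type": 0.8, "user_type": 1.0,
               "service_size": 0.7, "processing_time": 0.9}
    print(round(priority_score(example), 3))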
In a possible implementation, after the first service request is processed, the method further includes: feeding back a processing result of the first service request.
In a second aspect, an embodiment of the present application provides a service processing apparatus, where the apparatus is disposed in a server, and includes:
a receiving unit, configured to receive a first service request;
a caching unit, configured to cache the first service request when a first number is greater than or equal to a first threshold, where the first number is the number of service requests that the server received before the first service request and has not yet processed.
In one possible implementation, the apparatus further includes: a processing unit, configured to process the first service request when the first number is smaller than the first threshold.
In a possible implementation, when the first service request includes a plurality of service requests, the processing unit is specifically configured to:
process the first service request when the sum of the first number and a second number is less than or equal to the first threshold, where the second number is the number of service requests included in the first service request;
and select a third number of service requests from the plurality of service requests for processing when the sum of the first number and the second number is greater than the first threshold, where the third number is the difference between the first threshold and the first number.
In a possible implementation, the processing unit selecting a third number of service requests from the plurality of service requests for processing includes:
selecting a third number of service requests from the plurality of service requests for processing according to their priorities.
In one possible implementation, the apparatus further includes: a determining unit, configured to determine priorities of the plurality of service requests.
In a possible implementation, the determining unit is specifically configured to:
determine the priorities of the plurality of service requests according to one or more of: the priorities of the service requests themselves, the priorities of the service types corresponding to the service requests, the priorities of the user types corresponding to the service requests, the sizes of the services corresponding to the service requests, and the processing time required by the service requests.
In one possible implementation, the apparatus further includes: a feedback unit, configured to feed back a processing result of the first service request after the processing unit processes the first service request.
In a third aspect, an embodiment of the present application provides a service processing apparatus that includes a processor and a memory coupled to the processor, where the memory is configured to store computer instructions, and the processor, by executing the computer instructions stored in the memory, causes the service processing apparatus to implement the service processing method provided in the first aspect or any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program or computer instructions which, when executed by a computer device, cause the computer device to implement the service processing method provided in the first aspect or any implementation of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to implement the service processing method provided in the first aspect or any implementation of the first aspect.
In the embodiments of the present application, a first service request is received, and the first service request is cached when a first number is greater than or equal to a first threshold, where the first number is the number of service requests that the server received before the first service request and has not yet processed. Because received service requests can be cached in this way, the number of service requests being processed by the server never exceeds the maximum the server can bear, which prevents the server from crashing.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. The drawings in the following description show only some embodiments of the present application; other drawings can be derived from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a schematic diagram of a network architecture provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a service processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another service processing method provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of another service processing method provided in the embodiment of the present application;
fig. 5 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another service processing apparatus according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
The terms "first," "second," "third," "fourth," and so on in the description, the claims, and the accompanying drawings of this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the steps or elements listed, but may also include other steps or elements that are not listed or that are inherent to such a process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, which may be hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device itself can be components. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute on various computers having various data structures stored thereon. The components may communicate by way of local and/or remote processes based on a signal having one or more data packets (for example, data from one component interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet, communicating with other systems by way of the signal).
The embodiments of the present application provide a service processing method, a service processing apparatus, and a computer-readable storage medium, which are used to prevent a server from crashing under a large number of service requests.
In order to better understand the service processing method, the service processing apparatus, and the computer-readable storage medium provided in the embodiments of the present application, the network architecture of the embodiments is described first. Referring to fig. 1, fig. 1 is a schematic diagram of a network architecture according to an embodiment of the present application. As shown in fig. 1, the network architecture may include a user equipment 101 and a server 102. The user equipment 101 and the server 102 may communicate with each other through a network, which may be any wired or wireless network, including but not limited to the Internet, a wide area network, a metropolitan area network, a local area network, a Virtual Private Network (VPN), a wireless communication network, and the like. The user equipment 101 may be, for example, a smart phone, a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or another user device.
The server 102 may receive a first service request from the user equipment 101. When a first number is greater than or equal to a first threshold, the server 102 caches the first service request; when the first number is less than the first threshold, the server 102 processes the first service request. The first number is the number of service requests that the server 102 received before the first service request and has not yet processed. If the first service request includes a plurality of service requests, the server 102 processes the first service request when the sum of the first number and a second number is less than or equal to the first threshold; when the sum of the first number and the second number is greater than the first threshold, the server 102 selects a third number of service requests from the plurality of service requests for processing. The second number is the number of service requests included in the first service request, and the third number is the difference between the first threshold and the first number. After processing the first service request, the server 102 may feed back a processing result of the first service request.
It should be understood that the network architecture in fig. 1 is only an example; the service processing network architecture in the embodiments of the present application includes, but is not limited to, the architecture described above.
Based on the network architecture shown in fig. 1, referring to fig. 2, fig. 2 is a schematic flowchart of a service processing method according to an embodiment of the present application. The service processing method is described from the perspective of the server 102. As shown in fig. 2, the service processing method may include the following steps.
201. The server receives a first service request.
The server may receive a first service request from a user device, where the user device may include a smart phone, a tablet computer, a palmtop computer, an MID, and the like, and the first service request may be from a client in the user device, or may be from a web page or a browser in the user device. The first service request may be one service request from one user equipment, or a plurality of service requests from a plurality of user equipments.
202. The server caches the first service request in case the first number is greater than or equal to a first threshold.
After the server receives the first service request, it may cache the first service request if the first number is greater than or equal to a first threshold. The first threshold is the maximum number of service requests that the server can process, and the first number is the number of service requests that the server received before the first service request and has not yet processed. The server may cache the first service request in its own database, or send the first service request to a cache server to be cached there, where the cache server may be another server different from this server; this is not limited in the embodiments of the present application.
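As an illustration of the two caching destinations mentioned above, the Python sketch below buffers overflow requests either in a local in-memory store or hands them to a separate cache service. The class, method, and parameter names are hypothetical; the application does not prescribe a particular cache technology.

    import collections

    class OverflowCache:
        """Hypothetical buffer for service requests that exceed the server's capacity."""

        def __init__(self, remote_cache=None):
            self._local = collections.deque()   # "own database" stand-in: local FIFO buffer
            self._remote = remote_cache         # optional client for a separate cache server

        def put(self, request):
            if self._remote is not None:
                self._remote.store(request)     # cache on a dedicated cache server
            else:
                self._local.append(request)     # cache locally on this server

        def get(self):
            """Return the oldest cached request, or None if nothing is cached."""
            if self._remote is not None:
                return self._remote.take()
            return self._local.popleft() if self._local else None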
In the service processing method described in fig. 2, the server may receive the first service request and cache it when a first number is greater than or equal to a first threshold, where the first number is the number of service requests that the server received before the first service request and has not yet processed. Therefore, when the server receives more service requests than its limited processing capacity can handle, the first service request can be cached first, which prevents the server from crashing under a large number of service requests.
Based on the network architecture shown in fig. 1, referring to fig. 3, fig. 3 is a schematic flowchart of another service processing method according to an embodiment of the present application. The service processing method is described from the perspective of the server 102. As shown in fig. 3, the service processing method may include the following steps.
301. The server receives a first service request.
Step 301 is the same as step 201, and please refer to step 201 for detailed description, which is not repeated herein.
302. The server caches the first service request in case the first number is greater than or equal to a first threshold.
Step 302 is the same as step 202, and please refer to step 202 for detailed description, which is not repeated herein.
303. The server processes the first service request in case the first number is smaller than a first threshold.
After the server receives the first service request, it may process the first service request if the first number is less than a first threshold. The first threshold is the maximum number of service requests that the server can process, and the first number is the number of service requests that the server received before the first service request and has not yet processed.
Specifically, when the first service request includes a plurality of service requests, two implementations are possible. In the first implementation, the server may process the first service request when the sum of the first number and a second number is less than or equal to the first threshold, where the second number is the number of service requests included in the first service request. In the second implementation, when the sum of the first number and the second number is greater than the first threshold, the server may select a third number of service requests from the plurality of service requests for processing, where the third number is the difference between the first threshold and the first number.
The server may select the third number of service requests from the plurality of service requests for processing according to their priorities. The priorities of the plurality of service requests may be determined before the server makes this selection; the server may also determine the priorities of the plurality of service requests included in the first service request immediately after receiving it. Specifically, the server may determine the priorities of the service requests according to one or more of: the priorities of the service requests themselves, the priorities of the service types corresponding to the service requests (for example, the urgency of the service), the priorities of the user types corresponding to the service requests (for example, user types may be divided into ordinary users and VIP users), the sizes of the services corresponding to the service requests (the amount of data the service carries), and the processing time required by the service requests. The server may also determine the priorities of the service requests as a weighted sum of these factors, that is, of the priorities of the service requests themselves, the priorities of the corresponding service types, the priorities of the corresponding user types, the sizes of the corresponding services, and the required processing time.
After selecting the third number of service requests from the plurality of service requests according to priority, the server may process the selected service requests at the same time, or process them in order of priority, handling higher-priority requests first.
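A minimal sketch of this selection step, under the assumption that some scoring function such as the hypothetical priority_score from the earlier example is available; it picks the third number of highest-priority requests and processes them in descending priority order. The function and parameter names are illustrative only.

    def select_and_process(requests, first_threshold, first_number, score, handle):
        """Pick (first_threshold - first_number) requests by priority and process them.

        score  - a priority function such as the hypothetical priority_score() above.
        handle - whatever callable performs the actual service logic.
        Returns the requests that were not selected, e.g. so the caller can cache them.
        """
        third_number = max(first_threshold - first_number, 0)
        ranked = sorted(requests, key=score, reverse=True)
        selected, remainder = ranked[:third_number], ranked[third_number:]
        for req in selected:              # process in priority order, highest first
            handle(req)
        return remainder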
304. The server feeds back the processing result of the first service request.
After the server processes the first service request, it may feed back a processing result of the first service request to the user equipment. Depending on the type of the service request, the server may feed back the processing result immediately, or feed it back at a configured feedback time or at intervals, where the interval may be, for example, 5 min, 10 min, 30 min, or another length of time.
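The immediate-versus-interval feedback policy could be realized, for example, with a simple timer; the interval values, request-type names, and function names below are placeholders for illustration, not requirements of the application.

    import threading

    FEEDBACK_INTERVALS = {"order": 0, "report": 30 * 60}   # seconds; 0 means immediate

    def schedule_feedback(request_type, send_result):
        """Send the result now, or after the interval configured for this request type."""
        delay = FEEDBACK_INTERVALS.get(request_type, 0)
        if delay == 0:
            send_result()                                   # immediate feedback
        else:
            threading.Timer(delay, send_result).start()     # delayed feedback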
In the service processing method described in fig. 3, the server may receive the first service request and cache it when the first number is greater than or equal to the first threshold; when the first number is less than the first threshold, the server may process the first service request. After processing the first service request, the server may feed back its processing result. Therefore, when the server receives more service requests than its limited processing capacity can handle, the first service request can be cached first and the cached service requests can be processed later, which prevents the server from crashing under a large number of service requests.
Referring to fig. 4 based on the network architecture shown in fig. 1, fig. 4 is a schematic flowchart of another service processing method provided in the embodiment of the present application. The service processing method is described from the perspective of the user equipment 101 and the server 102. As shown in fig. 4, the service processing method may include the following steps.
401. The server receives a first service request from the user equipment.
The user equipment may send a first service request to the server, where the user equipment may include a smart phone, a tablet computer, a palmtop computer, an MID, and other devices, and the first service request may be from a client in the user equipment, or may be from a web page or a browser in the user equipment. The first service request may be one service request from one user equipment, or a plurality of service requests from a plurality of user equipments. After the user equipment sends the first service request to the server, the server may receive the first service request from the user equipment.
402. The server caches the first service request in case the first number is greater than or equal to a first threshold.
After the server receives the first service request from the user equipment, it may cache the first service request if the first number is greater than or equal to a first threshold. The first threshold is the maximum number of service requests that the server can process, and the first number is the number of service requests that the server received before the first service request and has not yet processed. The server may cache the first service request in its own database, or send the first service request to a cache server to be cached there, where the cache server may be another server different from this server; this is not limited in the embodiments of the present application.
403. The server processes the first service request in case the first number is smaller than a first threshold.
After the server receives the first service request from the user equipment, the first service request may be processed if the first number is less than a first threshold. Step 403 is the same as step 303, and please refer to step 303 for detailed description, which is not repeated herein.
404. And the server feeds back the processing result of the first service request to the user equipment.
After the server processes the first service request, it may feed back a processing result of the first service request to the user equipment. Depending on the type of the service request, the server may feed back the processing result immediately, or feed it back at a configured feedback time or at intervals, where the interval may be, for example, 5 min, 10 min, 30 min, or another length of time. After the server feeds back the processing result of the first service request, the user equipment receives the processing result from the server and presents it to the user. The user equipment may display the processing result as text on a display screen, or play it back as speech through a voice device such as a speaker; this is not limited in the embodiments of the present application.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present application. As shown in fig. 5, the service processing apparatus may be disposed in a server, and include:
a receiving unit 501, configured to receive a first service request;
a caching unit 502, configured to cache the first service request when a first number is greater than or equal to a first threshold, where the first number is the number of service requests that the server received before the first service request and has not yet processed.
In a possible implementation manner, the service processing apparatus further includes: a processing unit 503, configured to process the first service request if the first number is smaller than the first threshold.
In a possible implementation manner, in a case that the first service request includes a plurality of service requests, the processing unit 503 is specifically configured to:
process the first service request when the sum of the first number and a second number is less than or equal to the first threshold, where the second number is the number of service requests included in the first service request;
and select a third number of service requests from the plurality of service requests for processing when the sum of the first number and the second number is greater than the first threshold, where the third number is the difference between the first threshold and the first number.
In a possible implementation, the processing unit 503 selecting a third number of service requests from the plurality of service requests for processing includes:
selecting a third number of service requests from the plurality of service requests for processing according to their priorities.
In a possible implementation manner, the service processing apparatus further includes: a determining unit 504 is configured to determine priorities of the plurality of service requests.
In a possible implementation, the determining unit 504 is specifically configured to:
determine the priorities of the plurality of service requests according to one or more of: the priorities of the service requests themselves, the priorities of the service types corresponding to the service requests, the priorities of the user types corresponding to the service requests, the sizes of the services corresponding to the service requests, and the processing time required by the service requests.
In a possible implementation manner, the service processing apparatus further includes: a feedback unit 505, configured to feed back a processing result of the first service request after the processing unit 503 processes the first service request.
The detailed descriptions of the receiving unit 501, the buffering unit 502, the processing unit 503, the determining unit 504, and the feedback unit 505 may be directly obtained by referring to the relevant descriptions in the embodiments of the service processing method shown in fig. 2, fig. 3, and fig. 4, which are not described herein again.
Referring to fig. 6, fig. 6 is a schematic structural diagram of another service processing apparatus according to an embodiment of the present application. The service processing device may be a server or a device in the server. As shown in fig. 6, the service processing apparatus may include: a memory 601, a transceiver 602, and a processor 603 coupled to the memory 601 and the transceiver 602. In addition, the service processing device may further include general components such as an antenna, which will not be described in detail herein.
The memory 601 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disk storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor via a bus, or the memory 601 may be integrated with the processor 603.
The transceiver 602 may be a communication interface, a transceiver circuit, or the like, where "communication interface" is a general term that may include one or more interfaces, for example an interface between the service processing apparatus and a terminal. The communication interface is used to communicate with other devices or communication networks, such as an Ethernet, a radio access network (RAN), a core network, or a wireless local area network (WLAN).
The processor 603 may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, transistor logic, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure provided herein. The processor 603 may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The memory 601 is used to store a computer program comprising program instructions, the processor 603 is used to execute the program instructions stored in the memory 601, and the transceiver 602 is used to communicate with other devices under the control of the processor 603. When the program instructions are executed by the processor 603, the service processing methods described above may be performed.
Optionally, the service processing apparatus may further include a bus 604, wherein the memory 601, the transceiver 602, and the processor 603 may be connected to each other through the bus 604. The bus 604 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 604 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 6, but this is not intended to represent only one bus or type of bus.
In addition to the memory 601, the transceiver 602, the processor 603 and the bus 604 shown in fig. 6, the service processing apparatus in the embodiment may also include other hardware according to the actual function of the service processing apparatus, which is not described again.
The embodiment of the present application further provides a storage medium, where a program is stored on the storage medium, and when the program runs, the service processing method shown in fig. 2, fig. 3, and fig. 4 is implemented.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and elements referred to are not necessarily required in this application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, and may specifically be a processor in the computer device) to execute all or part of the steps of the above-mentioned method of the embodiments of the present application. The storage medium may include: a U disk, a removable hard disk, a magnetic disk, an optical disk, a read-only memory (ROM), a Random Access Memory (RAM), or other various media capable of storing program codes.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A service processing method, applied to a server, comprising:
receiving a first service request;
and caching the first service request when a first number is greater than or equal to a first threshold, wherein the first number is the number of service requests that the server received before the first service request and has not yet processed.
2. The method of claim 1, further comprising:
processing the first service request if the first number is less than the first threshold.
3. The method of claim 2, wherein, when the first service request comprises a plurality of service requests, processing the first service request comprises:
processing the first service request when a sum of the first number and a second number is less than or equal to the first threshold, wherein the second number is the number of service requests included in the first service request;
and selecting a third number of service requests from the plurality of service requests for processing when the sum of the first number and the second number is greater than the first threshold, wherein the third number is the difference between the first threshold and the first number.
4. The method of claim 3, wherein the selecting a third number of service requests from the plurality of service requests for processing comprises:
selecting a third number of service requests from the plurality of service requests for processing according to their priorities.
5. The method of claim 4, further comprising:
determining a priority of the plurality of service requests.
6. The method of claim 5, wherein the determining the priority of the plurality of service requests comprises:
determining the priorities of the plurality of service requests according to one or more of: the priorities of the service requests themselves, the priorities of the service types corresponding to the service requests, the priorities of the user types corresponding to the service requests, the sizes of the services corresponding to the service requests, and the processing time required by the service requests.
7. The method according to any of claims 2-6, wherein after processing the first service request, the method further comprises:
feeding back a processing result of the first service request.
8. A service processing apparatus, wherein the apparatus is disposed in a server, and comprises:
a receiving unit, configured to receive a first service request;
a caching unit, configured to cache the first service request when a first number is greater than or equal to a first threshold, wherein the first number is the number of service requests that the server received before the first service request and has not yet processed.
9. A service processing apparatus, comprising a processor and a memory coupled to the processor, wherein the memory is configured to store computer instructions, and the processor implements the method of any one of claims 1-7 by executing the computer instructions stored in the memory.
10. A computer-readable storage medium, in which a computer program or computer instructions are stored which, when executed by a computer device, cause the computer device to implement the method of any one of claims 1-7.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010483518.0A CN111666154A (en) 2020-06-01 2020-06-01 Service processing method, device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111666154A true CN111666154A (en) 2020-09-15

Family

ID=72385410

Country Status (1)

Country Link
CN (1) CN111666154A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112738199A (en) * 2020-12-25 2021-04-30 新东方教育科技集团有限公司 Scheduling method and scheduling system
CN112738199B (en) * 2020-12-25 2023-02-17 新东方教育科技集团有限公司 Scheduling method and scheduling system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200915