WO2024031931A1 - Priority queuing processing method, device, server and medium for batch request delivery - Google Patents

Priority queuing processing method, device, server and medium for batch request delivery

Info

Publication number
WO2024031931A1
WO2024031931A1 PCT/CN2023/072153 CN2023072153W
Authority
WO
WIPO (PCT)
Prior art keywords
issued
requests
thread pool
request
preset
Prior art date
Application number
PCT/CN2023/072153
Other languages
English (en)
French (fr)
Inventor
乔波波
董俊明
Original Assignee
苏州元脑智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州元脑智能科技有限公司
Publication of WO2024031931A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5018 Thread allocation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the field of servers, and in particular to a priority queuing processing method, device, server and medium for batch request delivery.
  • the purpose of this application is to provide a priority queuing processing method, device, server and medium for batch request delivery, which are used to execute each request according to a predetermined priority order and improve the system's task processing capabilities and user experience.
  • this application provides a priority queuing processing method for batch request issuance, including:
  • the weight value of each request to be issued includes the weight value of each attribute of the request to be issued and the weight value of each attribute value.
  • sorting the requests to be issued according to the preset weight values of the requests to be issued includes:
  • the weight value of each request to be issued is at least determined based on the usage frequency of each request to be issued; the weight value is positively correlated with the usage frequency.
  • the target thread pool is a ThreadPoolExecutor thread pool; the preset rules are determined based on at least the number of core threads of the ThreadPoolExecutor thread pool, the number of requests to be issued, the size of the cache queue of the ThreadPoolExecutor thread pool, and the maximum number of threads of the ThreadPoolExecutor thread pool.
  • delivering each request to be delivered to the target thread pool according to preset rules includes:
  • a first preset number of requests to be issued is issued to the core threads; where the first preset number is equal to the number of core threads;
  • the first difference is the number of the first remaining requests to be issued, i.e. the requests to be issued other than the first preset number of requests;
  • the request to be issued is delivered to the ThreadPoolExecutor thread pool according to the relationship between the first difference value and the number of cache queues.
  • sending the request to be issued to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the number of cache queues includes:
  • when the first difference is less than or equal to the size of the cache queue, the first remaining requests to be issued are delivered to the ThreadPoolExecutor thread pool in sequence.
  • in one or more embodiments, delivering the requests to be issued to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the size of the cache queue includes:
  • when the first difference is greater than the size of the cache queue and less than the number of requests that the ThreadPoolExecutor thread pool can still accommodate, then, following the order of the priority queue and starting from the request immediately after the first preset number of requests, the second preset number of requests to be issued is skipped, and the second remaining requests to be issued (those other than the first preset number and the second preset number of requests) are delivered to the cache queue;
  • the second preset number of requests to be issued is then delivered to the ThreadPoolExecutor thread pool, and new threads are started to execute them; the number of requests that the ThreadPoolExecutor thread pool can still accommodate is equal to the sum of the size of the cache queue and the difference between the maximum number of threads and the number of core threads; the second preset number is equal to the number of requests to be issued minus the number of core threads and the size of the cache queue.
  • in one or more embodiments, delivering the requests to be issued to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the size of the cache queue includes:
  • when the first difference is greater than or equal to the number of requests that the ThreadPoolExecutor thread pool can still accommodate, then, following the order of the priority queue and starting from the request immediately after the first preset number of requests, the third preset number of requests to be issued is skipped, and the third remaining requests to be issued (those other than the first preset number and the third preset number of requests) are delivered to the cache queue;
  • the third preset number of requests to be issued is then delivered to the ThreadPoolExecutor thread pool, and new threads are started to execute them; the third preset number is equal to the difference between the maximum number of threads and the number of core threads.
  • delivering each request to be delivered to the target thread pool according to preset rules includes:
  • after a preset time, the method returns to the step of sorting the requests to be issued according to the preset weight values of the requests to be issued.
  • after each request to be issued is delivered to the target thread pool according to the preset rules, the method further includes:
  • the target thread pool is the Eager ThreadPoolExecutor thread pool; delivering each request to be issued to the target thread pool according to preset rules includes:
  • each request to be issued is delivered to the Eager ThreadPoolExecutor thread pool in sequence.
  • this application also provides a priority queuing processing device for batch request delivery, including:
  • the acquisition module is used to obtain the requests to be issued and the target thread pool for each request to be issued;
  • the sorting module is used to sort the requests to be issued according to the preset weight values of the requests to be issued, so as to form a priority sorting queue;
  • the delivery module is used to deliver the requests to be issued to the target thread pool according to the preset rules, so that the target thread pool processes the requests to be issued according to the priority queue; the preset rules are determined based on the way the target thread pool handles requests.
  • this application also provides a server, including:
  • a memory, used to store a computer program;
  • a processor, used to implement the steps of the above-mentioned priority queuing processing method for batch request delivery when executing the computer program.
  • this application also provides a non-volatile readable storage medium.
  • a computer program is stored on the non-volatile readable storage medium; when the computer program is executed by a processor, the above-mentioned priority queuing processing method for batch request issuance is implemented.
  • the priority queuing processing method for batch request issuance includes: obtaining each request to be issued and the target thread pool for each request to be issued; sorting the requests to be issued according to their preset weight values to form a priority queue; and delivering the requests to be issued to the target thread pool according to preset rules, so that the target thread pool processes the requests to be issued according to the priority queue; the preset rules are determined based on the way the target thread pool handles requests.
  • in this way, each request is prioritized according to its weight value, and, depending on the type of thread pool used, different delivery logic is used to submit the requests to the thread pool so that they are executed efficiently in the established priority order. This ensures that, when requests are issued in batches, tasks are executed in the predetermined priority order and high-priority tasks are executed first, improving the system's task processing capability and the user experience.
  • this application also provides a priority queuing processing device, server and non-volatile readable storage medium for batch request issuance, which have the same or corresponding technical effects as the above-mentioned priority queuing processing method for batch request issuance.
  • Figure 1 is a flow chart of a priority queuing processing method for batch request issuance provided by an embodiment of the present application
  • Figure 2 is a structural diagram of a priority queuing processing device for batch request delivery provided by an embodiment of the present application
  • FIG. 3 is a structural diagram of a server provided by another embodiment of the present application.
  • Figure 4 is a flow chart of priority queuing processing for batch request issuance provided by the embodiment of this application.
  • the core of this application is to provide a priority queuing processing method, device, server and medium for batch request delivery, which are used to execute each request according to a predetermined priority order, improving the system's task processing capability and the user experience.
  • the requests are prioritized according to their importance and then executed sequentially in the predetermined priority order, ensuring that high-priority tasks are executed first and improving the system's task processing capability and the user experience.
  • Figure 1 is a flow chart of a priority queuing processing method for batch request delivery provided by an embodiment of the present application. As shown in Figure 1, the method includes:
  • S10: Obtain each request to be issued and the target thread pool for each request to be issued. The order in which request tasks are processed during batch requests affects the processing speed of the system and the user experience, so each request to be issued must first be obtained. Each request to be issued must be delivered to a thread before it can be processed. Using a thread pool reduces the number of threads created and destroyed, allows each worker thread to be reused, and allows the number of worker threads to be adjusted according to the system's capacity, preventing the server from crashing due to excessive memory consumption. Therefore, in some embodiments of this application, requests are sent to a thread pool for processing. There is no limit on the selected target thread pool; it is determined according to the actual situation. For example, the ThreadPoolExecutor thread pool can be selected to process each request.
  • S11: Sort the requests to be issued according to the preset weight values of the requests to be issued, so as to form a priority sorting queue.
  • each request is sorted according to its weight value. There is no limit to the preset weight value of each request to be issued, and it shall be determined according to the actual situation. Each request is sorted according to the weight value. A large weight value indicates that the request is of high importance and the request has a high priority; conversely, a small weight value indicates that the request is of low importance and has a low priority.
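As a minimal illustration of this sorting step (the `Request` type, the request names and the weight values below are assumptions for illustration, not taken from the application), the priority queue can be formed by sorting on the weight value in descending order:

```java
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of step S11: sort pending requests by their preset
// weight value, largest first, to form the priority queue. The Request
// type and the sample weights are assumptions, not from the application.
public class PrioritySort {
    static final class Request {
        final String name;
        final double weight; // preset weight; larger means higher priority
        Request(String name, double weight) { this.name = name; this.weight = weight; }
    }

    // Returns the request names ordered from highest to lowest weight.
    static List<String> priorityQueue(List<Request> pending) {
        return pending.stream()
                .sorted(Comparator.comparingDouble((Request r) -> r.weight).reversed())
                .map(r -> r.name)
                .toList();
    }

    public static void main(String[] args) {
        List<Request> pending = List.of(
                new Request("x", 3.4), new Request("y", 2.1), new Request("z", 4.0));
        System.out.println(priorityQueue(pending)); // [z, x, y]
    }
}
```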
  • S12: Deliver the requests to be issued to the target thread pool according to the preset rules, so that the target thread pool processes the requests to be issued according to the priority queue; the preset rules are determined based on the way the target thread pool processes requests.
  • in this way the execution of tasks is coordinated with the efficiency of the thread pool. Different types of thread pools execute tasks in different ways, so the logic by which the queue is delivered to the thread pool also differs. Therefore, in some embodiments, the preset rules are determined according to the way the target thread pool handles requests, and each request to be issued is delivered to the target thread pool according to those preset rules.
  • the priority queuing processing method for batch request delivery includes: obtaining each request to be issued and the target thread pool for each request to be issued; sorting the requests to be issued according to their weight values to form a priority queue; and delivering the requests to be issued to the target thread pool according to the preset rules, so that the target thread pool processes the requests to be issued according to the priority queue; the preset rules are determined based on the way the target thread pool handles requests.
  • in this way, each request is prioritized according to its weight value, and, depending on the type of thread pool used, different delivery logic is used to submit the requests to the thread pool so that they are executed efficiently in the established priority order. This ensures that, when requests are issued in batches, tasks are executed in the predetermined priority order and high-priority tasks are executed first, improving the system's task processing capability and the user experience.
  • the weight value of each request to be issued includes the weight value of each attribute of the request to be issued and the weight value of each attribute value.
  • request attributes include, for example: method, source, type, data volume, etc. Attribute values are added to each attribute; for example, the attribute values corresponding to the method attribute are POST, PUT, GET and DELETE; the attribute values corresponding to the source attribute are internal, Openstack, Dsm, etc.; the attribute values corresponding to the type attribute are upgrade class, business class, service class, etc.; and the attribute values corresponding to the data volume attribute are high, medium, low, etc.
  • the request weight value provided in some embodiments includes both the weight values of the attributes and corresponding weight values set for the attribute values. Compared with setting weight values only for the attributes, the weight values obtained in some embodiments determine the priority order of requests more accurately.
  • sorting the requests to be issued according to the preset weight values obtained in the above embodiment includes:
  • the weight value corresponding to each attribute value is obtained from the request configuration according to the request parameters.
  • the weight value of the attribute value is multiplied by the weight of the corresponding attribute and summed to obtain the comprehensive score value of the request.
  • the comprehensive score values of all requests are calculated in turn and the requests are sorted from the largest score to the smallest. For example, assume that the weight values of the method attribute, the source attribute, the type attribute and the data volume attribute are 10%, 40%, 20% and 30% respectively; the weight value of the source attribute, for instance, can then be found among the attribute weight values.
  • in some embodiments, the weight value of each request to be issued includes both the weight value of each attribute and the weight value of each attribute value, and each request is comprehensively scored according to these weights, so the priority order obtained is more accurate. Secondly, the requests are sorted from the largest score to the smallest: the request at the front is the more important one. Arranging the requests from high priority to low priority makes the importance of each request intuitively clear and facilitates subsequent priority processing of requests with high weight values.
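A hedged sketch of the composite scoring described above, using the example attribute weights of 10%, 40%, 20% and 30%; the attribute-value weights below are invented for illustration only:

```java
import java.util.Map;

// Illustrative sketch of the composite score: each attribute carries a weight
// (here 10% / 40% / 20% / 30% for method / source / type / data volume, as in
// the example above) and each attribute value carries its own weight. The
// attribute-value weights below are assumptions for illustration.
public class CompositeScore {
    // score = sum over attributes of (attribute weight * attribute-value weight)
    static double score(Map<String, Double> attrWeights, Map<String, Double> valueWeights) {
        double sum = 0.0;
        for (Map.Entry<String, Double> e : attrWeights.entrySet()) {
            sum += e.getValue() * valueWeights.getOrDefault(e.getKey(), 0.0);
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Double> attrWeights =
                Map.of("method", 0.10, "source", 0.40, "type", 0.20, "data", 0.30);
        // Weights of the concrete attribute values carried by one request:
        Map<String, Double> valueWeights =
                Map.of("method", 2.0, "source", 3.0, "type", 1.0, "data", 2.0);
        // 0.10*2 + 0.40*3 + 0.20*1 + 0.30*2 = 2.2
        System.out.printf("%.1f%n", score(attrWeights, valueWeights));
    }
}
```

Repeating this for every request yields the comprehensive score values that are then sorted from largest to smallest.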
  • the weight value represents the order in which requests are processed. To make the obtained request priorities more reasonable, in one or more embodiments the weight value of each request to be issued is at least determined based on the usage frequency of each request to be issued; the weight value is positively correlated with the usage frequency.
  • each enterprise can determine the weight value of a request based on the usage frequency of the issued request: the higher the usage frequency, the higher the weight value that is set; the lower the usage frequency, the lower the weight value.
  • the requests issued in batches are prioritized according to the predetermined weight configuration to form a priority queue sorted according to enterprise characteristics.
  • the target thread pool is the ThreadPoolExecutor thread pool; the preset rules are determined based on at least the number of core threads of the ThreadPoolExecutor thread pool, the number of requests to be issued, the size of the cache queue of the ThreadPoolExecutor thread pool, and the maximum number of threads of the ThreadPoolExecutor thread pool.
  • the processing flow of the ThreadPoolExecutor thread pool is as follows: determine whether all core threads in the thread pool are executing tasks; if not (a core thread is idle, or some core threads have not yet been created), create a new worker thread to execute the task; if all core threads are executing tasks, proceed to the next step. The thread pool then determines whether the work queue is full: if not, the newly submitted task is stored in the work queue; if it is full, proceed to the next step. Finally, determine whether all threads in the thread pool are working; if not, create a new worker thread to execute the task; if all threads are working, the task is handed over to the saturation policy. Using the ThreadPoolExecutor thread pool to process requests makes it possible to obtain the status of the threads in the pool in real time and to dynamically adjust the thread pool size.
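The admission flow described above can be seen directly on a standard Java ThreadPoolExecutor; the pool sizes in this sketch are arbitrary illustrative values:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Minimal demonstration of the admission flow described above: a standard
// ThreadPoolExecutor fills its core threads first, then its bounded work
// (cache) queue, and only then starts extra threads up to maximumPoolSize;
// beyond that, the saturation (rejection) policy handles new tasks. The
// pool sizes here are arbitrary illustrative values.
public class PoolFlow {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                            // corePoolSize
                5,                            // maximumPoolSize
                60, TimeUnit.SECONDS,         // keep-alive for non-core threads
                new ArrayBlockingQueue<>(5)); // bounded cache (work) queue
        // Tasks that can be held before the saturation policy is triggered:
        int capacity = pool.getQueue().remainingCapacity() + pool.getMaximumPoolSize();
        System.out.println(capacity); // 5 (queue) + 5 (max threads) = 10
        pool.shutdown();
    }
}
```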
  • delivering each request to be issued to the target thread pool according to preset rules includes:
  • a first preset number of requests to be issued is issued to the core threads; where the first preset number is equal to the number of core threads;
  • the first difference is the number of the first remaining requests to be issued, i.e. the requests to be issued other than the first preset number of requests;
  • the request to be issued is delivered to the ThreadPoolExecutor thread pool according to the relationship between the first difference value and the number of cache queues.
  • the first remaining requests to be issued are delivered to the ThreadPoolExecutor thread pool in sequence according to the order of the priority queue;
  • when the first difference is greater than the size of the cache queue and less than the number of requests that the ThreadPoolExecutor thread pool can still accommodate, then, following the order of the priority queue and starting from the request immediately after the first preset number of requests, the second preset number of requests to be issued is skipped, and the second remaining requests to be issued (those other than the first preset number and the second preset number of requests) are delivered to the cache queue; then, still following the order of the priority queue, the second preset number of requests to be issued is delivered to the ThreadPoolExecutor thread pool, and new threads are started to execute them;
  • the number of requests that the ThreadPoolExecutor thread pool can still accommodate is equal to the sum of the size of the cache queue and the difference between the maximum number of threads and the number of core threads; the second preset number is equal to the number of requests to be issued minus the number of core threads and the size of the cache queue;
  • when the first difference is greater than or equal to the number of requests that the ThreadPoolExecutor thread pool can still accommodate, then, following the order of the priority queue and starting from the request immediately after the first preset number of requests, the third preset number of requests to be issued is skipped, and the third remaining requests to be issued (those other than the first preset number and the third preset number of requests) are delivered to the cache queue; the third preset number is equal to the difference between the maximum number of threads and the number of core threads.
  • the thread pool used is the ThreadPoolExecutor thread pool. According to this pool's task-processing logic, when a request queue (denote its total number of requests as request) is issued to the thread pool, the number of core threads corePoolSize is determined first, and a number of tasks from the front of the queue equal to the number of core threads is delivered to the thread pool so that the core threads start executing them. Next, the number of remaining queued tasks (request - corePoolSize) is determined; if it is less than the size of the thread pool's cache queue queuelist, the remaining requests are delivered to the thread pool in sequence.
  • if the number of remaining requests (request - corePoolSize) is greater than the size of the thread pool's cache queue queuelist and less than the number of tasks the thread pool can still accommodate (queuelist + maximumPoolSize - corePoolSize), then (request - corePoolSize - queuelist) requests are skipped, in order, from the remaining request queue; the rest are delivered to the thread pool's cache queue in sequence, and the skipped tasks are then delivered to the thread pool, which starts new threads to execute them.
  • for example, suppose a ThreadPoolExecutor thread pool with 2 core threads, a cache queue of size 5 and a maximum of 5 threads is used to process 10 tasks. Tasks 1 and 2 are first delivered to the core threads; tasks 6 to 10 are then sent to the thread pool in sequence and placed in the cache queue; finally, tasks 3 to 5 are sent to the thread pool, which starts 3 new threads to execute them. In this way, the tasks are still executed in the predetermined order.
  • Some embodiments provide that when processing requests using the ThreadPoolExecutor thread pool, each request to be issued is delivered to the target thread pool according to preset rules, so that tasks in the thread pool can complete request execution according to the predetermined request priority order.
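The delivery order worked through in the example above can be sketched as follows; the method name and parameters are illustrative, not from the application:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the submission order derived above for a ThreadPoolExecutor with
// `core` core threads, a cache queue of size `queue`, and `max` total threads:
// submit the first `core` tasks, skip just enough tasks for the queued tasks
// to keep their order, fill the cache queue, then submit the skipped tasks
// last so they immediately start new threads. (If the requests exceed the
// pool's total capacity, the overflow is left to the saturation policy.)
public class DeliveryOrder {
    static List<Integer> submissionOrder(int n, int core, int queue, int max) {
        List<Integer> order = new ArrayList<>();
        for (int i = 1; i <= Math.min(core, n); i++) order.add(i); // to core threads
        int remaining = n - core;
        if (remaining <= queue) {           // everything else fits in the queue
            for (int i = core + 1; i <= n; i++) order.add(i);
            return order;
        }
        // Number of tasks to skip so that new threads run them ahead of the queue:
        int skip = Math.min(remaining - queue, max - core);
        for (int i = core + skip + 1; i <= n; i++) order.add(i);    // into the queue
        for (int i = core + 1; i <= core + skip; i++) order.add(i); // new threads
        return order;
    }

    public static void main(String[] args) {
        // 10 tasks, 2 core threads, queue of 5, max 5 threads (the example above):
        System.out.println(submissionOrder(10, 2, 5, 5));
        // [1, 2, 6, 7, 8, 9, 10, 3, 4, 5]
    }
}
```

Although the submission order interleaves, the execution order remains 1 through 10: the core threads run tasks 1 and 2, the new threads run 3 to 5, and the queue drains 6 to 10.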
  • delivering each request to be issued to the target thread pool according to preset rules includes:
  • after a preset time, the method returns to the step of sorting the requests to be issued according to the preset weight values of the requests to be issued.
  • there is no limit on the specific value of the preset time; it is determined according to the actual situation. For example, an appropriate preset time can be set based on the number of requests, the size of each request, and so on. Through the preset time, users can understand the status of request issuance and avoid long waits.
  • in addition to the ThreadPoolExecutor thread pool, an Eager ThreadPoolExecutor thread pool can also be used to process the issued requests.
  • with the Eager ThreadPoolExecutor thread pool, the request queue can simply be delivered to the thread pool in sequence, because this pool's logic starts threads first and tasks enter the waiting queue only after the maximum number of threads has been reached. When a choice is available, the Eager ThreadPoolExecutor thread pool is preferred, so that the tasks in the thread pool complete request execution in the predetermined priority order.
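A minimal sketch of this sequential delivery, assuming an eager-style pool such as Apache Dubbo's EagerThreadPoolExecutor (a plain fixed pool stands in for it here, since only the submission order is being illustrated; the request names are placeholders):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hedged sketch: with an "eager" pool (for example Apache Dubbo's
// EagerThreadPoolExecutor, which starts threads up to the maximum before
// queueing), no skipping logic is needed and the sorted queue can simply
// be submitted in priority order.
public class EagerDelivery {
    static List<String> submitInOrder(List<String> sortedRequests, ExecutorService pool) {
        List<String> submissionLog = new ArrayList<>();
        for (String req : sortedRequests) {
            submissionLog.add(req);                  // record the delivery order
            pool.submit(() -> { /* execute the request here */ });
        }
        return submissionLog;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        System.out.println(submitInOrder(List.of("x", "y", "z"), pool)); // [x, y, z]
        pool.shutdown();
    }
}
```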
  • the priority queuing processing method for batch request delivery is described in detail above.
  • This application also provides corresponding embodiments of the priority queuing processing device and server for batch request delivery. It should be noted that this application describes the embodiments of the device part from two perspectives, one is based on the perspective of functional modules, and the other is based on the perspective of hardware.
  • Figure 2 is a structural diagram of a priority queuing processing device for batch request delivery provided by an embodiment of the present application.
  • a priority queuing processing device for batch request delivery provided by an embodiment of the present application.
  • it includes:
  • the acquisition module 10 is used to obtain each request to be issued and the target thread pool for each request to be issued;
  • the sorting module 11 is used to sort the requests to be issued according to the preset weight values of the requests to be issued, so as to form a priority sorting queue;
  • the delivery module 12 is used to deliver each request to be issued to the target thread pool according to the preset rules, so that the target thread pool processes each request to be issued according to the priority queue; the preset rules are determined based on the way the target thread pool handles requests.
  • the priority queuing processing device for batch request issuance obtains, through the acquisition module, the requests to be issued and the target thread pool for each request to be issued; the sorting module sorts the requests to be issued according to their preset weight values to form a priority queue; and the delivery module delivers the requests to be issued to the target thread pool according to the preset rules, so that the target thread pool processes the requests according to the priority queue; the preset rules are determined based on the way the target thread pool handles requests.
  • in this way, each request is prioritized according to its weight value, and, depending on the type of thread pool used, different delivery logic is used to submit the requests to the thread pool so that they are executed efficiently in the established priority order. This ensures that, when requests are issued in batches, tasks are executed in the predetermined priority order and high-priority tasks are executed first, improving the system's task processing capability and the user experience.
  • FIG 3 is a structural diagram of a server provided by another embodiment of the present application.
  • the server includes:
  • a memory 20, used to store a computer program;
  • the processor 21 is configured to implement the steps of the priority queuing processing method for batch request issuance as mentioned in the above embodiment when executing a computer program.
  • Servers provided in some embodiments may include, but are not limited to, smartphones, tablets, laptops, or desktop computers.
  • the processor 21 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc.
  • the processor 21 can be implemented in at least one hardware form among a digital signal processor (DSP), a field-programmable gate array (FPGA) and a programmable logic array (PLA).
  • the processor 21 may also include a main processor and a co-processor.
  • the main processor, also called the CPU, is used to process data in the awake state; the co-processor is a low-power processor used to process data in the standby state.
  • the processor 21 may be integrated with a graphics processing unit (GPU), and the GPU is responsible for rendering and drawing content to be displayed on the display screen.
  • the processor 21 may also include an artificial intelligence (Artificial Intelligence, AI) processor, which is used to process computing operations related to machine learning.
  • Memory 20 may include one or more non-volatile readable storage media, which may be non-transitory.
  • the memory 20 may also include high-speed random access memory, and non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
  • the memory 20 is at least used to store the following computer program 201; after the computer program is loaded and executed by the processor 21, it can implement the relevant steps of the priority queuing processing method for batch request issuance disclosed in any of the foregoing embodiments.
  • the resources stored in the memory 20 may also include the operating system 202, data 203, etc., and the storage method may be short-term storage or permanent storage.
  • the operating system 202 may include Windows, Unix, Linux, etc.
  • the data 203 may include, but is not limited to, the data involved in the priority queuing processing method for batch request issuance mentioned above, etc.
  • the server may also include a display screen 22, an input and output interface 23, a communication interface 24, a power supply 25 and a communication bus 26.
  • the structure shown in FIG. 3 does not constitute a limitation on the server, which may include more or fewer components than shown in the figure.
  • the server provided by the embodiment of the present application includes a memory and a processor.
  • when the processor executes the program stored in the memory, it implements the priority queuing processing method for batch request issuance, with the same effects as above.
  • this application also provides a corresponding embodiment of a non-volatile readable storage medium.
  • a computer program is stored on the non-volatile readable storage medium; when the computer program is executed by a processor, the steps recorded in the above method embodiments are implemented.
  • when the methods in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and performs all or part of the steps of the methods of the various embodiments of this application.
  • the aforementioned storage media include various media that can store program code, such as USB flash drives, removable hard disks, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disks, and optical disks.
  • the non-volatile readable storage medium provided by this application implements the above-mentioned priority queuing processing method for batch request issuance, with the same effects as above.
  • Figure 4 is a flowchart of the priority queuing processing for batch request issuance provided by an embodiment of the present application. As shown in Figure 4, the method includes:
  • the interface attribute configuration includes method (a), source (b), type (c), and data (d).
  • method (a) includes POST (a1), PUT (a2), GET (a3), and DELETE (a4); source (b) includes internal (b1), Dsm (b2), and Openstack (b3); type (c) includes upgrade (c1), business (c2), and service (c3); data (d) includes large (d1), medium (d2), and small (d3) (the labels in brackets are the corresponding weight values). The weight value corresponding to each attribute value is obtained from the request configuration according to the request parameters; the weight of each attribute value is multiplied by the weight of the corresponding attribute, and the products are summed to obtain the comprehensive score value of the request.
  • the request includes request x, request y, request z, etc.;
  • the weights of the other requests are calculated in the same way and are not repeated here; Qx, Qy, Qz, ... are sorted from large to small to form a request priority queue, such as [x, y, z, ...];
  • the [x, y, z, ...] request queue, combined with the type of thread pool used, enters the thread pool in an orderly manner according to different rules, and the tasks are executed in order.
  • in this method, the request attribute priority weights are first configured on the interface; then, before batch requests are issued, the request weights are obtained from the request attributes to compute each request's comprehensive score value.
  • the requests are then sorted by priority according to the comprehensive score value, to work with the different thread pools.
  • matching the usage logic of each pool, tasks are delivered to the thread pool for execution according to different rules, ensuring that tasks execute in the preconfigured priority order. This allows the system to flexibly prioritize tasks according to the focus of the enterprise when issuing batch requests, and ensures that tasks are executed according to the predetermined priority. Executing tasks in the predetermined order also ensures that urgent, important, and large-volume tasks are processed first when the system encounters highly concurrent requests, so that such business executes first and quickly, increasing the flexibility, speed, and stability of system processing while improving the customer experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application discloses a priority queuing processing method, apparatus, server, and medium for batch request issuance, relating to the field of servers. The method includes: obtaining requests to be issued and a target thread pool to which the requests to be issued are issued; sorting the requests to be issued according to preset weight values of the requests to be issued; and issuing the requests to be issued to the target thread pool according to a preset rule, so that the target thread pool processes the requests in the order of the priority queue. In this method, the requests are prioritized according to their weight values and, working with thread pools, different issuance logic is adopted according to the task-execution characteristics of different thread pools to submit the requests, ensuring that requests are executed efficiently in the established priority order, that high-priority tasks are executed first, and that the task-processing capability of the system and the user experience are improved.

Description

Priority queuing processing method, apparatus, server and medium for batch request issuance
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application filed with the China Patent Office on August 11, 2022, with application number 202210958383.8 and entitled "Priority queuing processing method, apparatus, server and medium for batch request issuance", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of servers, and in particular to a priority queuing processing method, apparatus, server, and medium for batch request issuance.
BACKGROUND
Large-scale, highly concurrent application scenarios are becoming increasingly common. Although task-processing speed keeps improving, the sources of task requests are also surging: requests issued from user interfaces, large numbers of external calls, and the like keep increasing the number of requests. At the same time, many factors affect task-processing speed, such as the central processing unit (Central Processing Unit, CPU), the network, and memory. When these factors cannot be resolved immediately and highly concurrent batch requests arrive at the same time, some tasks will inevitably be left waiting even if a thread pool is used. Among batch-requested tasks, some are important and some ordinary, the amount of requested data varies, and requests come from both inside and outside the system. At present, waiting tasks and tasks to be processed first are ordered at random; if ordinary, non-urgent business happens to be processed first, the capability of the system and the user experience are severely affected.
SUMMARY
The purpose of the present application is to provide a priority queuing processing method, apparatus, server, and medium for batch request issuance, which execute the requests in a predetermined priority order and improve the task-processing capability of the system and the user experience.
To solve the above technical problem, the present application provides a priority queuing processing method for batch request issuance, including:
obtaining requests to be issued and a target thread pool to which the requests to be issued are issued;
sorting the requests to be issued according to preset weight values of the requests to be issued, so as to form a priority queue;
issuing the requests to be issued to the target thread pool according to a preset rule, so that the target thread pool processes the requests to be issued in the order of the priority queue, where the preset rule is determined according to the way the target thread pool processes requests.
In some embodiments, the weight value of each request to be issued includes the weight values of the attributes of the request and the weight values of the attribute values.
In some embodiments, sorting the requests to be issued according to their preset weight values includes:
determining a comprehensive score value for each request to be issued according to the preset weight values of the attributes and of the attribute values of the request;
arranging the requests to be issued in descending order of comprehensive score value.
In some embodiments, the weight value of each request to be issued is determined at least according to the usage frequency of the request, and the weight value is positively correlated with the usage frequency.
In some embodiments, the target thread pool is a ThreadPoolExecutor thread pool, and the preset rule is determined at least according to the number of core threads of the ThreadPoolExecutor thread pool, the number of requests to be issued, the capacity of the buffer queue of the ThreadPoolExecutor thread pool, and the maximum number of threads of the ThreadPoolExecutor thread pool.
In some embodiments, issuing the requests to be issued to the target thread pool according to the preset rule includes:
obtaining the number of core threads of the ThreadPoolExecutor thread pool;
starting from the first request to be issued in the priority queue, issuing a first preset number of requests to the core threads, where the first preset number equals the number of core threads;
processing the first preset number of requests through the core threads;
obtaining a first difference between the number of requests to be issued and the number of core threads, where the first difference is the number of first remaining requests, namely the requests to be issued other than the first preset number of requests;
issuing the requests to be issued to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the buffer-queue capacity.
In some embodiments, when the first difference is less than or equal to the buffer-queue capacity, issuing the requests to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the buffer-queue capacity includes:
issuing the first remaining requests to the ThreadPoolExecutor thread pool one by one in the order of the priority queue.
In some embodiments, when the first difference is greater than the buffer-queue capacity and less than the number of requests the ThreadPoolExecutor thread pool can still accommodate, issuing the requests to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the buffer-queue capacity includes:
in the order of the priority queue, starting from the request following the first preset number of requests, skipping a second preset number of requests, and issuing the second remaining requests, namely the requests other than the first and second preset numbers of requests, into the buffer queue;
in the order of the priority queue, issuing the second preset number of requests to the ThreadPoolExecutor thread pool and starting new threads to execute them, where the number of requests the ThreadPoolExecutor thread pool can still accommodate equals the buffer-queue capacity plus the difference between the maximum number of threads and the number of core threads, and the second preset number equals the number of requests to be issued minus the number of core threads and minus the buffer-queue capacity.
In some embodiments, when the first difference is greater than the buffer-queue capacity and greater than the number of requests the ThreadPoolExecutor thread pool can still accommodate, issuing the requests to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the buffer-queue capacity includes:
in the order of the priority queue, starting from the request following the first preset number of requests, skipping a third preset number of requests, and issuing the third remaining requests, namely the requests other than the first and third preset numbers of requests, into the buffer queue;
in the order of the priority queue, issuing the third preset number of requests to the ThreadPoolExecutor thread pool and starting new threads to execute them, where the third preset number equals the difference between the maximum number of threads and the number of core threads.
In some embodiments, issuing the requests to be issued to the target thread pool according to the preset rule includes:
from the moment issuance of the requests to the target thread pool begins, judging whether all of the requests have been issued to the target thread pool according to the preset rule within a preset time;
if not all of the requests have been issued to the target thread pool according to the preset rule within the preset time, returning to the step of sorting the requests according to their preset weight values.
In some embodiments, after issuing the requests to the target thread pool according to the preset rule, the method further includes:
outputting prompt information indicating that the requests have been successfully issued to the target thread pool.
In some embodiments, the target thread pool is an Eager ThreadPoolExecutor thread pool, and issuing the requests to be issued to the target thread pool according to the preset rule includes:
issuing the requests one by one to the Eager ThreadPoolExecutor thread pool in the order of the priority queue, starting from the first request in the priority queue.
To solve the above technical problem, the present application further provides a priority queuing processing apparatus for batch request issuance, including:
an obtaining module, configured to obtain requests to be issued and a target thread pool to which the requests are issued;
a sorting module, configured to sort the requests to be issued according to their preset weight values, so as to form a priority queue;
an issuing module, configured to issue the requests to the target thread pool according to a preset rule, so that the target thread pool processes the requests in the order of the priority queue, where the preset rule is determined according to the way the target thread pool processes requests.
To solve the above technical problem, the present application further provides a server, including:
a memory, configured to store a computer program;
a processor, configured to implement the steps of the above priority queuing processing method for batch request issuance when executing the computer program.
To solve the above technical problem, the present application further provides a non-volatile readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above priority queuing processing method for batch request issuance.
The priority queuing processing method for batch request issuance provided by the present application includes: obtaining requests to be issued and a target thread pool to which the requests are issued; sorting the requests according to their preset weight values so as to form a priority queue; and issuing the requests to the target thread pool according to a preset rule, so that the target thread pool processes them in the order of the priority queue, where the preset rule is determined according to the way the target thread pool processes requests. In this method, the requests are prioritized according to their weight values and, working with thread pools, different issuance logic is adopted according to the task-execution characteristics of different thread pools to submit the requests, so that the requests are executed efficiently in the established priority order. This fully guarantees that, when requests are issued in batches, tasks are executed in the predetermined priority order, that high-priority tasks are executed first, and that the task-processing capability of the system and the user experience are improved.
In addition, the present application further provides a priority queuing processing apparatus for batch request issuance, a server, and a non-volatile readable storage medium, which have technical features identical or corresponding to the above priority queuing processing method for batch request issuance and achieve the same effects as above.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the embodiments of the present application more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a priority queuing processing method for batch request issuance provided by an embodiment of the present application;
FIG. 2 is a structural diagram of a priority queuing processing apparatus for batch request issuance provided by an embodiment of the present application;
FIG. 3 is a structural diagram of a server provided by another embodiment of the present application;
FIG. 4 is a flowchart of the priority queuing processing for batch request issuance provided by an embodiment of the present application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The core of the present application is to provide a priority queuing processing method, apparatus, server, and medium for batch request issuance, which execute requests in a predetermined priority order and improve the task-processing capability of the system and the user experience.
Large-scale, highly concurrent application scenarios are becoming increasingly common. Although task-processing speed keeps improving, the sources of task requests are surging: requests issued from user interfaces, large numbers of external calls, and the like keep increasing the number of requests, while many factors, such as the CPU, the network, and memory, affect task-processing speed. When these factors cannot be resolved immediately and highly concurrent batch requests arrive at the same time, some tasks will be left waiting even if a thread pool is used. Therefore, in the present application, batch-requested tasks are queued by importance and then executed in the predetermined priority order, ensuring that high-priority tasks are executed first and improving the task-processing capability of the system and the user experience.
To help those skilled in the art better understand the solution of the present application, the application is further described in detail below with reference to the drawings and specific implementations. FIG. 1 is a flowchart of a priority queuing processing method for batch request issuance provided by an embodiment of the present application. As shown in FIG. 1, the method includes:
S10: obtain the requests to be issued and the target thread pool to which the requests are issued.
Some embodiments of the present application address the fact that, with batch requests, the order in which request tasks are processed affects the processing speed of the system and the user experience. To process the requests, the requests to be issued must first be obtained. A request can only be processed after it is issued to a thread. A thread pool reduces the number of thread creations and destructions, allows each worker thread to be reused, and allows the number of worker threads in the pool to be adjusted according to the capacity of the system, preventing server crashes caused by excessive memory consumption; therefore, in some embodiments of the present application, requests are issued to a thread pool for processing. The choice of target thread pool is not limited and is determined according to the actual situation; for example, a ThreadPoolExecutor thread pool may be selected to process the requests.
S11: sort the requests to be issued according to their preset weight values, so as to form a priority queue.
To sort batch requests by importance, in some embodiments the requests are sorted according to their weight values. The preset weight values of the requests are not limited and are determined according to the actual situation. The requests are sorted by weight value: a large weight value indicates that the request is important and has high priority; conversely, a small weight value indicates low importance and low priority.
S12: issue the requests to the target thread pool according to a preset rule, so that the target thread pool processes the requests in the order of the priority queue, where the preset rule is determined according to the way the target thread pool processes requests.
Tasks are executed with the efficiency of thread pools; different types of thread pools execute tasks differently, so the logic by which the queue is issued to the pool also differs. Hence, in some embodiments, the preset rule is determined according to the way the target thread pool processes requests, and the requests are issued to the target thread pool according to the preset rule.
The priority queuing processing method for batch request issuance provided in some embodiments includes: obtaining the requests to be issued and the target thread pool to which they are issued; sorting the requests according to their preset weight values so as to form a priority queue; and issuing the requests to the target thread pool according to a preset rule, so that the pool processes them in the order of the priority queue, where the preset rule is determined according to the way the target thread pool processes requests. In this method, the requests are prioritized according to their weight values and, working with thread pools, different issuance logic is adopted according to the task-execution characteristics of different thread pools, so that requests are executed efficiently in the established priority order. This fully guarantees that batch-issued tasks are executed in the predetermined priority order, that high-priority tasks are executed first, and that the task-processing capability of the system and the user experience are improved.
In implementation, to obtain the priority order of the requests more accurately, in one or more embodiments the weight value of each request to be issued includes the weight values of the attributes of the request and the weight values of the attribute values.
Request attributes include, for example, method, source, type, and data volume. Attribute values are added for each attribute: for the method attribute, the attribute values are POST, PUT, GET, and DELETE; for the source attribute, internal, Openstack, Dsm, and the like; for the type attribute, upgrade, business, service, and the like; for the data-volume attribute, high, medium, and low. Weight values are set for the attributes of a request and, in addition, for the attribute values.
The request weight values provided by some embodiments include both the attribute weight values and corresponding weight values for the attribute values; compared with setting weight values only for the attributes, the weight values obtained in these embodiments determine the priority order of the requests more accurately.
On the basis of the above embodiments, sorting the requests to be issued according to their preset weight values includes:
determining a comprehensive score value for each request to be issued according to the preset weight values of its attributes and attribute values;
arranging the requests to be issued in descending order of comprehensive score value.
During batch request issuance, the weight value corresponding to each attribute value is obtained from the request configuration according to the request parameters; the weight of each attribute value is multiplied by the weight of the corresponding attribute and the products are summed to obtain the comprehensive score value of the request. The comprehensive score values of all requests are computed in turn and sorted from large to small. Suppose the weight values of the method, source, type, and data-volume attributes are 10%, 40%, 20%, and 30%, respectively; the source attribute then has the largest attribute weight. Suppose further that, within the source attribute, the internal value has a weight of 20%, the openstack value 70%, and the Dsm value 10%; the comprehensive score contribution of a request from openstack is then 40% * 70% = 28%. Following this method, the weight value of each request to be issued can be computed and the requests arranged in descending order of comprehensive score value.
In the sorting method provided by some embodiments, the weight value of each request includes the weight values of its attributes and attribute values, and a comprehensive score is computed from these weight values; compared with a priority order determined from the attribute weight values alone, the priority order obtained in these embodiments is more accurate. Moreover, the requests are sorted from large to small, with the most important requests first, so the importance of each request can be seen intuitively, which makes it convenient to process requests with high weight values first.
The weight value represents the order in which requests are processed. To make the obtained priority order reasonable, in one or more embodiments the weight value of each request to be issued is determined at least according to the usage frequency of the request, and the weight value is positively correlated with the usage frequency.
Each enterprise can determine the weight values of requests according to the usage frequency of the issued requests: the higher the usage frequency, the higher the weight value; the lower the usage frequency, the lower the weight value. Batch-issued requests are prioritized according to the predetermined weight configuration, forming a priority queue ordered according to the characteristics of the enterprise.
In implementation, to obtain the states of the threads in the pool in real time and to be able to adjust the pool size dynamically, in one or more embodiments the target thread pool is a ThreadPoolExecutor thread pool, and the preset rule is determined at least according to the number of core threads of the ThreadPoolExecutor thread pool, the number of requests to be issued, the capacity of the pool's buffer queue, and the maximum number of threads of the pool.
When a task is submitted to a ThreadPoolExecutor thread pool, the pool processes it as follows. It first judges whether all core threads in the pool are executing tasks; if not (a core thread is idle, or some core threads have not yet been created), it creates a new worker thread to execute the task. If all core threads are executing tasks, it proceeds to the next step: the pool judges whether the work queue is full; if the work queue is not full, the newly submitted task is stored in the work queue. If the work queue is full, it proceeds to the next step: the pool judges whether all of its threads are working; if not, it creates a new worker thread to execute the task. If the pool is already full, the saturation policy handles the task. Using a ThreadPoolExecutor thread pool to process requests allows the states of the threads in the pool to be obtained in real time and the pool size to be adjusted dynamically.
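The decision flow just described is driven by the constructor parameters of `java.util.concurrent.ThreadPoolExecutor`. A minimal construction is sketched below; the concrete numbers (2 core threads, maximum 7 threads, a bounded queue of 5) are only example values matching the figures used elsewhere in this description, not a prescribed configuration.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSetup {
    // Builds a pool whose behavior follows the flow described above:
    // tasks fill the core threads first, then the bounded buffer queue;
    // only when the queue is full are extra threads started, up to the
    // maximum; beyond that the saturation policy rejects the task.
    public static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                      // corePoolSize
                7,                                      // maximumPoolSize
                60L, TimeUnit.SECONDS,                  // idle timeout for non-core threads
                new ArrayBlockingQueue<>(5),            // bounded buffer queue (capacity 5)
                new ThreadPoolExecutor.AbortPolicy());  // saturation policy
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newPool();
        System.out.println(pool.getCorePoolSize() + "/" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```

Methods such as `getActiveCount()`, `getQueue().size()`, and `setCorePoolSize(...)` are what make it possible to observe thread states in real time and resize the pool dynamically, as noted above.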
In implementation, different thread pools have different internal task-processing logic, so different request-issuance logic is needed for different pools. When a ThreadPoolExecutor thread pool is used to process requests, in one or more embodiments issuing the requests to the target thread pool according to the preset rule includes:
obtaining the number of core threads of the ThreadPoolExecutor thread pool;
starting from the first request in the priority queue, issuing a first preset number of requests to the core threads, where the first preset number equals the number of core threads;
processing the first preset number of requests through the core threads;
obtaining a first difference between the number of requests to be issued and the number of core threads, where the first difference is the number of first remaining requests, namely the requests other than the first preset number of requests;
issuing the requests to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the buffer-queue capacity.
Specifically, when the first difference is less than or equal to the buffer-queue capacity, the first remaining requests are issued to the ThreadPoolExecutor thread pool one by one in the order of the priority queue.
When the first difference is greater than the buffer-queue capacity and less than the number of requests the pool can still accommodate: in the order of the priority queue, starting from the request following the first preset number of requests, a second preset number of requests is skipped, and the second remaining requests, namely the requests other than the first and second preset numbers of requests, are issued into the buffer queue; then, in the order of the priority queue, the second preset number of requests is issued to the pool and new threads are started to execute them. The number of requests the pool can still accommodate equals the buffer-queue capacity plus the difference between the maximum number of threads and the number of core threads, and the second preset number equals the number of requests to be issued minus the number of core threads and minus the buffer-queue capacity.
When the first difference is greater than the number of requests the pool can still accommodate: in the order of the priority queue, starting from the request following the first preset number of requests, a third preset number of requests is skipped, and the third remaining requests, namely the requests other than the first and third preset numbers of requests, are issued into the buffer queue; then, in the order of the priority queue, the third preset number of requests is issued to the pool and new threads are started to execute them, where the third preset number equals the difference between the maximum number of threads and the number of core threads.
The thread pool used is a ThreadPoolExecutor thread pool. According to the task-processing logic of this pool, when the request queue (with quest requests in total) is issued to the pool, the pool's core-thread count corePoolSize is obtained first, and a number of tasks equal to corePoolSize is issued to the pool in queue order, so the core threads start executing them. Next, the number of remaining tasks (quest - corePoolSize) is checked: if it is not larger than the pool's buffer-queue capacity queuelist, the remaining requests are simply issued to the pool one by one. If (quest - corePoolSize) is larger than queuelist but not larger than the number of tasks the pool can still accommodate (queuelist + maximumPoolSize - corePoolSize), then, in order, (quest - corePoolSize - queuelist) requests of the remaining queue are skipped, the rest are issued into the buffer queue, and the skipped tasks are then issued to the pool so that it starts new threads to execute them. If (quest - corePoolSize) is larger than (queuelist + maximumPoolSize - corePoolSize), then (maximumPoolSize - corePoolSize) requests are skipped, queuelist requests are issued into the buffer queue, the skipped (maximumPoolSize - corePoolSize) requests are issued to the pool so that new threads are started, and the rest of the queue is held back for later issuance. This guarantees that the pool executes tasks in the predetermined priority order. The following example illustrates this.
(1) Priority queue of requests: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
(2) suppose a ThreadPoolExecutor thread pool with 2 core threads is used.
1) Requests 1 and 2 are issued to the pool first, starting 2 core threads to execute them;
2) the remaining task queue is issued according to the pool's attributes, case by case:
① with 8 tasks remaining, if the buffer-queue capacity is 10 > 8: requests 3 to 10 are issued to the pool and placed in the buffer queue to be executed in order; the overall execution order still follows the predetermined order.
② with 8 tasks remaining, if the buffer-queue capacity is 5 and the maximum thread count is 7, then the buffer-queue capacity 5 < 8 < 5 + 7 - 2 = 10, the number of tasks the pool can still accommodate: 10 - 2 - 5 = 3 tasks are skipped, tasks 6 to 10 are issued to the pool and placed in the buffer queue, and finally tasks 3 to 5 are issued to the pool, starting 3 new threads to execute them; the tasks still execute in the predetermined order.
③ with 8 tasks remaining, if the buffer-queue capacity is 5 and the maximum thread count is 4, the pool can still accommodate 5 + 4 - 2 = 7 < 8 tasks: 4 - 2 = 2 tasks are skipped, tasks 5 to 9 are issued to the pool and placed in the buffer queue, and tasks 3 and 4 are then issued to the pool, starting new threads to execute them. The other tasks simply wait to be issued to the pool; the tasks still execute in the predetermined order.
When a ThreadPoolExecutor thread pool is used to process requests, issuing the requests to the target thread pool according to the preset rule, as provided by some embodiments, enables the tasks in the pool to complete execution in the predetermined request priority order.
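Because queued tasks run only after the newly started (non-core) threads take the skipped tasks, the submission order that preserves execution priority can be derived from the pool parameters alone. The sketch below computes that order without real threading; it is our reading of the rules above, task IDs 1..n are assumed to be already sorted by priority, and the class name `DispatchOrder` is an assumption, not the patent's.

```java
import java.util.ArrayList;
import java.util.List;

public class DispatchOrder {
    // Returns the order in which tasks 1..n (already priority-sorted) should
    // be submitted to a ThreadPoolExecutor(core, max, queue capacity queueCap)
    // so that execution order follows priority order.
    static List<Integer> submissionOrder(int n, int core, int max, int queueCap) {
        List<Integer> order = new ArrayList<>();
        int first = Math.min(core, n);
        for (int i = 1; i <= first; i++) order.add(i);        // fill core threads
        int remaining = n - first;
        if (remaining <= 0) return order;
        if (remaining <= queueCap) {
            for (int i = first + 1; i <= n; i++) order.add(i); // all fit in the queue
        } else if (remaining <= queueCap + max - core) {
            int skip = remaining - queueCap;                   // will run on new threads
            for (int i = first + skip + 1; i <= n; i++) order.add(i);            // enqueue rest
            for (int i = first + 1; i <= first + skip; i++) order.add(i);        // start new threads
        } else {
            int skip = max - core;
            int end = first + skip + queueCap;
            for (int i = first + skip + 1; i <= end; i++) order.add(i);          // fill the queue
            for (int i = first + 1; i <= first + skip; i++) order.add(i);        // start new threads
            // tasks beyond 'end' are held back and issued later
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println(submissionOrder(10, 2, 7, 10)); // case 1: plain priority order
        System.out.println(submissionOrder(10, 2, 7, 5));  // case 2: [1,2,6,7,8,9,10,3,4,5]
        System.out.println(submissionOrder(10, 2, 4, 5));  // case 3: [1,2,5,6,7,8,9,3,4], 10 held back
    }
}
```

Running this against the three cases of the example reproduces the patent's submission sequences, for instance skipping tasks 3 to 5 in case ② so that they start new threads while 6 to 10 wait in the buffer queue.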
In implementation, to let users know the status of request issuance and to avoid long waits, in one or more embodiments issuing the requests to the target thread pool according to the preset rule includes:
from the moment issuance of the requests to the target thread pool begins, judging whether all of the requests have been issued to the target thread pool according to the preset rule within a preset time;
if not all of the requests have been issued to the target thread pool according to the preset rule within the preset time, returning to the step of sorting the requests according to their preset weight values.
The specific value of the preset time is not limited and is determined according to the actual situation; for example, a suitable preset time can be set according to the number of requests, the size of each request, and so on. Through the preset time, the user can learn the status of request issuance and avoid long waits.
In implementation, to let users intuitively see the issuance status of the requests, in one or more embodiments, after issuing the requests to the target thread pool according to the preset rule, the method further includes:
outputting prompt information indicating that the requests have been successfully issued to the target thread pool.
The manner of outputting the prompt information and its content are not limited and are determined according to the actual situation. By outputting prompt information indicating that the requests have been successfully issued to the target thread pool, the user can intuitively see the issuance status of the requests.
The above describes processing issued requests with a ThreadPoolExecutor thread pool. In practice, when the selected target thread pool is an Eager ThreadPoolExecutor thread pool, the request queue is simply issued to the pool in order, according to that pool's logic (threads are started first until the maximum thread count is reached, and only then do tasks enter the waiting queue). If a choice is possible, the Eager ThreadPoolExecutor thread pool is preferred, as it allows the tasks in the pool to complete execution in the predetermined request priority order.
The above embodiments describe the priority queuing processing method for batch request issuance in detail; the present application also provides corresponding embodiments of a priority queuing processing apparatus for batch request issuance and of a server. Note that the apparatus embodiments are described from two angles: one based on functional modules and one based on hardware.
FIG. 2 is a structural diagram of a priority queuing processing apparatus for batch request issuance provided by an embodiment of the present application. From the angle of functional modules, some embodiments include:
an obtaining module 10, configured to obtain requests to be issued and a target thread pool to which the requests are issued;
a sorting module 11, configured to sort the requests according to their preset weight values, so as to form a priority queue;
an issuing module 12, configured to issue the requests to the target thread pool according to a preset rule, so that the target thread pool processes the requests in the order of the priority queue, where the preset rule is determined according to the way the target thread pool processes requests.
Since the apparatus embodiments correspond to the method embodiments, reference can be made to the description of the method embodiments, which is not repeated here.
In the priority queuing processing apparatus for batch request issuance provided by some embodiments, the obtaining module obtains the requests to be issued and the target thread pool to which they are issued; the sorting module sorts the requests according to their preset weight values so as to form a priority queue; and the issuing module issues the requests to the target thread pool according to a preset rule, so that the pool processes them in the order of the priority queue, where the preset rule is determined according to the way the target thread pool processes requests. In this apparatus, the requests are prioritized according to their weight values and, working with thread pools, different issuance logic is adopted according to the task-execution characteristics of different thread pools, so that requests are executed efficiently in the established priority order. This fully guarantees that batch-issued tasks are executed in the predetermined priority order, that high-priority tasks are executed first, and that the task-processing capability of the system and the user experience are improved.
FIG. 3 is a structural diagram of a server provided by another embodiment of the present application. From the hardware angle, as shown in FIG. 3, in some embodiments the server includes:
a memory 20, configured to store a computer program;
a processor 21, configured to implement the steps of the priority queuing processing method for batch request issuance mentioned in the above embodiments when executing the computer program.
The server provided by some embodiments may include, but is not limited to, a smartphone, a tablet computer, a laptop computer, or a desktop computer.
The processor 21 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 21 may be implemented in at least one hardware form among a digital signal processor (Digital Signal Processor, DSP), a field-programmable gate array (Field-Programmable Gate Array, FPGA), and a programmable logic array (Programmable Logic Array, PLA). The processor 21 may also include a main processor and a coprocessor: the main processor, also called the CPU, processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 21 may integrate a graphics processing unit (Graphics Processing Unit, GPU), which is responsible for rendering and drawing the content to be shown on the display screen. In some embodiments, the processor 21 may also include an artificial intelligence (Artificial Intelligence, AI) processor for handling computing operations related to machine learning.
The memory 20 may include one or more non-volatile readable storage media, which may be non-transitory. The memory 20 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic-disk storage devices or flash storage devices. In some embodiments, the memory 20 is at least used to store the following computer program 201, which, after being loaded and executed by the processor 21, implements the relevant steps of the priority queuing processing method for batch request issuance disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 20 may also include an operating system 202 and data 203, stored transiently or permanently. The operating system 202 may include Windows, Unix, Linux, and the like. The data 203 may include, but is not limited to, the data involved in the above priority queuing processing method for batch request issuance.
In some embodiments, the server may also include a display screen 22, an input/output interface 23, a communication interface 24, a power supply 25, and a communication bus 26.
Those skilled in the art will understand that the structure shown in FIG. 3 does not constitute a limitation on the server, which may include more or fewer components than shown.
The server provided by the embodiments of the present application includes a memory and a processor; when the processor executes the program stored in the memory, it can implement the priority queuing processing method for batch request issuance, with the same effects as above.
Finally, the present application also provides a corresponding embodiment of a non-volatile readable storage medium. A computer program is stored on the non-volatile readable storage medium; when executed by a processor, the computer program implements the steps recorded in the above method embodiments.
It can be understood that, if the methods in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and executes all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage media include various media that can store program code, such as USB flash drives, removable hard disks, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disks, and optical disks.
The non-volatile readable storage medium provided by the present application implements the above priority queuing processing method for batch request issuance, with the same effects as above.
To help those skilled in the art better understand the technical solution of the present application, the application is further described in detail below with reference to FIG. 4, which is a flowchart of the priority queuing processing for batch request issuance provided by an embodiment of the present application. As shown in FIG. 4, the method includes:
A request-weight configuration interface is added to the server. The interface attribute configuration includes method (a), source (b), type (c), and data (d). Method (a) includes POST (a1), PUT (a2), GET (a3), and DELETE (a4); source (b) includes internal (b1), Dsm (b2), and Openstack (b3); type (c) includes upgrade (c1), business (c2), and service (c3); data (d) includes large (d1), medium (d2), and small (d3) (the labels in brackets denote the corresponding weight values). The weight value corresponding to each attribute value is obtained from the request configuration according to the request parameters, and the weight of each attribute value is multiplied by the weight of the corresponding attribute and summed to obtain the comprehensive score value of the request. For example, the requests include request x, request y, request z, and so on:
weight of request x: Qx = (ax*a + bx*b + cx*c + dx*d) / 4;
weight of request y: Qy = (ay*a + by*b + cy*c + dy*d) / 4;
weight of request z: Qz = (az*a + bz*b + cz*c + dz*d) / 4;
the weights of the other requests are computed similarly and are not repeated here. Qx, Qy, Qz, ... are sorted from large to small to form a request priority queue, such as [x, y, z, ...]; combined with the type of thread pool used, the [x, y, z, ...] request queue enters the thread pool in an orderly manner according to different rules, and the tasks are executed in order.
In this method, the request attribute priority weights are first configured on the interface; then, before batch requests are issued, the request weights are obtained from the request attributes to compute each request's comprehensive score value, and the requests are sorted by priority according to the comprehensive score value. Matching the usage logic of the different thread pools, tasks are issued to the thread pool for execution according to different rules, guaranteeing that tasks execute in the preconfigured priority order. In this way, when issuing batch requests, the system can flexibly prioritize tasks according to the focus of the enterprise and ensure that tasks are executed according to the predetermined priority. Moreover, executing tasks in the predetermined order ensures that, under highly concurrent requests, urgent, important, and large-volume tasks are processed first, so that such business executes first and quickly, increasing the flexibility, speed, and stability of system processing and improving the customer experience.
The priority queuing processing method, apparatus, server, and medium for batch request issuance provided by the present application have been introduced in detail above. The embodiments in this specification are described progressively; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments can be referred to one another. Since the disclosed apparatus corresponds to the disclosed method, its description is relatively brief; for relevant details, refer to the description of the method. It should be noted that those of ordinary skill in the art can make several improvements and modifications to the present application without departing from its principles, and these improvements and modifications also fall within the protection scope of the claims of the present application.
It should also be noted that, in this specification, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.

Claims (22)

  1. A priority queuing processing method for batch request issuance, characterized by including:
    obtaining requests to be issued and a target thread pool to which the requests to be issued are issued;
    sorting the requests to be issued according to preset weight values of the requests to be issued, so as to form a priority queue;
    issuing the requests to be issued to the target thread pool according to a preset rule, so that the target thread pool processes the requests to be issued in the order of the priority queue, wherein the preset rule is determined according to the way the target thread pool processes requests.
  2. The priority queuing processing method for batch request issuance according to claim 1, characterized in that the weight value of each request to be issued includes weight values of attributes of the request to be issued and weight values of attribute values.
  3. The priority queuing processing method for batch request issuance according to claim 2, characterized in that sorting the requests to be issued according to their preset weight values includes:
    determining a comprehensive score value for each request to be issued according to the preset weight values of the attributes and of the attribute values of the request;
    arranging the requests to be issued in descending order of comprehensive score value.
  4. The priority queuing processing method for batch request issuance according to claim 3, characterized in that the weight value of each request to be issued is determined at least according to a usage frequency of the request to be issued, and the weight value is positively correlated with the usage frequency.
  5. The priority queuing processing method for batch request issuance according to any one of claims 2 to 4, characterized in that the attributes include at least one of: method, source, type, and data volume.
  6. The method according to claim 5, characterized in that, if the attribute is method, the attribute values corresponding to the attribute include at least one of POST, PUT, GET, and DELETE.
  7. The method according to claim 5, characterized in that, if the attribute is source, the attribute values corresponding to the attribute include at least one of internal, Openstack, and Dsm.
  8. The method according to claim 5, characterized in that, if the attribute is type, the attribute values corresponding to the attribute include at least one of upgrade, business, and service.
  9. The method according to claim 5, characterized in that, if the attribute is data volume, the attribute values corresponding to the attribute include at least one of high, medium, and low.
  10. The priority queuing processing method for batch request issuance according to any one of claims 1 to 4, characterized in that the target thread pool is a ThreadPoolExecutor thread pool, and the preset rule is determined at least according to the number of core threads of the ThreadPoolExecutor thread pool, the number of the requests to be issued, the capacity of the buffer queue of the ThreadPoolExecutor thread pool, and the maximum number of threads of the ThreadPoolExecutor thread pool.
  11. The priority queuing processing method for batch request issuance according to claim 10, characterized in that issuing the requests to be issued to the target thread pool according to the preset rule includes:
    obtaining the number of core threads of the ThreadPoolExecutor thread pool;
    starting from the first request to be issued in the priority queue, issuing a first preset number of the requests to be issued to the core threads, wherein the first preset number equals the number of core threads;
    processing the first preset number of the requests to be issued through the core threads;
    obtaining a first difference between the number of the requests to be issued and the number of core threads, wherein the first difference is the number of first remaining requests to be issued, namely the requests to be issued other than the first preset number of the requests to be issued;
    issuing the requests to be issued to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the buffer-queue capacity.
  12. The priority queuing processing method for batch request issuance according to claim 11, characterized in that, when the first difference is less than or equal to the buffer-queue capacity, issuing the requests to be issued to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the buffer-queue capacity includes:
    issuing the first remaining requests to be issued to the ThreadPoolExecutor thread pool one by one in the order of the priority queue.
  13. The priority queuing processing method for batch request issuance according to claim 11, characterized in that, when the first difference is greater than the buffer-queue capacity and less than the number of requests the ThreadPoolExecutor thread pool can still accommodate, issuing the requests to be issued to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the buffer-queue capacity includes:
    in the order of the priority queue, starting from the request following the first preset number of the requests to be issued, skipping a second preset number of the requests to be issued, and issuing the second remaining requests to be issued, namely the requests other than the first preset number and the second preset number of the requests to be issued, into the buffer queue;
    in the order of the priority queue, issuing the second preset number of the requests to be issued to the ThreadPoolExecutor thread pool and starting new threads to execute the second preset number of the requests to be issued; wherein the number of requests the ThreadPoolExecutor thread pool can still accommodate equals the buffer-queue capacity plus the difference between the maximum number of threads and the number of core threads, and the second preset number equals the number of the requests to be issued minus the number of core threads and minus the buffer-queue capacity.
  14. The priority queuing processing method for batch request issuance according to claim 11, characterized in that, when the first difference is greater than the buffer-queue capacity and greater than the number of requests the ThreadPoolExecutor thread pool can still accommodate, issuing the requests to be issued to the ThreadPoolExecutor thread pool according to the relationship between the first difference and the buffer-queue capacity includes:
    in the order of the priority queue, starting from the request following the first preset number of the requests to be issued, skipping a third preset number of the requests to be issued, and issuing the third remaining requests to be issued, namely the requests other than the first preset number and the third preset number of the requests to be issued, into the buffer queue;
    in the order of the priority queue, issuing the third preset number of the requests to be issued to the ThreadPoolExecutor thread pool and starting new threads to execute the third preset number of the requests to be issued, wherein the third preset number equals the difference between the maximum number of threads and the number of core threads.
  15. The priority queuing processing method for batch request issuance according to claim 11, characterized in that issuing the requests to be issued to the target thread pool according to the preset rule includes:
    from the moment issuance of the requests to be issued to the target thread pool begins, judging whether all of the requests to be issued have been issued to the target thread pool according to the preset rule within a preset time;
    if not all of the requests to be issued have been issued to the target thread pool according to the preset rule within the preset time, returning to the step of sorting the requests to be issued according to their preset weight values.
  16. The priority queuing processing method for batch request issuance according to claim 15, characterized in that, after issuing the requests to be issued to the target thread pool according to the preset rule, the method further includes:
    outputting prompt information indicating that the requests to be issued have been successfully issued to the target thread pool.
  17. The priority queuing processing method for batch request issuance according to any one of claims 1 to 4, characterized in that the target thread pool is an Eager ThreadPoolExecutor thread pool, and issuing the requests to be issued to the target thread pool according to the preset rule includes:
    issuing the requests to be issued one by one to the Eager ThreadPoolExecutor thread pool in the order of the priority queue, starting from the first request to be issued in the priority queue.
  18. The priority queuing processing method for batch request issuance according to claim 1, characterized in that the method further includes:
    adjusting the number of worker threads in the target thread pool according to the capacity of the system.
  19. The priority queuing processing method for batch request issuance according to claim 1, characterized in that, if the target thread pool is an Eager ThreadPoolExecutor thread pool, the method further includes:
    issuing the requests to be issued to the target thread pool according to the priority queue, the target thread pool processing the requests to be issued in an order in which threads are started first until the maximum thread count is reached and tasks then enter the waiting queue.
  20. A priority queuing processing apparatus for batch request issuance, characterized by including:
    an obtaining module, configured to obtain requests to be issued and a target thread pool to which the requests to be issued are issued;
    a sorting module, configured to sort the requests to be issued according to preset weight values of the requests to be issued, so as to form a priority queue;
    an issuing module, configured to issue the requests to be issued to the target thread pool according to a preset rule, so that the target thread pool processes the requests to be issued in the order of the priority queue, wherein the preset rule is determined according to the way the target thread pool processes requests.
  21. A server, characterized by including:
    a memory, configured to store a computer program;
    a processor, configured to implement the steps of the priority queuing processing method for batch request issuance according to any one of claims 1 to 19 when executing the computer program.
  22. A non-volatile readable storage medium, characterized in that a computer program is stored on the non-volatile readable storage medium, and when the computer program is executed by a processor, the steps of the priority queuing processing method for batch request issuance according to any one of claims 1 to 19 are implemented.
PCT/CN2023/072153 2022-08-11 2023-01-13 Priority queuing processing method, apparatus, server and medium for batch request issuance WO2024031931A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210958383.8 2022-08-11
CN202210958383.8A CN115033393B (zh) 2022-08-11 2022-08-11 Priority queuing processing method, apparatus, server and medium for batch request issuance

Publications (1)

Publication Number Publication Date
WO2024031931A1 true WO2024031931A1 (zh) 2024-02-15

Family

ID=83130186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/072153 WO2024031931A1 (zh) 2022-08-11 2023-01-13 Priority queuing processing method, apparatus, server and medium for batch request issuance

Country Status (2)

Country Link
CN (1) CN115033393B (zh)
WO (1) WO2024031931A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115033393B (zh) * 2022-08-11 2023-01-17 苏州浪潮智能科技有限公司 批量请求下发的优先排队处理方法、装置、服务器及介质
CN116048819B (zh) * 2023-03-30 2024-05-31 鸿盈科技实业(深圳)有限公司 一种高并发数据存储方法和系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070263650A1 (en) * 2006-05-09 2007-11-15 Srivatsa Sivan Subramania Method for prioritizing web service requests
CN105791254A (zh) * 2014-12-26 2016-07-20 阿里巴巴集团控股有限公司 网络请求处理方法、装置及终端
CN110287013A (zh) * 2019-06-26 2019-09-27 四川长虹电器股份有限公司 基于java多线程技术解决物联云端服务雪崩效应的方法
CN112905326A (zh) * 2021-02-18 2021-06-04 上海哔哩哔哩科技有限公司 任务处理方法及装置
CN113238861A (zh) * 2021-05-08 2021-08-10 北京天空卫士网络安全技术有限公司 一种任务执行方法和装置
CN113391910A (zh) * 2021-06-29 2021-09-14 未鲲(上海)科技服务有限公司 任务处理方法、装置、计算机设备及存储介质
CN115033393A (zh) * 2022-08-11 2022-09-09 苏州浪潮智能科技有限公司 批量请求下发的优先排队处理方法、装置、服务器及介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843592A (zh) * 2015-01-12 2016-08-10 芋头科技(杭州)有限公司 一种在预设嵌入式系统中实现脚本操作的系统
CN110569123B (zh) * 2019-07-31 2022-08-02 苏宁云计算有限公司 线程分配方法、装置、计算机设备和存储介质
CN111930486B (zh) * 2020-07-30 2023-11-17 中国工商银行股份有限公司 任务选取数据处理方法、装置、设备及存储介质
CN113157410A (zh) * 2021-03-30 2021-07-23 北京大米科技有限公司 线程池调节方法、装置、存储介质及电子设备
CN113641517B (zh) * 2021-08-10 2023-08-29 平安科技(深圳)有限公司 业务数据的发送方法、装置、计算机设备和存储介质
CN114201284A (zh) * 2021-12-14 2022-03-18 建信金融科技有限责任公司 定时任务管理方法及系统


Also Published As

Publication number Publication date
CN115033393A (zh) 2022-09-09
CN115033393B (zh) 2023-01-17

Similar Documents

Publication Publication Date Title
WO2024031931A1 (zh) 批量请求下发的优先排队处理方法、装置、服务器及介质
US10223165B2 (en) Scheduling homogeneous and heterogeneous workloads with runtime elasticity in a parallel processing environment
US10908954B2 (en) Quality of service classes
CN113535367B (zh) 任务调度方法及相关装置
EP4113299A2 (en) Task processing method and device, and electronic device
US10884801B2 (en) Server resource orchestration based on application priority
CN109144700A (zh) 超时时长的确定方法、装置、服务器和数据处理方法
US11487555B2 (en) Running PBS jobs in kubernetes
CN113703951B (zh) 一种处理dma的方法、装置、及计算机可读存储介质
CN109840149B (zh) 任务调度方法、装置、设备及存储介质
CN104598311A (zh) 一种面向Hadoop的实时作业公平调度的方法和装置
CN111597044A (zh) 任务调度方法、装置、存储介质及电子设备
CN112269719B (zh) 基于ai训练平台的文件操作队列控制方法、装置及介质
WO2023165485A1 (zh) 调度方法及计算机系统
WO2024000859A1 (zh) 一种作业调度方法、作业调度装置、作业调度系统及存储介质
WO2023151498A1 (zh) 一种消息执行处理方法、装置、电子设备和存储介质
CN110851245A (zh) 一种分布式异步任务调度方法及电子设备
US9940207B2 (en) Failing back block objects in batch
CN115439250A (zh) 一种交易请求的处理方法及装置、存储介质、电子装置
KR20150089665A (ko) 워크플로우 작업 스케줄링 장치
CN107911484A (zh) 一种消息处理的方法及装置
CN107329819A (zh) 一种作业管理方法及装置
CN112925640A (zh) 一种集群训练节点分配方法、电子设备
CN111708799A (zh) Spark任务处理方法、装置、电子设备及存储介质
CN109800073B (zh) 实时进程的调度方法、装置、终端及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23851167

Country of ref document: EP

Kind code of ref document: A1