CN113806065A - Data processing method, device and storage medium - Google Patents


Info

Publication number
CN113806065A
CN113806065A (application number CN202110087767.2A)
Authority
CN
China
Prior art keywords
task
processed
thread
thread pool
task set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110087767.2A
Other languages
Chinese (zh)
Inventor
李鹏程
赵燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202110087767.2A priority Critical patent/CN113806065A/en
Publication of CN113806065A publication Critical patent/CN113806065A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F5/00Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F5/06Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G06F5/065Partitioned buffers, e.g. allowing multiple independent queues, bidirectional FIFO's
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a data processing method, device and storage medium. A task set submitted by a thread pool is acquired together with the task identifier corresponding to that task set, the task set comprising at least one to-be-processed task. The mapping relation between the task set and its task identifier is stored. In response to the thread pool creating a task thread, at least one to-be-processed task in the task set matching the thread pool's task identifier is screened out based on the mapping relation. The thread pool then acquires the corresponding to-be-processed task(s), caches them in the blocking work queue corresponding to the thread pool, and waits for them to be executed by the corresponding task thread. By submitting and storing the to-be-processed tasks of each thread pool centrally, and having each thread pool fetch its tasks only after creating threads, the method avoids the risk of services becoming unavailable through memory overflow at each server's memory limit, and improves data-processing efficiency.

Description

Data processing method, device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for data processing, and a storage medium.
Background
In daily development work, thread pools play an important role in improving system concurrency and increasing throughput within an application. A thread pool operates in two main steps. First, tasks are added: tasks are put into a blocking queue (such as the bounded queue ArrayBlockingQueue, the unbounded queue LinkedBlockingQueue, the delay queue DelayQueue, or the synchronous hand-off queue SynchronousQueue); this stage stores the tasks and mainly consumes server memory. Second, tasks are executed; this stage mainly consumes the central processing unit (CPU). Memory, however, is a limited resource: adding and executing tasks increases memory consumption, and once the memory limit is reached, memory overflow makes the system unavailable; in the extreme case of continuously rising concurrency, the entire service goes down. If a bounded ArrayBlockingQueue is used, submitting tasks beyond the queue capacity triggers the rejection policy and the tasks cannot be executed normally; if an unbounded LinkedBlockingQueue is used and tasks keep being submitted, memory overflow results. Either way, with the existing thread-pool blocking queues, a high query rate per second can cause abnormal operation of the memory or the business logic.
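The bounded-queue failure mode described above can be reproduced with a few lines of JDK code. This is an illustrative sketch (class and method names are not from the application): a pool with a single thread and a one-slot ArrayBlockingQueue rejects the third submission under the default AbortPolicy.

```java
import java.util.concurrent.*;

public class BoundedQueueRejection {
    public static boolean submissionRejected() throws InterruptedException {
        CountDownLatch hold = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1)); // bounded queue of size 1
        boolean rejected = false;
        try {
            // Occupies the single worker thread until the latch opens.
            pool.execute(() -> { try { hold.await(); } catch (InterruptedException ignored) {} });
            pool.execute(() -> {});  // fills the one queue slot
            pool.execute(() -> {});  // queue full, no spare thread -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;         // default AbortPolicy throws
        } finally {
            hold.countDown();
            pool.shutdown();
        }
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("rejected = " + submissionRejected()); // prints "rejected = true"
    }
}
```

The unbounded LinkedBlockingQueue variant fails differently: submissions never get rejected, so under sustained load the queue itself consumes memory until overflow, which is the scenario the application targets.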
Disclosure of Invention
The embodiments of the present application provide a data processing method to solve the problem of services becoming unavailable due to memory overflow on each application server.
The method comprises the following steps:
acquiring a task set submitted by at least one thread pool and a task identifier corresponding to the task set, wherein the task set comprises at least one task to be processed;
storing the mapping relation between the task set and the task identifier corresponding to the task set;
responding to the thread pool to create a task thread, and screening out at least one task to be processed in the task set matched with the task identifier of the thread pool based on the mapping relation;
and the thread pool acquires the corresponding at least one to-be-processed task, caches it in the blocking work queue corresponding to the thread pool, and waits for it to be executed by the corresponding task thread.
Optionally, the task data included in the acquired task to be processed is serialized and converted into a byte sequence corresponding to the task to be processed.
Optionally, when the to-be-processed task is saved, the saving duration of the task is counted;
and in response to the saving duration exceeding a preset time threshold corresponding to the task set to which the task belongs, the operation of saving the task is stopped.
Optionally, in the task set whose mapping relation matches the task identifier received from the thread pool, if no to-be-processed task is obtained, the operation waits for a to-be-processed task until the task thread of the thread pool corresponding to the task set is interrupted, and/or, when the task set has remained empty for longer than a preset duration, acquisition of to-be-processed tasks from the task set is stopped.
Optionally, deserializing the byte sequence included in the acquired task to be processed into a task data object.
Optionally, the number of stored task sets, the number of to-be-processed tasks they contain, the number of thread pools and the number of task threads are queried, and the number of application servers hosting the thread pools is adjusted accordingly.
In another embodiment of the present invention, there is provided an apparatus for data processing, the apparatus including:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a task set submitted by at least one thread pool and a task identifier corresponding to the task set, and the task set comprises at least one task to be processed;
the storage module is used for storing the mapping relation between the task set and the task identifier corresponding to the task set;
the screening module is used for responding to the thread pool to establish a task thread and screening out at least one task to be processed in the task set matched with the task identifier of the thread pool based on the mapping relation;
and the cache module is used for the thread pool to acquire at least one corresponding task to be processed, cache the acquired task to be processed to the blocked work queue corresponding to the thread pool, and wait for the task to be executed by the corresponding task thread.
Optionally, the obtaining module is further configured to:
and serializing the task data contained in the acquired task to be processed, and converting the task data into a byte sequence corresponding to the task to be processed.
In another embodiment of the invention, a non-transitory computer-readable storage medium is provided, storing instructions that, when executed by a processor, cause the processor to perform the steps of the data processing method described above.
In another embodiment of the present invention, a terminal device is provided, which includes a processor configured to execute the steps of a data processing method as described above.
Based on the above embodiment: first, a task set submitted by at least one thread pool is acquired together with the task identifier corresponding to the task set, the task set including at least one to-be-processed task; then, the mapping relation between the task set and its task identifier is saved; further, in response to the thread pool creating a task thread, at least one to-be-processed task in the task set matching the thread pool's task identifier is screened out based on the mapping relation; finally, the thread pool acquires the corresponding to-be-processed task(s), caches them in the blocking work queue corresponding to the thread pool, and waits for them to be executed by the corresponding task thread. By submitting and storing the to-be-processed tasks of each thread pool centrally, and having each thread pool fetch its tasks after creating threads, the method avoids the risk of services becoming unavailable through memory overflow at each server's memory limit, and improves data-processing efficiency.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a schematic diagram illustrating a data interaction scenario for a method of data processing provided in embodiment 100 of the present application;
fig. 2 is a schematic diagram illustrating a method for data processing according to an embodiment 200 of the present application;
fig. 3 is a schematic diagram illustrating a specific flow of a method for data processing according to an embodiment 300 of the present application;
fig. 4 is a schematic diagram illustrating an apparatus for data processing according to an embodiment 400 of the present application;
fig. 5 shows a schematic diagram of a terminal device provided in embodiment 500 of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description, the claims, and the drawings, if any, are used to distinguish similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the invention described herein can, for example, operate in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements explicitly listed, but may include other steps or elements not explicitly listed or inherent to such a process, method, system, article, or apparatus.
Based on the problems in the prior art, the embodiments of the present application provide a data processing method mainly applicable to the field of computer technology. Fig. 1 is a schematic diagram of a data interaction scenario of a method for data processing according to embodiment 100 of the present application. The to-be-processed tasks of the servers that use thread pools are submitted centrally to a central task server; after a thread pool subsequently creates task threads, the corresponding to-be-processed tasks are identified and acquired from the central task server and executed. This improves the processing efficiency of each server's thread pool and reduces the risk of service unavailability caused by memory overflow.
Further, fig. 2 shows a schematic flow chart of a data processing method provided in embodiment 200 of the present application. The detailed steps are as follows:
step S11, acquiring a task set submitted by at least one thread pool and a task identifier corresponding to the task set, where the task set includes at least one to-be-processed task.
In this step, after each application server creates a thread pool for processing its tasks, it submits the task set composed of the at least one to-be-processed task that needs processing to the central task server, which is independent of the application servers. The task identifier corresponding to each task set is submitted along with the task set; each task identifier corresponds one-to-one with its task set.
Step S12, storing the mapping relationship between the task set and the task identifier corresponding to the task set.
In this step, after receiving the task sets and the corresponding task identifiers submitted by the thread pools, the central task server establishes a mapping relationship between the task sets and the corresponding task identifiers, and stores the task sets and the corresponding mapping relationships.
And step S13, creating a task thread in response to the thread pool, and screening out at least one task to be processed in the task set matched with the task identifier of the thread pool based on the mapping relation.
In this step, after the thread pool submits its tasks, it creates corresponding task threads based on the to-be-processed tasks to be executed. The central task server screens, from the stored task sets, the task set whose mapping relation matches the thread pool's task identifier, and obtains at least one to-be-processed task from it.
Step S14, the thread pool acquires at least one corresponding to-be-processed task, and caches the acquired to-be-processed task to the blocked work queue corresponding to the thread pool, to wait for being executed by the corresponding task thread.
In this step, each thread pool corresponds to a blocking work queue (workQueue) that stores the to-be-processed tasks waiting to be executed, so that the system can create task threads — for example, based on the number of CPU cores — to process the tasks cached in the work queue. The to-be-processed tasks obtained from the central task server for the current thread pool are cached in this blocking work queue to await execution by subsequent task threads.
As described above, based on this embodiment: first, a task set submitted by at least one thread pool is acquired together with the task identifier corresponding to the task set, the task set including at least one to-be-processed task; then, the mapping relation between the task set and its task identifier is stored; further, in response to the thread pool creating a task thread, at least one to-be-processed task in the task set matching the thread pool's task identifier is screened out based on the mapping relation; finally, the thread pool acquires the corresponding to-be-processed task(s), caches them in the blocking work queue corresponding to the thread pool, and waits for them to be executed by the corresponding task thread. By submitting and storing the to-be-processed tasks of each thread pool centrally, and having each thread pool fetch its tasks after creating threads, the method avoids the risk of services becoming unavailable through memory overflow at each server's memory limit, and improves data-processing efficiency.
Fig. 3 is a schematic diagram illustrating a specific flow of a data processing method according to an embodiment 300 of the present application. The specific process is as follows:
s201, a central task server is created.
Here, the central task server in the embodiment of the present application is configured to acquire and store the to-be-processed tasks that each application server needs to execute. Redis middleware is the preferred choice for the central task server. Like a conventional thread pool's task storage, Redis uses memory as its data-storage medium, so its read/write efficiency is extremely high — far beyond that of a relational database — and it provides rich data types. Redis also has the following characteristics:
(1) Fast response
Redis responds very quickly: it can perform roughly 110,000 read operations or 81,000 write operations per second, far faster than a database. Storing commonly used data in it can effectively improve system performance.
(2) Support for multiple data types
Redis supports data types such as strings, hashes, lists, sets and sorted sets. Strings can store basic Java data types, hashes can store objects, lists can store List objects, and so on, so the stored data type can easily be chosen according to the application's needs, which is convenient for development. The embodiments of the present application use the list data structure: since each thread pool executes a different type of work, the task identifier (key) can be customized according to the business logic, which facilitates reading.
(3) Atomic operations
All operations that the application servers perform on Redis are atomic, ensuring that when two application servers access the Redis server simultaneously, they obtain the updated (latest) values. When multiple task threads fetch to-be-processed tasks, only one task thread can obtain a task at a time because Redis is single-threaded, which guarantees data consistency.
(4) Persistence and disaster recovery
Redis currently offers two persistence mechanisms, RDB (Redis Database) and AOF (Append Only File), which write the data held in server memory to disk and prevent data loss when the server restarts. A Redis cluster can also be built: when one server node fails, sentinel mode switches to another node so that service continues. This keeps the data safe and prevents the task loss on downtime that traditional systems suffer.
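As a rough illustration of the central task server's role, the sketch below simulates it in memory: a map of blocking deques keyed by task identifier stands in for Redis lists (compare RPUSH/BLPOP), so the behaviour can be observed without a Redis instance. All class and method names here are illustrative assumptions, not from the application.

```java
import java.util.Map;
import java.util.concurrent.*;

// In-memory stand-in for the central task server: one blocking deque
// per task identifier, playing the role of a Redis list per key.
public class CentralTaskStore {
    private final Map<String, BlockingDeque<String>> sets = new ConcurrentHashMap<>();

    // Save a serialized task under its pool's task identifier (cf. RPUSH key value).
    public void submit(String taskName, String serializedTask) {
        sets.computeIfAbsent(taskName, k -> new LinkedBlockingDeque<>())
            .addLast(serializedTask);
    }

    // Fetch one pending task for this identifier, waiting up to the given
    // timeout (cf. BLPOP key timeout); returns null when none arrives in time.
    public String fetch(String taskName, long timeoutMs) throws InterruptedException {
        return sets.computeIfAbsent(taskName, k -> new LinkedBlockingDeque<>())
                   .pollFirst(timeoutMs, TimeUnit.MILLISECONDS);
    }

    // Pending-task count for one identifier (cf. LLEN key), used later for scaling.
    public int pending(String taskName) {
        BlockingDeque<String> q = sets.get(taskName);
        return q == null ? 0 : q.size();
    }
}
```

In a real deployment the map would be replaced by calls to a Redis client, and the atomicity guarantee of feature (3) would come from Redis itself rather than from the deque.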
S202, acquiring a task set submitted by at least one thread pool and a task identifier corresponding to the task set.
Here, taking the creation of a ThreadPoolExecutor as an example: one of the constructor parameters of the thread pool is a task blocking queue of type BlockingQueue<Runnable>, so a custom task blocking queue is required that implements this type and overrides the methods that save tasks.
Because the central server can be scaled out, the situation of insufficient space does not arise, so the two methods offer(E e) and put(E e) can be merged into one method that takes an additional String taskName parameter and submits the data directly to the central task server; taskName serves as the task identifier of the task set. The task identifiers taskName submitted by different thread pools are different.
S203, serializing the task data contained in the acquired task to be processed.
Here, the task data contained in the acquired to-be-processed task is serialized into a byte sequence corresponding to that task. Specifically, when storing data the central task server does not distinguish whether it is a thread pool's to-be-processed task or other data; everything stored there is just binary data. Therefore, when the thread pool submits a to-be-processed task, the task, expressed as a task object, should be serialized into a byte sequence — for example, serialized into a JSON string — and then submitted to the central task server.
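A minimal serialization round trip might look as follows. This sketch assumes JDK serialization for self-containment rather than the JSON string mentioned above (JSON would need a third-party library such as Jackson, which the application does not mandate); the Task class is illustrative.

```java
import java.io.*;

// Task object <-> byte sequence, as required before submitting to and
// after fetching from the central task server.
public class TaskCodec {
    public static class Task implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String payload;
        public Task(String payload) { this.payload = payload; }
    }

    // Serialize: task object -> byte sequence for the central store.
    public static byte[] serialize(Task t) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(t);
        }
        return bos.toByteArray();
    }

    // Deserialize: byte sequence fetched from the store -> task object.
    public static Task deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Task) in.readObject();
        }
    }
}
```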
S204, the central task server stores the mapping relation between the task set and the task identifier corresponding to the task set.
In this step, upon receiving a task set and its corresponding task identifier, the central task server stores the mapping relation between them. Further, the saving duration is counted while a to-be-processed task is being saved, and in response to the saving duration exceeding the preset time threshold corresponding to the task set to which the task belongs, the save operation is stopped. Specifically, the offer(E e, long timeout, TimeUnit unit) method is given an additional String taskName parameter and obtains the preset time threshold timeout; if the timeout expires, null is returned.
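The timed-save behaviour follows the standard BlockingQueue.offer(E, long, TimeUnit) contract. In the sketch below, a one-slot ArrayBlockingQueue stands in for a saving path that cannot complete within the threshold; the queue choice and names are illustrative assumptions.

```java
import java.util.concurrent.*;

public class TimedSave {
    // Attempt to save a task, giving up (false) once the preset
    // time threshold elapses without the save completing.
    public static boolean trySave(BlockingQueue<String> store, String task,
                                  long timeoutMs) throws InterruptedException {
        return store.offer(task, timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```

With a full store, the second save below blocks for the threshold and is then abandoned rather than hanging indefinitely, which is the point of counting the saving duration.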
S205, the thread pool creates task threads.
In this step, after the application server creates the thread pool, the pool contains no task threads by default; task threads are created to execute to-be-processed tasks only when such tasks arrive. In the addTask method, a Worker object is created for the submitted to-be-processed task, the thread factory is called to create a new task thread, the reference to that task thread is assigned to the Worker object's member variable thread, and the Worker object is added to the working set via workers.add(worker). After creation, the thread enters the ready (Runnable) state.
S206, screening out a task set corresponding to the thread pool in the central task server.
Here, in response to the thread pool creating a task thread, at least one to-be-processed task in the task set matching the thread pool's task identifier is screened out based on the mapping relation. Specifically, in the task set whose mapping relation matches the task identifier received from the thread pool, if no to-be-processed task is obtained, the operation waits for one until the task thread of the corresponding thread pool is interrupted, and/or, when the task set has remained empty for longer than a preset duration, acquisition from the task set is stopped. The to-be-processed tasks in the corresponding task set are obtained with methods such as poll(), take() and poll(long timeout, TimeUnit unit). Specifically: poll() returns null directly if the corresponding task set contains no to-be-processed task, and fetches an element if one is present; take() waits, if the task set is empty, until a to-be-processed task arrives or the corresponding task thread is interrupted; poll(long timeout, TimeUnit unit) fetches if the task set is not empty, returns null if the queue is empty and the timeout has elapsed, and waits if the queue is empty and the timeout has not yet elapsed. All three methods also need an additional task-identifier parameter, taskName, so that to-be-processed tasks are fetched from the specified task set on the central task server.
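The contracts of the three retrieval methods can be demonstrated on a plain LinkedBlockingQueue standing in for one task set; the taskName parameter the text adds is omitted in this local sketch, and the names are illustrative.

```java
import java.util.concurrent.*;

public class FetchSemantics {
    // Returns { poll-on-empty, timed-poll-on-empty, take-after-put }.
    public static String[] demo() throws InterruptedException {
        BlockingQueue<String> taskSet = new LinkedBlockingQueue<>();

        String a = taskSet.poll();                          // empty set -> null immediately
        String b = taskSet.poll(50, TimeUnit.MILLISECONDS); // waits 50 ms, then null
        taskSet.put("pending-task");
        String c = taskSet.take();                          // element available -> returned at once
        return new String[] { a, b, c };
    }
}
```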
And S207, the thread pool acquires at least one corresponding task to be processed.
Here, the thread pool acquires the tasks to be processed in the corresponding task set stored on the central task server.
And S208, the thread pool carries out deserialization on the task data contained in the acquired task to be processed.
Here, the byte sequence contained in the acquired to-be-processed task is deserialized into a task data object. Specifically, the data each application server's thread pool fetches from the central task server is not directly usable data but a character string, so it should be deserialized back into a task object that the thread pool can recognize.
And S209, caching the acquired to-be-processed task to a blocking work queue corresponding to the thread pool, and waiting to be executed by a corresponding task thread.
In this step, each thread pool corresponds to a task cache queue, workQueue, which stores the to-be-processed tasks awaiting execution. workQueue is of type BlockingQueue<Runnable> and usually takes one of three forms: ArrayBlockingQueue (an array-based FIFO queue whose size must be specified on creation); LinkedBlockingQueue (a linked-list-based FIFO queue that defaults to Integer.MAX_VALUE if no size is specified on creation); or SynchronousQueue (a special queue that holds no submitted tasks but directly creates a thread to execute each new task).
Furthermore, each task thread first executes the to-be-processed task, firstTask, passed in through its constructor; after runTask() has executed the firstTask, the thread keeps taking new to-be-processed tasks from the blocking work queue via getTask() inside a while loop and executes them. getTask is a method of the ThreadPoolExecutor class: if the number of task threads in the current pool is greater than the core pool size corePoolSize, or idle keep-alive is enabled for core-pool task threads, poll is called to fetch a to-be-processed task, waiting for a bounded time; if no task is fetched within that time, null is returned.
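A simplified version of this worker loop — run firstTask, then timed polls until one returns null — might look as follows. It is a sketch of the idea only, not ThreadPoolExecutor's actual Worker/getTask code; names and the keep-alive value are illustrative.

```java
import java.util.concurrent.*;

public class WorkerLoop {
    // Executes firstTask, then drains the work queue via timed polls
    // (the getTask() role); returns how many tasks were executed.
    public static int drain(Runnable firstTask, BlockingQueue<Runnable> workQueue,
                            long keepAliveMs) throws InterruptedException {
        int executed = 0;
        Runnable task = firstTask;
        while (task != null) {
            task.run();
            executed++;
            // Timed poll: null after keepAliveMs of idleness ends the loop.
            task = workQueue.poll(keepAliveMs, TimeUnit.MILLISECONDS);
        }
        return executed;
    }
}
```

With two queued tasks plus a firstTask, the loop executes all three and then exits when the timed poll comes back empty.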
S210, the number of the servers where the thread pool is located is adjusted based on the central task server.
Here, the number of stored task sets, the number of to-be-processed tasks they contain, the number of thread pools and the number of task threads are queried, and the number of application servers hosting the thread pools is adjusted: the server count is reduced or increased according to the number of pending tasks. This step may be performed throughout the data processing.
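One plausible way to turn the queried counts into a server count is ceiling division of pending tasks by per-server capacity, clamped to a minimum. The formula below is an assumption for illustration; the application does not specify a sizing rule.

```java
public class ServerSizing {
    // servers = max(minServers, ceil(pendingTasks / perServerCapacity))
    public static int serversNeeded(long pendingTasks, long perServerCapacity, int minServers) {
        long needed = (pendingTasks + perServerCapacity - 1) / perServerCapacity; // ceiling division
        return (int) Math.max(minServers, needed);
    }
}
```

For example, 950 pending tasks at a capacity of 100 tasks per server yields 10 servers, while a near-empty backlog falls back to the configured minimum.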
The present application realizes the data processing method based on the above steps. The to-be-processed tasks that the thread pools of the application servers need to execute are extracted and stored on an independent central task server, and the corresponding to-be-processed tasks are fetched only after task threads are created, which avoids the risk that a memory overflow renders the service unavailable due to the memory upper limit of each application server. Meanwhile, the status of the submitted to-be-processed tasks can be queried through the central task server so that the number of application servers can be adjusted appropriately, preventing a waste of resources. In addition, the central task server supports scaling out and, as a shared resource, can be accessed by different business systems.
Based on the same inventive concept, the embodiment 400 of the present application further provides an apparatus for data processing, where as shown in fig. 4, the apparatus includes:
an obtaining module 31, configured to obtain a task set submitted by at least one thread pool and a task identifier corresponding to the task set, where the task set includes at least one to-be-processed task;
a storage module 32, configured to store a mapping relationship between the task set and the task identifier corresponding to the task set;
the screening module 33 is configured to, in response to the thread pool creating a task thread, screen out at least one to-be-processed task in the task set matching the task identifier of the thread pool based on the mapping relationship;
the cache module 34 is configured to obtain at least one corresponding to-be-processed task for the thread pool and cache the obtained to-be-processed task in the blocking work queue corresponding to the thread pool, where it waits to be executed by a corresponding task thread.
In this embodiment, for the specific functions and interaction manners of the obtaining module 31, the storage module 32, the screening module 33, and the cache module 34, reference may be made to the description of the embodiment corresponding to fig. 1, which is not repeated here.
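The interaction of the four modules can be sketched with an in-memory map standing in for the central task server's storage. All class and method names below are illustrative, not from this application:

```java
import java.util.*;
import java.util.concurrent.*;

public class CentralTaskServerSketch {
    // Obtaining + storage modules: map each task identifier to its submitted task set.
    private final Map<String, Queue<Runnable>> taskSets = new ConcurrentHashMap<>();

    void submitTaskSet(String taskId, List<Runnable> tasks) {
        taskSets.computeIfAbsent(taskId, k -> new ConcurrentLinkedQueue<>())
                .addAll(tasks);
    }

    // Screening module: when a thread pool creates a task thread, look up the
    // task set matching its identifier and hand back up to 'limit' pending tasks.
    List<Runnable> fetchTasks(String taskId, int limit) {
        Queue<Runnable> set = taskSets.getOrDefault(taskId, new ConcurrentLinkedQueue<>());
        List<Runnable> out = new ArrayList<>();
        Runnable t;
        while (out.size() < limit && (t = set.poll()) != null) out.add(t);
        return out;
    }

    public static void main(String[] args) {
        CentralTaskServerSketch server = new CentralTaskServerSketch();
        server.submitTaskSet("pool-A", List.of(
                () -> System.out.println("task 1"),
                () -> System.out.println("task 2")));
        // Cache module: the thread pool would enqueue the fetched tasks into its
        // blocking work queue; here they are simply run in order.
        for (Runnable r : server.fetchTasks("pool-A", 10)) r.run();
    }
}
```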
Optionally, the obtaining module 31 is further configured to:
and serializing the task data contained in the acquired task to be processed, and converting the task data into a byte sequence corresponding to the task to be processed.
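Serializing task data into a byte sequence (and deserializing it back into a task data object, per claim 5) can be done with standard Java object serialization. A minimal sketch follows; the TaskData record and its fields are hypothetical:

```java
import java.io.*;

public class TaskSerialization {
    // Hypothetical payload carried by a to-be-processed task.
    record TaskData(String name, int payload) implements Serializable {}

    // Convert the task data object into a byte sequence.
    static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Restore the byte sequence into a task data object.
    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        TaskData original = new TaskData("resize-image", 7);
        byte[] bytes = serialize(original);            // byte sequence sent to the server
        TaskData restored = (TaskData) deserialize(bytes);
        System.out.println(restored.equals(original)); // prints true
    }
}
```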
As shown in fig. 5, another embodiment 500 of the present application further provides a terminal device, which includes a processor 501 configured to execute the steps of the data processing method. As can also be seen from fig. 5, the terminal device provided by the above embodiment further includes a non-transitory computer-readable storage medium 502 having stored thereon a computer program which, when executed by the processor 501, performs the steps of the above-described method of data processing. In practice, the terminal device may be one or more computers, as long as it includes the computer-readable medium and the processor.
In particular, the storage medium may be a general-purpose storage medium, such as a removable disk, a hard disk, a flash memory, and the like; when the computer program on the storage medium is executed, the steps of the data processing method can be performed. In practical applications, the computer-readable medium may be included in the apparatus/device/system described in the above embodiments, or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, enable performance of the steps of the method of data processing described above.
According to embodiments disclosed herein, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example and without limitation: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing, without limiting the scope of the present disclosure. In the embodiments disclosed herein, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowchart and block diagrams in the figures of the present application illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments disclosed herein. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations are not explicitly recited in the present application. In particular, the features recited in the various embodiments and/or claims of the present application may be combined and/or coupled in various ways without departing from the spirit and teachings of the present application, all of which fall within the scope of the present disclosure.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still, within the technical scope disclosed in the present application, modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent replacements of some of their technical features; such changes, variations, and substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application and are intended to be covered by the appended claims. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of data processing, comprising:
acquiring a task set submitted by at least one thread pool and a task identifier corresponding to the task set, wherein the task set comprises at least one task to be processed;
storing the mapping relation between the task set and the task identifier corresponding to the task set;
responding to the thread pool to create a task thread, and screening out at least one task to be processed in the task set matched with the task identifier of the thread pool based on the mapping relation;
and the thread pool acquires the corresponding at least one task to be processed, caches the acquired task to be processed in the blocking work queue corresponding to the thread pool, and waits for it to be executed by the corresponding task thread.
2. The method according to claim 1, wherein the step of obtaining the task set submitted by at least one thread pool and the task identifier corresponding to the task set comprises:
and serializing the task data contained in the acquired task to be processed, and converting the task data into a byte sequence corresponding to the task to be processed.
3. The method of claim 1, wherein the step of saving the mapping relationship between the task set and the task identifier corresponding to the task set comprises:
counting the storage duration of the task to be identified when the task to be identified is stored;
and stopping the operation of saving the task to be identified in response to the saving duration exceeding a preset time threshold corresponding to the task set to which the task to be identified belongs.
4. The method according to claim 1, wherein the step of filtering out at least one of the tasks to be processed in the task set matching the task identifier of the thread pool based on the mapping relationship comprises:
and if no task to be processed is obtained from the task set that, per the mapping relationship, matches the task identifier sent by the thread pool, waiting to obtain a task to be processed until the task thread of the thread pool corresponding to the task set is interrupted, and/or stopping obtaining tasks to be processed from the task set when the duration for which the task set remains empty exceeds a preset duration.
5. The method of claim 2, wherein the step of the thread pool obtaining the corresponding at least one task to be processed comprises:
and deserializing the byte sequence contained in the acquired task to be processed into a task data object.
6. The method of claim 1, further comprising:
and inquiring the number of the saved task sets, the number of the tasks to be processed contained in the task sets, the number of the thread pools and the number of the task threads, and adjusting the number of application servers where the thread pools are located.
7. An apparatus for data processing, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a task set submitted by at least one thread pool and a task identifier corresponding to the task set, and the task set comprises at least one task to be processed;
the storage module is used for storing the mapping relation between the task set and the task identifier corresponding to the task set;
the screening module is used for responding to the thread pool to establish a task thread and screening out at least one task to be processed in the task set matched with the task identifier of the thread pool based on the mapping relation;
and the cache module is used for the thread pool to acquire the corresponding at least one task to be processed, cache the acquired task to be processed in the blocking work queue corresponding to the thread pool, and wait for it to be executed by the corresponding task thread.
8. The apparatus of claim 7, wherein the obtaining module is further configured to:
and serializing the task data contained in the acquired task to be processed, and converting the task data into a byte sequence corresponding to the task to be processed.
9. A non-transitory computer readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps of a method of data processing according to any one of claims 1 to 6.
10. A terminal device, comprising a processor configured to perform the steps of a method of data processing according to any one of claims 1 to 6.
CN202110087767.2A 2021-01-22 2021-01-22 Data processing method, device and storage medium Pending CN113806065A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110087767.2A CN113806065A (en) 2021-01-22 2021-01-22 Data processing method, device and storage medium


Publications (1)

Publication Number Publication Date
CN113806065A true CN113806065A (en) 2021-12-17

Family

ID=78892793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110087767.2A Pending CN113806065A (en) 2021-01-22 2021-01-22 Data processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113806065A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115237609A (en) * 2022-09-22 2022-10-25 深圳市优网科技有限公司 Method, device and storage medium for user information quick association backfill
CN115237609B (en) * 2022-09-22 2022-12-27 深圳市优网科技有限公司 Method, device and storage medium for user information quick association backfill

Similar Documents

Publication Publication Date Title
CN109582455B (en) Multithreading task processing method and device and storage medium
CN106802826B (en) Service processing method and device based on thread pool
CN108280150B (en) Distributed asynchronous service distribution method and system
US9798595B2 (en) Transparent user mode scheduling on traditional threading systems
EP2363806A1 (en) Connection handler and method for providing applications with heterogeneous connection objects
CN109471711B (en) Task processing method and device
CN100538646C (en) A kind of method and apparatus of in distributed system, carrying out the SQL script file
CN108459913B (en) Data parallel processing method and device and server
US9038093B1 (en) Retrieving service request messages from a message queue maintained by a messaging middleware tool based on the origination time of the service request message
CN109842621A (en) A kind of method and terminal reducing token storage quantity
CN111984402A (en) Unified scheduling monitoring method and system for thread pool
CN113806065A (en) Data processing method, device and storage medium
US9558035B2 (en) System and method for supporting adaptive busy wait in a computing environment
CN111752961A (en) Data processing method and device
US11743200B2 (en) Techniques for improving resource utilization in a microservices architecture via priority queues
CN106815061B (en) Service processing method and device
CN113419832A (en) Processing method and device of delay task and terminal
CN102867018A (en) Method for analogue signal communication between threads in database system
CN107402752B (en) Timing triggering method and device for application
CN112685334A (en) Method, device and storage medium for block caching of data
CN112100186A (en) Data processing method and device based on distributed system and computer equipment
CN116719626B (en) Multithreading parallel processing method and processing system for splitting mass data
CN111949687B (en) Distributed database architecture based on shared memory and multiple processes and implementation method thereof
CN117539451B (en) Flow execution method, device, electronic equipment and storage medium
CN116009949B (en) Numerical value acquisition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination