CN115576719A - Data processing method and device, intelligent terminal and storage medium - Google Patents


Info

Publication number
CN115576719A
CN115576719A (application CN202211363155.2A)
Authority
CN
China
Prior art keywords
data
thread
processed
thread pool
message queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211363155.2A
Other languages
Chinese (zh)
Inventor
桑文锋
曹犟
刘耀洲
付力力
熊磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sensors Data Network Technology Beijing Co Ltd
Original Assignee
Sensors Data Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sensors Data Network Technology Beijing Co Ltd filed Critical Sensors Data Network Technology Beijing Co Ltd
Priority to CN202211363155.2A
Publication of CN115576719A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5011Pool
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the application disclose a data processing method and apparatus, an intelligent terminal, and a storage medium. The method comprises: acquiring attribute information of data to be processed and distributing that data to a matched message queue based on the attribute information, where each message queue is correspondingly provided with a consuming thread and a thread pool, and at least one processing thread is arranged in the thread pool; calculating the amount of pending data in the message queue and determining the number of processing threads in the thread pool based on that amount; and, based on the thread count, having the consuming thread pull a certain amount of pending data from the message queue into the thread pool, where it is processed by the processing threads. In this way, the data in the message queue is processed while avoiding both data congestion and wasted resources.

Description

Data processing method and device, intelligent terminal and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data processing method and apparatus, an intelligent terminal, and a storage medium.
Background
The message queue is an important component in a distributed system, mainly addressing application decoupling, asynchronous messaging, traffic shaping (peak clipping), and similar problems. Many message queues are in common use, such as ActiveMQ, RabbitMQ, ZeroMQ, and Kafka, and the data in them can be consumed and processed by threads.
In the prior art, all data is placed into a single message queue and consumed by a single-threaded or multi-threaded program. Because the processing capacity of the threads is fixed, congestion occurs when the data flow is too large to be processed in time; when the data flow is small, the threads have idle capacity and computing resources are wasted.
Disclosure of Invention
The embodiments of the application provide a data processing method and apparatus, an intelligent terminal, and a storage medium, which can be used in the financial field or other related fields, to solve the problems described in the background.
In a first aspect, an embodiment of the present application provides a data processing method, where the method includes:
acquiring attribute information of data to be processed, and distributing the data to be processed to a matched message queue based on the attribute information, wherein the message queue is correspondingly provided with a consuming thread and a thread pool, and at least one processing thread is arranged in the thread pool;
calculating the data volume of the data to be processed in the message queue, and determining the thread number of the processing threads in the thread pool based on the data volume;
based on the thread number, the consumption thread pulls a certain amount of data to be processed from the message queue to the thread pool, and the data to be processed pulled to the thread pool is processed by the processing thread in the thread pool.
In some embodiments, calculating the amount of pending data in the message queue and determining the number of processing threads in the thread pool based on that amount includes:
determining whether the amount of pending data in the message queue has reached a preset peak value;
if so, increasing the number of threads in the thread pool according to a preset thread-increase rule;
if not, keeping the current number of threads in the thread pool.
In some embodiments, before the consuming thread pulls an amount of pending data from the message queue into the thread pool based on the number of threads, the method further comprises:
determining whether the number of threads in the thread pool has reached a preset maximum;
if so, triggering the rejection mechanism of the thread pool and stopping the consuming thread from pulling pending data from the message queue;
if not, having the consuming thread pull a certain amount of pending data from the message queue into the thread pool based on the number of threads.
In some embodiments, after the triggering initiates a rejection mechanism of the thread pool, the method further comprises:
determining whether pending data remains in the thread pool;
if so, the consuming thread stops pulling pending data from the message queue;
if not, determining the number of pending items in the thread pool, and having the consuming thread pull a certain amount of pending data from the message queue into the thread pool based on the number of threads and the number of pending items.
In some embodiments, after the triggering initiates a rejection mechanism of the thread pool, the method further comprises:
monitoring the amount of pending data in the thread pool within a preset time, and closing the rejection mechanism when that amount falls below a preset quantity.
In some embodiments, when a processing thread in an idle state exists in the thread pool, the thread pool automatically destroys the processing thread in the idle state.
In some embodiments, the thread pool holds only one of the processing threads when there is no data pending in the message queue.
In a second aspect, an embodiment of the present application further provides a data processing apparatus, where the apparatus includes:
a distribution unit, configured to acquire attribute information of data to be processed and distribute the data to the matched message queue based on the attribute information, wherein the message queue is correspondingly provided with a consuming thread and a thread pool, and at least one processing thread is arranged in the thread pool;
a computing unit, configured to calculate the amount of pending data in the message queue and determine the number of processing threads in the thread pool based on that amount;
and a processing unit, configured to have the consuming thread pull a certain amount of pending data from the message queue into the thread pool based on the thread count, and to have the processing threads in the thread pool process the pending data pulled into it.
In a third aspect, an embodiment of the present application further provides an intelligent terminal, which includes a memory and a processor, where the memory is used to store instructions and data, and the processor is used to execute the data processing method described above.
In a fourth aspect, an embodiment of the present application further provides a storage medium, where a plurality of instructions are stored, and the instructions are adapted to be loaded by a processor to execute the data processing method described above.
According to the data processing method of the embodiments of the application, pending data is distributed to a matched message queue according to its attribute information. Each message queue is correspondingly provided with a consuming thread and a thread pool containing processing threads, and the number of processing threads in the thread pool is determined by the amount of pending data in the message queue. During processing, the consuming thread pulls a certain amount of pending data from the message queue into the thread pool, where it is handled by the processing threads. Because the pending data is distributed in advance, the data in any one message queue is of the same type, which avoids the congestion caused when some data is processed much more slowly than the rest. Because the number of processing threads is driven by the amount of pending data, the data can be processed in time, avoiding congestion at processing peaks and wasted resources at processing valleys.
Drawings
To illustrate the technical solutions in the embodiments of the application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the embodiments of the present application, it should be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present application, "a plurality" means two or more unless specifically defined otherwise.
The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for the purpose of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known processes have not been described in detail so as not to obscure the description of the embodiments of the present application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed in the embodiments herein.
Embodiments of the present application provide a data processing method and apparatus, an intelligent terminal, and a storage medium, which will be described in detail below.
Referring to fig. 1, fig. 1 is a flowchart of a data processing method according to an embodiment of the present application, including the following contents:
101. Acquire attribute information of the data to be processed and distribute the data to a matched message queue based on that information, where the message queue is correspondingly provided with a consuming thread and a thread pool, and at least one processing thread is arranged in the thread pool.
In an embodiment of the present application, the attribute information may include the data's attributes, its source, its processing time, and the like; it may be any one of these, or any combination of them.
Before data distribution and processing, a plurality of message queues are set up in advance according to the different attribute information of the data. Each message queue is correspondingly provided with a consuming thread and a thread pool, and a processing thread is arranged in each thread pool. During processing, the consuming thread pulls pending data from the message queue into the corresponding thread pool, the processing threads in that pool process it, and the processed data is stored in a preset storage location.
Optionally, each message queue is correspondingly provided with a consumption thread and a thread pool, and at an initial stage, only one processing thread is arranged in the thread pool. In the process of data processing, the consuming thread pulls the data to be processed from the corresponding message queue to the corresponding thread pool for data processing.
For example, the set message queues include a message queue a, the message queue a is correspondingly provided with a consuming thread B and a thread pool C, and the thread pool C is provided with a processing thread D at an initial stage. Then, in the process of data processing, the consuming thread B pulls the data to be processed in the message queue a from the message queue a to the thread pool C, and the processing thread D in the thread pool C performs data processing on the data to be processed.
Optionally, each message queue is correspondingly provided with a plurality of consumption threads and a plurality of thread pools, and at an initial stage, each thread pool is provided with only one processing thread. In the process of data processing, each consuming thread pulls data to be processed from the message queue to a corresponding thread pool for data processing.
For example, the set message queue includes a message queue a, the message queue a is correspondingly provided with two consumption threads B1 and B2, the message queue a is correspondingly provided with a thread pool C1 corresponding to the consumption thread B1, the message queue a is correspondingly provided with a thread pool C2 corresponding to the consumption thread B2, the thread pool C1 is provided with one processing thread D1 at the initial stage, and the thread pool C2 is provided with one processing thread D2 at the initial stage. Then, in the process of data processing, the consuming thread B1 pulls the data to be processed in the message queue a from the message queue a to the thread pool C1, and the processing thread D1 in the thread pool C1 performs data processing on the data to be processed in the thread pool C1. The consumption thread B2 pulls the data to be processed in the message queue A from the message queue A to the thread pool C2, and the data to be processed in the thread pool C2 is processed by the processing thread D2 in the thread pool C2.
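The queue/consumer/pool arrangement in the examples above can be sketched in Java (the document itself proposes a Java module for this role). This is a minimal, illustrative sketch rather than the patent's implementation: the class and method names are invented, and the business logic is stubbed as an uppercase transform.

```java
import java.util.concurrent.*;

// Illustrative sketch of one message queue A with a consuming thread B,
// a thread pool C, and (initially) a single processing thread D. Names
// and the uppercase "business logic" are assumptions for demonstration.
class QueueWorker {
    private final BlockingQueue<String> messageQueue = new LinkedBlockingQueue<>();
    // Initial stage: the thread pool holds exactly one processing thread.
    private final ExecutorService pool = Executors.newFixedThreadPool(1);

    void offer(String data) {
        messageQueue.offer(data);
    }

    // One step of the consuming thread's loop: pull a pending item from the
    // message queue and hand it to a processing thread in the pool.
    String pullAndProcess() {
        try {
            String data = messageQueue.take();
            Callable<String> task = () -> process(data);
            return pool.submit(task).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    private String process(String data) {
        return data.toUpperCase(); // stand-in for the real business logic
    }

    void shutdown() {
        pool.shutdown();
    }
}
```

In a real deployment the consuming thread would loop over `pullAndProcess` without blocking on each result; blocking on `get()` here just keeps the sketch deterministic.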
In the data distribution stage, pending data is distributed to the matched message queue according to its attribute information, and the consuming thread corresponding to that queue pulls it into the corresponding thread pool for processing.
Optionally, the pending data may be distributed by Apache Flink or another distribution program. Flink is an open-source distributed big-data processing engine and computing framework that can uniformly process unbounded and bounded data streams and perform stateful or stateless computation.
When distributing data, the pending data can be classified in advance according to its attribute information, a corresponding category label can be attached to each item based on the classification result, and the category labels that each message queue may store can be configured; the data is then distributed to the corresponding message queue according to its category label.
For example, the preset data categories include category A, category B, and category C, and message queues A, B, and C are provided correspondingly: queue A matches category A and stores its pending data, queue B matches category B, and queue C matches category C. Before a piece of pending data is distributed, its attribute information is acquired and analyzed; if it is determined to belong to category A, it is classified into category A and given a category-A label. When the data is distributed, it is placed into message queue A.
When distributing data, a topic may also be set for each message queue; based on that configuration, when the pending data is classified, the topic matching it is determined from its attribute information and the data is distributed to the message queue matching that topic. Further examples are omitted here.
Because the pending data is distributed before processing, the data in each message queue can be regarded as the same type, and data of the same type is processed at the same or similar speed. This avoids large disparities in processing speed, the congestion that results when some data is processed too slowly, and the data loss that results when data is not processed in time.
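As a concrete illustration of the labeling-and-routing step, the following hedged sketch maps category labels to message queues; the `classify` stub and the colon-prefixed label format are assumptions for demonstration, not part of the patent.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical dispatcher: each category label (A, B, C, ...) has its own
// message queue, and pending data is routed by the label derived from its
// attribute information. The "A:payload" label format is an assumption.
class Dispatcher {
    private final Map<String, BlockingQueue<String>> queues = new HashMap<>();

    Dispatcher(String... categories) {
        for (String c : categories) {
            queues.put(c, new LinkedBlockingQueue<>());
        }
    }

    // Stand-in for analyzing attribute information and assigning a label.
    String classify(String data) {
        return data.substring(0, data.indexOf(':'));
    }

    // Route the pending item to the message queue matching its label.
    void dispatch(String data) {
        queues.get(classify(data)).offer(data);
    }

    int pending(String category) {
        return queues.get(category).size();
    }
}
```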
Optionally, a Java module may be created that is responsible for the consuming threads, the thread pools, and the implementation of the specific business logic.
102. Calculate the amount of pending data in the message queue, and determine the number of processing threads in the thread pool based on that amount.
In an embodiment of the present application, the number of processing threads in the thread pool may be determined by the amount of pending data stored in the corresponding message queue; the processing threads within one thread pool are identical.
In the initial stage the thread pool may contain one processing thread. When the amount of pending data stored in the message queue exceeds a preset first data amount, one processing thread is added; when it also exceeds a preset second data amount, another processing thread is added; and so on, until the number of processing threads in the thread pool reaches the settable maximum, at which point no more are added.
Conversely, when the thread pool contains multiple processing threads and the amount of pending data stored in the message queue falls below the preset first data amount, one processing thread is destroyed; when it falls below the preset second data amount as well, another is destroyed; and so on, until only one processing thread remains in the pool, at which point destruction stops.
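The stepwise scaling described above can be expressed as a small sizing function. The threshold values and the maximum below are illustrative assumptions; the patent does not give concrete numbers.

```java
// Hedged sketch of the thread-count rule: starting from one processing
// thread, each preset data-amount threshold the backlog exceeds adds one
// thread, capped at the settable maximum. Threshold values are assumptions.
class ThreadSizer {
    static int targetThreads(long backlog, long[] thresholds, int maxThreads) {
        int threads = 1; // initial stage: a single processing thread
        for (long t : thresholds) {
            if (backlog > t) {
                threads++;
            }
        }
        return Math.min(threads, maxThreads);
    }
}
```

Shrinking works symmetrically: when the backlog falls back below a threshold, the recomputed target drops, and the surplus (idle) threads can be destroyed, down to the single initial thread.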
Optionally, in some embodiments, calculating the amount of pending data in the message queue and determining the number of processing threads in the thread pool based on that amount includes: determining whether the amount of pending data in the message queue has reached a preset peak value; if so, increasing the number of threads in the thread pool according to a preset thread-increase rule; if not, keeping the current number of threads in the thread pool.
For example, when the amount of pending data stored in the message queue reaches the preset peak value, processing threads are automatically added to the thread pool up to the maximum number that can be added; otherwise no threads are added and the original thread count is kept.
For example, when the amount of pending data exceeds the preset first data amount, one processing thread is added; when it also exceeds the preset second data amount, another is added; and so on, so that when the amount of pending data reaches the preset peak value, the thread pool has grown to the maximum number of processing threads that can be added.
Optionally, in some embodiments, calculating the amount of pending data in the message queue and determining the number of processing threads in the thread pool based on that amount includes: determining whether the amount of pending data in the message queue has reached a preset valley value; if so, destroying processing threads in the thread pool according to a preset thread-destruction rule to reduce the thread count; if not, keeping the current number of threads in the thread pool.
For example, when the amount of pending data stored in the message queue reaches the preset valley value, processing threads in the thread pool are automatically destroyed down to the maximum number that can be destroyed; otherwise no threads are destroyed and the original thread count is kept.
For example, when the amount of pending data falls below the preset first data amount, one processing thread is destroyed; when it falls below the preset second data amount as well, another is destroyed; and so on, so that when the amount of pending data reaches the preset valley value, destruction stops and only one processing thread remains in the pool.
Optionally, in some embodiments, when there is a processing thread in the idle state in the thread pool, the thread pool automatically destroys the processing thread in the idle state.
For example, when the amount of pending data stored in the message queue falls below the preset first data amount, the processing threads in the idle state are identified and destroyed. Correspondingly, when it falls below the preset second data amount as well, the idle processing threads are identified and destroyed again, and so on; when the amount of pending data reaches the preset valley value, destruction stops and only one processing thread remains in the pool.
Optionally, in some embodiments, when there is no pending data in the message queue, the thread pool only holds one processing thread.
103. Based on the thread count, the consuming thread pulls a certain amount of pending data from the message queue into the thread pool, where it is processed by the processing threads.
Optionally, before the consuming thread pulls pending data from the message queue into the thread pool, in addition to calculating the amount of pending data in the message queue and determining the number of processing threads from it, the data processing speed of the thread pool is also obtained, and a certain amount of pending data is pulled from the message queue into the thread pool based on the processing speed, the data amount, and the thread count.
One way to do this is to determine, from the processing speed, the data amount, and the thread count, a target pull quantity of pending data to pull into the thread pool at one time, and then pull that quantity of pending data from the message queue into the thread pool.
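One plausible way to combine processing speed, data volume, and thread count into a target pull quantity is sketched below. The patent names the three inputs but not the formula, so the formula here is an assumption: pull roughly what the pool can process within a time window, never more than the backlog.

```java
// Hedged sketch: targetPull = min(backlog, threads * perThreadRate * window),
// with a floor of one item whenever any backlog exists. The formula itself
// is an assumption; the patent only names the three inputs.
class PullPlanner {
    static long targetPull(long backlog, int threads,
                           double itemsPerThreadPerSecond, double windowSeconds) {
        if (backlog == 0) {
            return 0;
        }
        long capacity = (long) (threads * itemsPerThreadPerSecond * windowSeconds);
        return Math.min(backlog, Math.max(1, capacity));
    }
}
```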
Optionally, in some embodiments, before the consuming thread pulls a certain amount of data to be processed from the message queue to the thread pool based on the number of threads, the method includes: judging whether the numerical value of the number of threads in the thread pool reaches a preset maximum value or not; if yes, triggering a rejection mechanism of the thread pool, and stopping the consumption thread from pulling the data to be processed from the message queue; if not, based on the number of the threads, the consuming thread pulls a certain amount of data to be processed from the message queue to the thread pool.
The preset maximum can be understood as the number of processing threads active in the thread pool at a data processing peak, which equals the maximum number the thread pool can be configured with. When the number of processing threads reaches the preset maximum, data processing is at a peak and the thread pool is saturated, so the rejection mechanism of the thread pool is triggered. Under the rejection mechanism, the consuming thread no longer pulls pending data from the message queue into the thread pool, or any pending data it does pull cannot enter the thread pool.
In the embodiments of the application, each thread pool has its own rejection mechanism; whether that mechanism is opened or closed is determined by the pool's own thread count, processing speed, and similar factors, independently of other thread pools.
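The trigger condition can be sketched as a per-pool guard consulted before each pull. The class is illustrative; the latching behavior (staying closed until explicitly reset) is an assumption consistent with the per-pool mechanism described above.

```java
// Hedged sketch of the per-thread-pool rejection trigger: once the number
// of processing threads reaches the preset maximum, the mechanism latches
// on and the consuming thread must stop pulling from the message queue.
class RejectionGuard {
    private final int maxThreads;
    private boolean rejecting = false;

    RejectionGuard(int maxThreads) {
        this.maxThreads = maxThreads;
    }

    // Consulted by the consuming thread before each pull.
    boolean mayPull(int currentThreads) {
        if (currentThreads >= maxThreads) {
            rejecting = true; // pool saturated: data-processing peak
        }
        return !rejecting;
    }

    boolean isRejecting() {
        return rejecting;
    }
}
```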
Optionally, in some embodiments, after the rejection mechanism of the thread pool is triggered, the method includes: judging whether data to be processed remains in the thread pool; if so, the consuming thread stops pulling data to be processed from the message queue; if not, determining the amount of data to be processed in the thread pool, and, based on the thread number and that amount, the consuming thread pulls a corresponding quantity of data to be processed from the message queue to the thread pool.
When the number of processing threads in the thread pool reaches the preset maximum value, indicating that data processing for the corresponding message queue is at a peak, the rejection mechanism of the thread pool is started. Under the rejection mechanism, the working state of the consuming thread is determined by judging whether data to be processed remains in the thread pool.
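This decision can be reduced to a toy sketch (the naming is ours; the patent only describes the behavior in prose):

```python
def consumer_state_under_rejection(pool_pending_count):
    """While the pool still holds unprocessed data, the consuming thread
    stays stopped; once the pool drains, it may pull again."""
    return "stopped" if pool_pending_count > 0 else "pulling"
```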
Optionally, in some embodiments, after the rejection mechanism of the thread pool is triggered, the method includes: monitoring, within a preset time, the amount of data to be processed in the thread pool, and closing the rejection mechanism when that amount falls below a preset count.
Under the rejection mechanism of the thread pool, it is judged whether data to be processed remains in the thread pool. While such data remains, the consuming thread stops pulling data to be processed from the message queue to the thread pool, and the change in the amount of data to be processed in the thread pool is monitored. When the data to be processed in the thread pool has been completely processed, the rejection mechanism is closed, the consuming thread pulls a new batch of data to be processed from the message queue to the thread pool, and the processing threads in the thread pool process the new batch.
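The monitoring step can be sketched as a walk over successive observations of the pool's backlog taken within the preset window (the snapshot-based formulation is an assumption on our part):

```python
def rejection_close_index(pending_snapshots, preset_count=1):
    """Return the index of the first observation at which the backlog has
    fallen below preset_count (with preset_count=1, fully drained), i.e.
    the point where the rejection mechanism is closed; None if the pool
    never drains within the monitored window."""
    for i, pending in enumerate(pending_snapshots):
        if pending < preset_count:
            return i
    return None
```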
For example, suppose the message queue A corresponds to a consuming thread B and a thread pool C, and the maximum number of processing threads that can be set in thread pool C is 6, so the preset maximum value is 6. When the thread count of thread pool C reaches 6, data processing for message queue A is at a peak, and the rejection mechanism of thread pool C is triggered. Under the rejection mechanism, consuming thread B stops pulling pending data from message queue A. The amount of data to be processed in thread pool C is monitored; when no data to be processed remains in thread pool C, the backlog has been completely processed, the rejection mechanism of thread pool C is closed, and consuming thread B pulls a new batch of data from message queue A to thread pool C.
Optionally, after the rejection mechanism is closed, the amount of data to be processed stored in the message queue is calculated, the data processing speed of each processing thread in the thread pool is obtained, and, based on the data amount and the data processing speed, the number of processing threads in the thread pool and the target pull count of the consuming thread are determined.
For example, suppose that at a data processing peak the thread pool has 6 processing threads, which can together process an amount x of data in one pass, while the amount of data to be processed in the message queue is calculated to be y. If y is smaller than x and 3 processing threads suffice to handle the y items, then the number of processing threads in the thread pool is determined to be 3 and the target pull count of the consuming thread is y.
Optionally, after the rejection mechanism is closed, the amount of data to be processed stored in the message queue is calculated and the data processing speed of each processing thread in the thread pool is obtained. Based on the data amount and the data processing speed, a target thread number for the thread pool and the target pull count of the consuming thread are determined. When the target thread number is smaller than the preset maximum value, the redundant processing threads in the thread pool are destroyed, leaving only the target thread number in place.
For example, suppose that at a data processing peak the thread pool has 6 processing threads that can together process an amount x of data in one pass, and the amount of data to be processed in the message queue is calculated to be y. If y is smaller than x and 3 processing threads suffice, the target thread number is determined to be 3 and the target pull count is y: the thread pool retains 3 processing threads and destroys the other 3.
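The resizing arithmetic in the example above can be made concrete. Here `per_thread_capacity` stands for the one-pass capacity x divided by the 6 threads — an interpretation of ours rather than a formula the patent states:

```python
import math

def resize_pool(pending_volume, per_thread_capacity, preset_max):
    """Determine the target thread number, how many redundant threads to
    destroy, and the consuming thread's target pull count, given the
    backlog in the message queue and each thread's one-pass capacity."""
    needed = max(1, math.ceil(pending_volume / per_thread_capacity))
    target_threads = min(needed, preset_max)
    destroyed = preset_max - target_threads
    target_pull = min(pending_volume, target_threads * per_thread_capacity)
    return target_threads, destroyed, target_pull
```

With a backlog of 30 items, 10 items per thread per pass, and a preset maximum of 6 threads, this reproduces the example: keep 3 threads, destroy 3, pull all 30.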
Under the rejection mechanism of the thread pool, it is judged whether data to be processed remains in the thread pool; while such data remains, the consuming thread stops pulling data to be processed from the message queue to the thread pool. The change in the amount of data to be processed in the thread pool is monitored, and when that amount falls to the preset count, the rejection mechanism is closed. The target pull count is then determined based on the preset count, and the consuming thread pulls the corresponding amount of data to be processed from the message queue to the thread pool; further examples are omitted here.
The preset count can be defined as required. For example, the preset count may be set so that, during a data processing peak, the amount remaining in the thread pool is equal to or less than half of the amount of data to be processed that the consuming thread pulls into the thread pool at one time.
The data processing method includes: acquiring attribute information of data to be processed, and distributing the data to be processed to a matched message queue based on the attribute information, where each message queue is correspondingly provided with a consuming thread and a thread pool, and at least one processing thread is arranged in the thread pool; calculating the data volume of the data to be processed in the message queue, and determining the number of processing threads in the thread pool based on the data volume; and, based on the thread number, the consuming thread pulling a certain amount of data to be processed from the message queue to the thread pool, where the processing threads in the thread pool process the pulled data. In the present application, before the data is consumed and processed, it is routed in advance to the matched message queue, so that the data in one message queue can be regarded as data of the same type. Because data of the same type is processed by the consuming thread and thread pool of the same message queue at the same or a similar speed, one slow item cannot hold up the data processing of the whole message queue. Moreover, each message queue is correspondingly provided with a consuming thread and a thread pool in which the processing threads can be automatically increased or decreased according to the volume of data to be processed, so that data is processed in time at a processing peak, avoiding data congestion, while resources are not wasted in a processing trough.
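The first step of the method, routing data to a matched queue by attribute information, can be sketched as follows (the attribute key function is our placeholder for whatever attribute the embodiment matches on):

```python
from collections import defaultdict

def distribute(records, attribute_of):
    """Route each record to the message queue matching its attribute
    information, so that each queue holds data of a single type."""
    queues = defaultdict(list)
    for record in records:
        queues[attribute_of(record)].append(record)
    return queues
```

Each resulting queue can then be given its own consuming thread and thread pool, as the summary describes.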
Referring to fig. 2, fig. 2 is a schematic structural diagram of a data processing apparatus 200 according to an embodiment of the present application, where the data processing apparatus includes the following units:
the distribution unit 201 is configured to obtain attribute information of the data to be processed, and distribute the data to be processed to a matched message queue based on the attribute information, where the message queue is correspondingly provided with a consumption thread and a thread pool, and the thread pool is provided with at least one processing thread.
The calculating unit 202 is configured to calculate a data amount of data to be processed in the message queue, and determine, based on the data amount, a thread number of a processing thread in the thread pool.
The processing unit 203 is configured such that, based on the thread number, the consuming thread pulls a certain amount of data to be processed from the message queue to the thread pool, and the processing threads in the thread pool process the pulled data.
Optionally, the computing unit 202 may include the following sub-units:
the quantity judging subunit, configured to judge whether the data volume of the data to be processed in the message queue has reached a preset peak value; if so, to increase the number of threads in the thread pool according to a preset thread-increase rule; if not, to keep the number of threads in the thread pool unchanged.
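A minimal sketch of this peak-triggered scaling rule, where the increase step and the cap are assumptions of ours (the patent only names a "preset thread increasing rule"):

```python
def adjust_thread_count(current_threads, data_volume, peak_value, step=1, preset_max=6):
    """Increase the pool's thread count by a preset step when the queue's
    backlog reaches the preset peak value (without exceeding the preset
    maximum); otherwise keep the count unchanged."""
    if data_volume >= peak_value and current_threads < preset_max:
        return current_threads + step
    return current_threads
```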
Optionally, the data processing apparatus 200 in this embodiment of the present application further includes the following units:
the first judgment unit is used for judging whether the numerical value of the number of threads of the thread pool reaches a preset maximum value or not; if yes, triggering a rejection mechanism of the thread pool, and stopping the consumption thread from pulling the data to be processed from the message queue; if not, based on the number of threads, the consuming thread pulls a certain amount of data to be processed from the message queue to a thread pool.
The second judgment unit is used for judging whether the thread pool has data to be processed after triggering and starting a rejection mechanism of the thread pool; if so, stopping pulling the data to be processed from the message queue by the consuming thread; if not, determining the number of the data to be processed in the thread pool, and based on the number of the threads and the number of the data to be processed, pulling a certain number of the data to be processed from the message queue to the thread pool by the consumption thread.
Optionally, the data processing apparatus 200 according to the embodiment of the present application further includes other units and sub-units, which are not described herein again.
The data processing apparatus 200 of the embodiment of the application includes: a distribution unit 201, configured to acquire attribute information of data to be processed and distribute the data to be processed to a matched message queue based on the attribute information, where the message queue is correspondingly provided with a consuming thread and a thread pool, and the thread pool is provided with at least one processing thread; a calculating unit 202, configured to calculate the data volume of the data to be processed in the message queue and determine, based on the data volume, the number of processing threads in the thread pool; and a processing unit 203, configured such that, based on the thread number, the consuming thread pulls a certain amount of data to be processed from the message queue to the thread pool, and the processing threads in the thread pool process the pulled data. In the present application, before the data is consumed and processed, it is routed in advance to the matched message queue, so that the data in one message queue can be regarded as data of the same type. Because data of the same type is processed by the consuming thread and thread pool of the same message queue at the same or a similar speed, one slow item cannot hold up the data processing of the whole message queue. Moreover, each message queue is correspondingly provided with a consuming thread and a thread pool in which the processing threads can be automatically increased or decreased according to the volume of data to be processed, so that data is processed in time at a processing peak, avoiding data congestion, while resources are not wasted in a processing trough.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present disclosure, where the intelligent terminal 300 may be an intelligent terminal device such as a smart phone, a tablet Computer, a notebook Computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), and the like. The intelligent terminal 300 includes a processor 301 having one or more processing cores, a memory 302 having one or more computer-readable storage media, and a computer program stored on the memory 302 and operable on the processor 301. The processor 301 is electrically connected to the memory 302. Those skilled in the art will appreciate that the intelligent terminal architecture shown in the figures does not constitute a limitation of the intelligent terminal and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The processor 301 is a control center of the intelligent terminal 300, connects various parts of the entire intelligent terminal 300 using various interfaces and lines, and performs various functions of the intelligent terminal 300 and processes data by running or loading software programs and/or modules stored in the memory 302 and calling data stored in the memory 302, thereby performing overall monitoring of the intelligent terminal 300.
In this embodiment, the processor 301 in the intelligent terminal 300 loads instructions corresponding to the processes of one or more application programs into the memory 302, and runs the application programs stored in the memory 302, thereby implementing various functions according to the following steps:
acquiring attribute information of data to be processed, and distributing the data to be processed to a matched message queue based on the attribute information, wherein the message queue is correspondingly provided with a consumption thread and a thread pool, and the thread pool is at least provided with one processing thread;
calculating the data volume of the data to be processed in the message queue, and determining the thread number of the processing threads in the thread pool based on the data volume;
based on the number of threads, the consumption thread pulls a certain amount of data to be processed from the message queue to the thread pool, and the data to be processed pulled to the thread pool is processed by the processing threads in the thread pool.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, the intelligent terminal 300 further includes a touch display screen 303, an input unit 304, and a power source 305, wherein the processor 301 is electrically connected to the touch display screen 303, the input unit 304, and the power source 305. Those skilled in the art will appreciate that the intelligent terminal architecture shown in fig. 3 is not intended to be limiting of intelligent terminals and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The touch display screen 303 may be used for displaying a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface, and the touch display screen 303 may include a display panel and a touch panel. Among other things, the display panel may be used to display information input by or provided to a user as well as various graphical user interfaces of the computer device, which may be made up of graphics, text, icons, video, and any combination thereof. Alternatively, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may be used to collect touch operations of a user on or near the touch panel (for example, operations of the user on or near the touch panel using any suitable object or accessory such as a finger, a stylus pen, and the like), and generate corresponding operation instructions, and the operation instructions execute corresponding programs. Alternatively, the touch panel may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 301, and can receive and execute commands sent by the processor 301. The touch panel may overlay the display panel, and when the touch panel detects a touch operation thereon or nearby, the touch panel transmits the touch operation to the processor 301 to determine the type of the touch event, and then the processor 301 provides a corresponding visual output on the display panel according to the type of the touch event. 
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 303 to realize input and output functions. However, in some embodiments, the touch panel and the display panel can be implemented as two separate components to perform the input and output functions. That is, the touch display screen 303 may also be used as a part of the input unit 304 to implement an input function.
The input unit 304 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 305 is used to power the various components of the smart terminal 300. Optionally, the power supply 305 may be logically connected to the processor 301 through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system. The power supply 305 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown in fig. 3, the smart terminal 300 may further include a sensor, a radio frequency module, and the like, which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the intelligent terminal 300 provided in this embodiment obtains the attribute information of the data to be processed, and based on the attribute information, distributes the data to be processed to the matched message queue, where the message queue is correspondingly provided with a consumption thread and a thread pool, and the thread pool is provided with at least one processing thread; calculating the data volume of data to be processed in the message queue, and determining the thread number of processing threads in the thread pool based on the data volume; based on the number of threads, the consumption thread pulls a certain amount of data to be processed from the message queue to the thread pool, and the data to be processed pulled to the thread pool is processed by the processing threads in the thread pool.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a computer-readable storage medium, in which a plurality of computer programs are stored, and the computer programs can be loaded by a processor to execute the steps in any data processing method provided by the present application. For example, the computer program may perform the steps of:
acquiring attribute information of data to be processed, and distributing the data to be processed to a matched message queue based on the attribute information, wherein the message queue is correspondingly provided with a consumption thread and a thread pool, and the thread pool is at least provided with one processing thread;
calculating the data volume of data to be processed in the message queue, and determining the thread number of processing threads in the thread pool based on the data volume;
based on the number of threads, the consumption thread pulls a certain amount of data to be processed from the message queue to the thread pool, and the data to be processed pulled to the thread pool is processed by the processing threads in the thread pool.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
Since the computer program stored in the storage medium can execute the steps in any data processing method provided in the embodiments of the present application, beneficial effects that can be achieved by any data processing method provided in the embodiments of the present application can be achieved, and detailed descriptions are omitted here for the foregoing embodiments.
The data processing method, the data processing apparatus, the intelligent terminal, and the storage medium provided in the embodiments of the present application are described in detail above. Specific examples are applied herein to explain the principle and the implementation of the present application, and the description of the above embodiments is only intended to help understand the method and the core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementation and the application scope according to the idea of the present application. In summary, the content of this description should not be understood as a limitation of the present application.

Claims (10)

1. A method of data processing, the method comprising:
acquiring attribute information of data to be processed, and distributing the data to be processed to a matched message queue based on the attribute information, wherein the message queue is correspondingly provided with a consumption thread and a thread pool, and at least one processing thread is arranged in the thread pool;
calculating the data volume of the data to be processed in the message queue, and determining the thread number of the processing threads in the thread pool based on the data volume;
based on the thread number, the consumption thread pulls a certain amount of data to be processed from the message queue to the thread pool, and the data to be processed pulled to the thread pool is processed by the processing thread in the thread pool.
2. The data processing method according to claim 1, wherein the calculating a data amount of data to be processed in the message queue, and determining the number of threads of the processing threads in the thread pool based on the data amount comprises:
judging whether the numerical value of the data quantity of the data to be processed in the message queue reaches a preset high peak value or not;
if so, increasing the number of threads in the thread pool according to a preset thread increasing rule;
and if not, keeping the thread quantity of the thread pool.
3. The data processing method of claim 1, wherein before the consuming thread pulls an amount of data to be processed from the message queue into the thread pool based on the number of threads, the method further comprises:
judging whether the numerical value of the number of threads of the thread pool reaches a preset maximum value or not;
if yes, triggering and starting a rejection mechanism of the thread pool, and stopping the consumption thread from pulling the data to be processed from the message queue;
if not, based on the thread quantity, the consumption thread pulls a certain quantity of data to be processed from the message queue to the thread pool.
4. The data processing method of claim 3, wherein after the triggering initiates a rejection mechanism for the thread pool, the method further comprises:
judging whether the thread pool has data to be processed or not;
if so, the consuming thread stops pulling the data to be processed from the message queue;
if not, determining the number of the data to be processed in the thread pool, and based on the thread number and the number of the data to be processed, pulling a certain number of the data to be processed from the message queue to the thread pool by the consumption thread.
5. The data processing method of claim 3, wherein after the triggering initiates a rejection mechanism for the thread pool, the method further comprises:
and monitoring the to-be-processed quantity of the to-be-processed data in the thread pool within a preset time, and closing the rejection mechanism when the to-be-processed quantity is lower than the preset quantity.
6. The data processing method according to claim 1, wherein when there is a processing thread in an idle state in the thread pool, the thread pool automatically destroys the processing thread in the idle state.
7. The data processing method of claim 1, wherein the thread pool retains only one processing thread when there is no data to be processed in the message queue.
8. A data processing apparatus, characterized in that the apparatus comprises:
the device comprises a shunting unit, a message queue and a thread pool, wherein the shunting unit is used for acquiring attribute information of data to be processed and shunting the data to be processed to the matched message queue based on the attribute information, the message queue is correspondingly provided with a consumption thread and the thread pool, and at least one processing thread is arranged in the thread pool;
the computing unit is used for computing the data volume of the data to be processed in the message queue and determining the thread number of the processing threads in the thread pool based on the data volume;
and the processing unit is used for pulling a certain amount of data to be processed from the message queue to the thread pool by the consumption thread based on the thread number, and processing the data to be processed pulled to the thread pool by the processing thread in the thread pool.
9. An intelligent terminal, comprising a memory for storing instructions and data and a processor for performing the data processing method of any one of claims 1-7.
10. A storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the data processing method of any of claims 1-7.
CN202211363155.2A 2022-11-02 2022-11-02 Data processing method and device, intelligent terminal and storage medium Pending CN115576719A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211363155.2A CN115576719A (en) 2022-11-02 2022-11-02 Data processing method and device, intelligent terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211363155.2A CN115576719A (en) 2022-11-02 2022-11-02 Data processing method and device, intelligent terminal and storage medium

Publications (1)

Publication Number Publication Date
CN115576719A true CN115576719A (en) 2023-01-06

Family

ID=84589794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211363155.2A Pending CN115576719A (en) 2022-11-02 2022-11-02 Data processing method and device, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN115576719A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116643870A (en) * 2023-07-24 2023-08-25 北方健康医疗大数据科技有限公司 Method, system and device for processing long-time task distribution and readable storage medium
CN116821245A (en) * 2023-07-05 2023-09-29 贝壳找房(北京)科技有限公司 Data aggregation synchronization method and storage medium in distributed scene
CN117294347A (en) * 2023-11-24 2023-12-26 成都本原星通科技有限公司 Satellite signal receiving and processing method


Similar Documents

Publication Publication Date Title
CN115576719A (en) Data processing method and device, intelligent terminal and storage medium
CN102866903B (en) Background work and foreground work are separated to coupling
CN111831441A (en) Memory recovery method and device, storage medium and electronic equipment
CN108132735B (en) Terminal and application control method
CN111831440A (en) Memory recovery method and device, storage medium and electronic equipment
CN110968415A (en) Scheduling method and device of multi-core processor and terminal
CN111475299B (en) Memory allocation method and device, storage medium and electronic equipment
CN111831434A (en) Resource allocation method, device, storage medium and electronic equipment
CN111831414A (en) Thread migration method and device, storage medium and electronic equipment
CN111831437B (en) Device management method and device, storage medium and electronic device
CN108304267A (en) The multi-source data of highly reliable low-resource expense draws the method for connecing
CN111831435A (en) Memory allocation method and device, storage medium and electronic equipment
CN111459622A (en) Method and device for scheduling virtual CPU, computer equipment and storage medium
CN111831436B (en) IO request scheduling method and device, storage medium and electronic equipment
CN111831432B (en) IO request scheduling method and device, storage medium and electronic equipment
US11409573B2 (en) Function parallelism in a runtime container of a function-as-a-service (FAAS) system
CN111831439A (en) IO request processing method and device, storage medium and electronic equipment
CN111831443A (en) Processor state adjusting method and device, storage medium and electronic equipment
CN111831412B (en) Interrupt processing method and device, storage medium and electronic equipment
CN111831462A (en) IO request processing method and device, storage medium and electronic equipment
US20240314051A1 (en) Method and apparatus of detecting message delay, electronic device, and storage medium
CN115617518A (en) Thread management method and device, electronic equipment and storage medium
CN112463626B (en) Memory leakage positioning method and device, computer equipment and storage medium
CN116932194A (en) Thread execution method, thread execution device, electronic equipment and computer readable storage medium
CN114816031A (en) Power saving method of terminal device, terminal device and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination