CN106528299A - Data processing method and device - Google Patents
- Publication number
- CN106528299A (application CN201610846267.1A)
- Authority
- CN
- China
- Prior art keywords
- thread
- data
- distribution
- running status
- pool
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/524—Deadlock detection or avoidance
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multi Processors (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a data processing method and device, belonging to the technical field of computers. The method comprises the following steps: receiving data to be processed; creating distribution threads in a single-thread pool according to the data to be processed, wherein at most one distribution thread in the single-thread pool is in the running state; creating m1 processing threads in a maximum-running thread pool through the distribution thread in the running state; and processing the data in the maximum-running thread pool through the processing threads in the running state, wherein at most m2 processing threads in the maximum-running thread pool are in the running state. Because at most one distribution thread in the single-thread pool is in the running state at any time, no two processing threads are ever assigned the same data, which avoids multiple processing threads needing to access the same resource and solves the problem of thread deadlock. Computing resources are used rationally, and the multi-threading mechanism processes data quickly and stably, improving the computing performance of the system.
Description
Technical field
The present embodiments relate to the field of computer technology, and in particular to a data processing method and device.
Background technology
A thread is the smallest unit of execution in a program, a single sequential flow of control. Running two or more threads simultaneously in a single program to complete different data-processing tasks is referred to as multi-threaded processing.
At present, in multi-threaded processing, when a processor receives a large amount of pending data, it splits the data into groups of a fixed size, obtaining at least one group of data; it then creates multiple threads for the pending data and allocates the threads so that they correspond to the groups of data; finally, the pending data are processed by the threads in the running state.
However, when the number of threads created in multi-threaded processing is too large, it is easy for multiple threads to be assigned to process the same group of data. Multiple threads then all need to access the same resource, causing thread deadlock: threads wait on one another for contended resources and cannot continue to run. This wastes computing resources and brings a considerable safety hazard to the system.
Content of the invention
In order to solve the problem that creating too many threads in multi-threaded processing causes thread deadlock, embodiments of the present invention provide a data processing method and device. The technical scheme is as follows:
In a first aspect, a data processing method is provided, the method comprising:
receiving pending data;
creating distribution threads in a single-thread pool according to the pending data, wherein at most one distribution thread in the single-thread pool is in the running state, and each distribution thread corresponds to a first quantity n1 of data;
creating m1 processing threads in a maximum-running thread pool through the distribution thread in the running state, wherein the n1 data corresponding to the distribution thread are evenly distributed to the m1 created processing threads for processing;
processing the data through the processing threads in the running state in the maximum-running thread pool, wherein at most m2 processing threads in the maximum-running thread pool are in the running state.
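The single-thread-pool/bounded-pool arrangement of the first aspect maps naturally onto standard executor APIs. The following Python sketch is illustrative only, not the patent's implementation: the sizes N1, M1, M2, the summing workload, and all names are assumptions chosen for demonstration.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sizes (assumed, not prescribed by the patent): groups of
# N1 items per distribution task, M1 processing tasks per distribution,
# at most M2 processing tasks running at once.
N1, M1, M2 = 2000, 4, 100

dispatch_pool = ThreadPoolExecutor(max_workers=1)   # the "single-thread pool"
process_pool = ThreadPoolExecutor(max_workers=M2)   # the "maximum-running thread pool"

def process_chunk(chunk):
    # Placeholder for real per-item processing.
    return sum(chunk)

def distribute(group):
    # Runs inside the single dispatch worker, so at most one distribution
    # task is ever in the running state; the M1 processing tasks it submits
    # never share data with tasks submitted by another distribution task.
    n2 = len(group) // M1
    futures = [process_pool.submit(process_chunk, group[i * n2:(i + 1) * n2])
               for i in range(M1)]
    return sum(f.result() for f in futures)

data = list(range(20000))
groups = [data[i:i + N1] for i in range(0, len(data), N1)]
results = [dispatch_pool.submit(distribute, g) for g in groups]
total = sum(f.result() for f in results)
```

Because each group of N1 items belongs to exactly one distribution task, no two processing tasks ever touch the same slice of the input, which is the deadlock-avoidance property the patent claims.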
In a second aspect, a data processing device is provided, the device comprising:
a receiving module, configured to receive pending data;
a first creation module, configured to create distribution threads in a single-thread pool according to the pending data, wherein at most one distribution thread in the single-thread pool is in the running state, and each distribution thread corresponds to a first quantity n1 of data;
a second creation module, configured to create m1 processing threads in a maximum-running thread pool through the distribution thread in the running state, wherein the n1 data corresponding to the distribution thread are evenly distributed to the m1 created processing threads for processing;
a processing module, configured to process the data through the processing threads in the running state in the maximum-running thread pool, wherein at most m2 processing threads in the maximum-running thread pool are in the running state.
The technical scheme provided by the embodiments of the present invention has at least the following beneficial effects:
Distribution threads are created in the single-thread pool according to the pending data, each distribution thread corresponding to a first quantity n1 of data; the distribution thread in the running state creates m1 processing threads in the maximum-running thread pool, and its n1 data are evenly distributed to the m1 created processing threads for processing. Because at most one distribution thread in the single-thread pool is in the running state, the m1 processing threads created by distribution thread A process the data of group a, the m1 processing threads created by distribution thread B process the data of group b, and the data in group a do not overlap with the data in group b. This ensures that the processing threads created by different distribution threads never handle the same data, avoids the situation where multiple processing threads need to access the same resource, and solves the problem of thread deadlock. Computing resources are used rationally, and the multi-threading mechanism processes data quickly and stably, improving the computing performance of the system.
Description of the drawings
To illustrate the technical schemes in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the data processing method provided by one embodiment of the present invention;
Fig. 2 is a flow chart of the data processing method provided by another embodiment of the present invention;
Fig. 3 is a schematic diagram of the principle of the data processing method provided by another embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the data processing device provided by another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the data processing device provided by another embodiment of the present invention;
Fig. 6 is a structural block diagram of the server provided by one embodiment of the present invention.
Specific embodiments
To make the objects, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, a flow chart of the data processing method provided by one embodiment of the present invention is shown. The data processing method can be applied in a server with multi-threaded processing capability; the server includes a processor and a memory. The data processing method includes:
Step 102: receive pending data.
Optionally, the processor receives the pending data. Alternatively, the data are streaming data, such as meteorological observation data obtained by a weather station at different times and in different locations. The data are divided into multiple items according to time and location.
Step 104: create distribution threads in the single-thread pool according to the pending data, wherein at most one distribution thread in the single-thread pool is in the running state, and each distribution thread corresponds to a first quantity n1 of data.
Optionally, the first quantity n1 is a value preset according to the computing capability of the processor. The pending data are split into groups of n1 items each, with at least two items per group; n1 is a natural number greater than or equal to 2. The amount of pending data may or may not be an integral multiple of n1. When it is not, dividing the amount of pending data by n1 leaves a remainder x, and a separate distribution thread is created for the last x items.
Optionally, the processor creates distribution threads in the single-thread pool according to the pending data; after creation, a distribution thread is initially in the queued state, and at most one distribution thread in the single-thread pool is in the running state. Specifically, the processor checks in real time whether any distribution thread in the single-thread pool is in the running state; if not, the earliest-created queued distribution thread in the single-thread pool is switched to the running state; if so, no switching step is performed.
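The "at most one running, earliest-created queued thread runs next" behaviour of step 104 is exactly what a single-worker executor provides: submitted tasks wait in a FIFO queue and run one at a time in creation order. A minimal sketch, with task bodies assumed for demonstration:

```python
from concurrent.futures import ThreadPoolExecutor

order = []

def distribution_task(group_id):
    # Stand-in for a real distribution thread's work: record the order
    # in which tasks enter the running state.
    order.append(group_id)

# A single-worker pool guarantees at most one distribution task is in
# the running state, and the earliest-submitted queued task runs next.
single_pool = ThreadPoolExecutor(max_workers=1)
for i in range(10):
    single_pool.submit(distribution_task, i)
single_pool.shutdown(wait=True)
```

With one worker, no lock is needed around `order`: tasks are strictly serialized, which is the property the patent relies on to keep data groups disjoint.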
Step 106: create m1 processing threads in the maximum-running thread pool through the distribution thread in the running state; the n1 data corresponding to the distribution thread are evenly distributed to the m1 created processing threads for processing.
Optionally, each processing thread processes a second quantity n2 of data, where n2 equals the quotient of n1 divided by m1, i.e. n2 = n1/m1, with n1 ≥ 2.
Optionally, within the same time period the processor creates m1 processing threads in the maximum-running thread pool through the unique distribution thread in the running state. Specifically, each time the processor switches a distribution thread to the running state, it again performs the step of creating m1 processing threads in the maximum-running thread pool through that distribution thread. Optionally, m1 is a preset value.
Optionally, each processing thread processes a second quantity n2 of data, where the second quantity n2 is calculated from the first quantity n1 and the m1 created processing threads; illustratively, n2 = n1/m1.
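The even distribution of step 106 can be sketched as a small helper. The function name is assumed for illustration, and, as in the patent's n2 = n1/m1 formula, n1 is assumed to be an exact multiple of m1:

```python
def split_evenly(group, m1):
    # Step 106: each of the m1 processing threads receives
    # n2 = n1 / m1 items of the distribution thread's group.
    n1 = len(group)
    n2 = n1 // m1
    return [group[i * n2:(i + 1) * n2] for i in range(m1)]

# Demonstration with the patent's illustrative numbers: n1 = 2000, m1 = 4.
chunks = split_evenly(list(range(2000)), 4)
```

Each chunk goes to one processing thread, so within one distribution the chunks are pairwise disjoint.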
Step 108: process the data through the processing threads in the running state in the maximum-running thread pool; at most m2 processing threads in the maximum-running thread pool are in the running state.
Optionally, the processor processes the data through the processing threads in the running state in the maximum-running thread pool.
In summary, distribution threads are created in the single-thread pool according to the pending data, each distribution thread corresponding to a first quantity n1 of data; the distribution thread in the running state creates m1 processing threads in the maximum-running thread pool, and its n1 data are evenly distributed to the m1 created processing threads for processing. Because at most one distribution thread in the single-thread pool is in the running state, the m1 processing threads created by distribution thread A process the data of group a, the m1 processing threads created by distribution thread B process the data of group b, and the data in group a do not overlap with the data in group b. This ensures that the processing threads created by different distribution threads never handle the same data, avoids the situation where multiple processing threads need to access the same resource, and solves the problem of thread deadlock. Computing resources are used rationally, and the multi-threading mechanism processes data quickly and stably, improving the computing performance of the system.
Referring to Fig. 2, a flow chart of the data processing method provided by another embodiment of the present invention is shown. The data processing method can be applied in a server with multi-threaded processing capability; the server includes a processor and a memory. The data processing method includes:
Step 201: receive pending data.
Optionally, the processor receives the pending data; illustratively, the pending data are 20000 items.
Step 202: split the pending data into groups of the first quantity n1 items each, obtaining at least one group of data.
Optionally, the first quantity n1 is a value preset according to the computing capability of the processor.
Illustratively, the pending data are 20000 items and the first quantity n1 is set to 2000; the processor then splits the 20000 items into groups of 2000, obtaining 10 groups of data.
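Step 202 can be sketched as a one-line grouping function. The name is assumed for illustration; per step 104, when the total is not a multiple of n1, the leftover x = total mod n1 items form one extra, smaller group that gets its own distribution thread:

```python
def split_pending(data, n1):
    # Step 202: groups of n1 items each; any remainder becomes a final,
    # smaller group with its own distribution thread (step 104).
    return [data[i:i + n1] for i in range(0, len(data), n1)]

# The patent's illustrative case: 20000 items, n1 = 2000 -> 10 groups.
groups = split_pending(list(range(20000)), 2000)
```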
Step 203: create a distribution thread corresponding to each group of data in the single-thread pool.
Wherein at most one distribution thread in the single-thread pool is in the running state, and each distribution thread corresponds to a first quantity n1 of data.
Illustratively, the processor creates a distribution thread corresponding to each group of data in the single-thread pool, creating 10 distribution threads in total, each corresponding to 2000 items.
Step 204: the processor creates m1 processing threads in the maximum-running thread pool through the distribution thread in the running state; the n1 data corresponding to the distribution thread are evenly distributed to the m1 created processing threads for processing.
Optionally, each processing thread processes a second quantity n2 of data, where n2 equals the quotient of n1 divided by m1, i.e. n2 = n1/m1, with n1 ≥ 2.
Optionally, the number m1 of processing threads is a value preset according to the computing capability of the processor. Illustratively, m1 is 4, so the second quantity n2 = 2000/4 = 500, and each processing thread processes 500 items.
Optionally, 4 processing threads are created in the maximum-running thread pool through the distribution thread in the running state.
Step 205: detect the relation between the number m3 of processing threads in the running state in the maximum-running thread pool and the maximum running threshold m2.
Optionally, the processor detects the relation between the number m3 of processing threads in the running state in the maximum-running thread pool and the maximum running threshold m2.
Step 206: if m3 = m2, set the m1 processing threads to the queued state.
Optionally, at most m2 processing threads in the maximum-running thread pool are in the running state; m2 is a value preset according to the computing capability of the processor. Illustratively, m2 is 100.
Optionally, if m3 = 100, the processor sets the 4 processing threads to the queued state.
Step 207: if m2 - m1 < m3 < m2, set the m2 - m3 earliest-created of the m1 processing threads to the running state, and set the remaining m1 - (m2 - m3) processing threads to the queued state.
Optionally, if m3 = 98, satisfying the condition 96 < m3 < 100, the processor sets the 2 earliest-created of the 4 processing threads to the running state and sets the remaining 2 processing threads to the queued state.
Step 208: if m3 ≤ m2 - m1, set the m1 processing threads to the running state.
Optionally, if m3 = 90, satisfying the condition m3 ≤ 96, the processor sets the 4 processing threads to the running state.
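The three cases of steps 206-208 reduce to a small admission function. The function name and tuple return are illustrative; the case boundaries come directly from the steps above:

```python
def admit_new_threads(m1, m2, m3):
    """How many of m1 newly created processing threads may run, given m3
    already-running threads and the running cap m2 (steps 206-208)."""
    if m3 >= m2:                  # step 206: pool is full, queue all m1
        running = 0
    elif m3 > m2 - m1:            # step 207: room for only m2 - m3 of them
        running = m2 - m3
    else:                         # step 208: room for all m1
        running = m1
    return running, m1 - running  # (set to running, set to queued)
```

With the patent's numbers m1 = 4 and m2 = 100: m3 = 100 queues all 4, m3 = 98 runs the 2 earliest-created and queues 2, and m3 = 90 runs all 4.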
Step 209: detect whether the number m4 of processing threads in the queued state in the maximum-running thread pool reaches a predetermined threshold m5.
Optionally, the processor detects whether the number m4 of processing threads in the queued state in the maximum-running thread pool reaches the predetermined threshold m5; m5 is a value preset according to the memory size of the processor. Illustratively, m5 = 200.
Step 210: if the number m4 of processing threads in the queued state reaches the predetermined threshold m5, suspend the distribution thread in the running state in the single-thread pool.
Optionally, if the number m4 of queued processing threads has not reached 200, the processor keeps running the distribution thread in the running state in the single-thread pool; if m4 reaches 200, the processor suspends the distribution thread in the running state in the single-thread pool.
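Steps 209-210 describe backpressure: the distributor pauses while the queued backlog is at the threshold m5 and resumes once it drains. The class below is an illustrative sketch of that rule using a condition variable; the class name, method names, and batch-counting protocol are all assumptions, since the patent only fixes the pause/resume behaviour:

```python
import threading

class ThrottledDistributor:
    """Pause distribution while queued processing threads >= m5 (steps 209-210)."""

    def __init__(self, m5):
        self.m5 = m5
        self.queued = 0
        self.cond = threading.Condition()

    def before_submit(self, batch):
        # Called by the distribution thread before queueing `batch` new
        # processing tasks: blocks (step 210) while the backlog is at m5.
        with self.cond:
            while self.queued >= self.m5:
                self.cond.wait()
            self.queued += batch

    def after_run(self, batch):
        # Called as processing tasks leave the queued state; wakes the
        # distributor if the backlog has dropped below m5.
        with self.cond:
            self.queued -= batch
            self.cond.notify_all()

# Single-threaded demonstration with the patent's threshold m5 = 200.
d = ThrottledDistributor(200)
d.before_submit(4)   # backlog 4, well under 200: does not block
d.after_run(2)       # two tasks started running
```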
Step 211: the processor processes the data through the processing threads in the running state in the maximum-running thread pool; at most m2 processing threads in the maximum-running thread pool are in the running state.
In summary, distribution threads are created in the single-thread pool according to the pending data, each distribution thread corresponding to a first quantity n1 of data; the distribution thread in the running state creates m1 processing threads in the maximum-running thread pool, and its n1 data are evenly distributed to the m1 created processing threads for processing. Because at most one distribution thread in the single-thread pool is in the running state, the m1 processing threads created by distribution thread A process the data of group a, the m1 processing threads created by distribution thread B process the data of group b, and the data in group a do not overlap with the data in group b. This ensures that the processing threads created by different distribution threads never handle the same data, avoids the situation where multiple processing threads need to access the same resource, and solves the problem of thread deadlock. Computing resources are used rationally, and the multi-threading mechanism processes data quickly and stably, improving the computing performance of the system.
Further, by detecting the number m3 of processing threads in the running state in the maximum-running thread pool, and setting the m1 processing threads to the queued state when m3 = m2, at most m2 processing threads in the maximum-running thread pool are in the running state. This avoids the excessive processor pressure caused by too many threads, and improves the operating efficiency of the processor when the number of threads in multi-threaded processing is large.
The quantities n1, n2, m1 and m2 involved in this application are natural numbers. Optionally, taking 20000 items of pending data as an example, refer to Fig. 3, a schematic diagram of the principle of the data processing method provided by another embodiment of the present invention. The first quantity n1 is set to 2000, so the processor splits the 20000 items into groups of 2000, obtaining 10 groups of data. According to the 10 groups, the processor creates 10 corresponding distribution threads in the single-thread pool: distribution thread A, distribution thread B, distribution thread C, distribution thread D, distribution thread E, distribution thread F, distribution thread G, distribution thread H, distribution thread I and distribution thread J. The processor sets the number m1 of processing threads to 4, so the second quantity n2 = 2000/4 = 500; the distribution thread A in the running state then creates 4 processing threads in the maximum-running thread pool, namely processing threads A1, A2, A3 and A4, each of which processes 500 items. The processor detects that the number m3 of processing threads in the running state in the maximum-running thread pool is 98, so it sets the 2 earliest-created of the 4 processing threads, namely A1 and A2, to the running state, and sets the remaining 2, namely A3 and A4, to the queued state. The processor detects that the number of processing threads in the queued state in the maximum-running thread pool is 2, which does not reach the predetermined threshold 200, so it keeps running the distribution thread in the running state in the single-thread pool. The processor processes the data through the processing threads in the running state in the maximum-running thread pool.
Referring to Fig. 4, a schematic structural diagram of the data processing device provided by one embodiment of the present invention is shown. The device includes:
a receiving module 401, configured to receive pending data;
a first creation module 402, configured to create distribution threads in the single-thread pool according to the pending data, wherein at most one distribution thread in the single-thread pool is in the running state, and each distribution thread corresponds to a first quantity n1 of data;
a second creation module 403, configured to create m1 processing threads in the maximum-running thread pool through the distribution thread in the running state, the n1 data corresponding to the distribution thread being evenly distributed to the m1 created processing threads for processing;
a processing module 404, configured to process the data through the processing threads in the running state in the maximum-running thread pool, wherein at most m2 processing threads in the maximum-running thread pool are in the running state.
In summary, distribution threads are created in the single-thread pool according to the pending data, each distribution thread corresponding to a first quantity n1 of data; the distribution thread in the running state creates m1 processing threads in the maximum-running thread pool, and its n1 data are evenly distributed to the m1 created processing threads for processing. Because at most one distribution thread in the single-thread pool is in the running state, the m1 processing threads created by distribution thread A process the data of group a, the m1 processing threads created by distribution thread B process the data of group b, and the data in group a do not overlap with the data in group b. This ensures that the processing threads created by different distribution threads never handle the same data, avoids the situation where multiple processing threads need to access the same resource, and solves the problem of thread deadlock. Computing resources are used rationally, and the multi-threading mechanism processes data quickly and stably, improving the computing performance of the system.
Referring to Fig. 5, a schematic structural diagram of the data processing device provided by another embodiment of the present invention is shown. In this device:
the first creation module 402 includes a splitting unit 402a and a creating unit 402b;
the splitting unit 402a is configured to split the pending data into groups of the first quantity n1 items each, obtaining at least one group of data;
the creating unit 402b is configured to create a distribution thread corresponding to each group of data in the single-thread pool.
The device also includes:
a switching module 405, configured to switch the earliest-created queued distribution thread in the single-thread pool to the running state.
The device also includes:
a first detection module 406, a first setting module 407, a second setting module 408 and a third setting module 409;
the first detection module 406 is configured to detect the relation between the number m3 of processing threads in the running state in the maximum-running thread pool and m2;
the first setting module 407 is configured to set the m1 processing threads to the queued state if m3 = m2;
the second setting module 408 is configured to, if m2 - m1 < m3 < m2, set the m2 - m3 earliest-created of the m1 processing threads to the running state and set the remaining m1 - (m2 - m3) processing threads to the queued state;
the third setting module 409 is configured to set the m1 processing threads to the running state if m3 ≤ m2 - m1.
The device also includes:
a second detection module 410 and a suspension module 411;
the second detection module 410 is configured to detect whether the number m4 of processing threads in the queued state in the maximum-running thread pool reaches the predetermined threshold m5;
the suspension module 411 is configured to suspend the distribution thread in the running state in the single-thread pool if the number m4 of queued processing threads reaches the predetermined threshold m5.
In summary, distribution threads are created in the single-thread pool according to the pending data, each distribution thread corresponding to a first quantity n1 of data; the distribution thread in the running state creates m1 processing threads in the maximum-running thread pool, and its n1 data are evenly distributed to the m1 created processing threads for processing. Because at most one distribution thread in the single-thread pool is in the running state, the m1 processing threads created by distribution thread A process the data of group a, the m1 processing threads created by distribution thread B process the data of group b, and the data in group a do not overlap with the data in group b. This ensures that the processing threads created by different distribution threads never handle the same data, avoids the situation where multiple processing threads need to access the same resource, and solves the problem of thread deadlock. Computing resources are used rationally, and the multi-threading mechanism processes data quickly and stably, improving the computing performance of the system.
Further, by detecting the number m3 of processing threads in the running state in the maximum-running thread pool, and setting the m1 processing threads to the queued state when m3 = m2, at most m2 processing threads in the maximum-running thread pool are in the running state. This avoids the excessive processor pressure caused by too many threads, and improves the operating efficiency of the processor when the number of threads in multi-threaded processing is large.
Referring to Fig. 6, a structural block diagram of the server provided by one embodiment of the present invention is shown. The server 600 includes a central processing unit (CPU) 601, a system memory 604 including a random access memory (RAM) 602 and a read-only memory (ROM) 603, and a system bus 605 connecting the system memory 604 and the CPU 601. The server 600 also includes a basic input/output system (I/O system) 606 that helps transmit information between the devices in the computer, and a mass storage device 607 for storing an operating system 613, application programs 614 and other program modules 615.
The basic input/output system 606 includes a display 608 for displaying information and an input device 609, such as a mouse or keyboard, for user input. The display 608 and the input device 609 are both connected to the CPU 601 through an input/output controller 610 connected to the system bus 605. The basic input/output system 606 may also include the input/output controller 610 for receiving and processing input from a keyboard, mouse, electronic stylus or other devices. Similarly, the input/output controller 610 also provides output to a display screen, a printer or another type of output device.
The mass storage device 607 is connected to the CPU 601 through a mass storage controller (not shown) connected to the system bus 605. The mass storage device 607 and its associated computer-readable media provide non-volatile storage for the server 600. That is, the mass storage device 607 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM drive.
Without loss of generality, the computer-readable media may include computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state storage technologies; CD-ROM, DVD or other optical storage; and tape cassettes, magnetic tape, disk storage or other magnetic storage devices. Of course, those skilled in the art will know that computer storage media are not limited to the above. The system memory 604 and the mass storage device 607 may collectively be referred to as memory.
According to various embodiments of the present invention, the server 600 may also run on a remote computer connected through a network such as the Internet. That is, the server 600 may be connected to a network 612 through a network interface unit 611 connected to the system bus 605; in other words, the network interface unit 611 may also be used to connect to other types of networks or remote computer systems (not shown).
The memory also includes one or more programs stored in the memory; the one or more programs contain instructions for carrying out the data processing method provided by the embodiments of the present invention.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the merits of the embodiments.
Those of ordinary skill in the art will understand that all or part of the steps of the data processing method of the above embodiments may be completed by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc or the like.
The foregoing are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. A data processing method, characterized in that the method comprises:
receiving data to be processed;
creating a distribution thread in a single thread pool according to the data to be processed, wherein there is at most one distribution thread in a running state in the single thread pool, and each distribution thread corresponds to a first quantity n1 of data;
creating m1 processing threads in a maximum running thread pool by means of the distribution thread in the running state, wherein the first quantity n1 of data corresponding to the distribution thread is evenly distributed to the m1 created processing threads for processing;
processing the data by the processing threads in the running state in the maximum running thread pool, wherein there are at most m2 processing threads in the running state in the maximum running thread pool.
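The arrangement of claim 1 maps naturally onto two thread pools: a single-thread pool that serializes distribution, and a bounded pool that caps concurrently running processing threads at m2. A minimal sketch of this idea using Python's `concurrent.futures` (the constants N1, M1, M2 are taken from the claim; the summing workload and all function names are illustrative, not from the patent):

```python
from concurrent.futures import ThreadPoolExecutor, wait

N1 = 6  # first quantity n1: data items handled by one distribution thread
M1 = 3  # m1: processing threads created per distribution thread
M2 = 4  # m2: cap on processing threads in the running state

# Single thread pool: at most one distribution task runs at any moment,
# so distribution tasks never compete for the same resource.
single_pool = ThreadPoolExecutor(max_workers=1)
# "Maximum running thread pool": at most M2 processing tasks run concurrently.
max_pool = ThreadPoolExecutor(max_workers=M2)

def process(chunk):
    # Stand-in for real per-chunk work.
    return sum(chunk)

def distribute(group):
    # Evenly distribute this group's items across M1 processing tasks.
    chunks = [group[i::M1] for i in range(M1)]
    futures = [max_pool.submit(process, c) for c in chunks]
    wait(futures)
    return sum(f.result() for f in futures)

data = list(range(12))  # the received pending data
groups = [data[i:i + N1] for i in range(0, len(data), N1)]
totals = [single_pool.submit(distribute, g).result() for g in groups]
single_pool.shutdown()
max_pool.shutdown()
print(totals)  # [15, 51]: the per-group sums
```

Because `max_workers=1` serializes all distribution work, two distribution threads can never contend for the same resource at once, which is the deadlock-avoidance point the abstract makes.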
2. The method according to claim 1, characterized in that the creating a distribution thread in a single thread pool according to the data to be processed comprises:
splitting the data to be processed into groups of the first quantity n1 of data, obtaining at least one group of data;
creating, in the single thread pool, a distribution thread corresponding to each group of data.
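The splitting step of claim 2 is plain fixed-size chunking; a short sketch (the function name is illustrative):

```python
def split_into_groups(data, n1):
    """Split the pending data into groups of n1 items each;
    the last group may hold fewer than n1 items."""
    return [data[i:i + n1] for i in range(0, len(data), n1)]

groups = split_into_groups(list(range(10)), 4)
print(groups)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

One distribution thread would then be created in the single thread pool per resulting group.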
3. The method according to claim 1, characterized in that after the creating m1 processing threads in the maximum running thread pool by means of the distribution thread in the running state, the method further comprises:
switching the distribution thread that is in a queueing state in the single thread pool and has the earliest creation time to the running state.
4. The method according to any one of claims 1 to 3, characterized in that after the creating m1 processing threads in the maximum running thread pool by means of the distribution thread in the running state, the method further comprises:
detecting the relation between the number m3 of processing threads in the running state in the maximum running thread pool and m2;
if m3 = m2, setting the m1 processing threads to a queueing state;
if m2 - m1 < m3 < m2, setting the m2 - m3 processing threads with the earlier creation times among the m1 processing threads to the running state, and setting the remaining m1 - (m2 - m3) processing threads among the m1 processing threads to the queueing state;
if m3 ≤ m2 - m1, setting the m1 processing threads to the running state.
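The three cases of claim 4 reduce to "run as many of the m1 new threads as the m2 cap still allows, and queue the rest". A sketch of that case analysis (function name is illustrative):

```python
def assign_states(m1, m2, m3):
    """Given m1 newly created threads, a cap of m2 running threads,
    and m3 threads currently running, return (to_run, to_queue).
    Mirrors the three cases of claim 4."""
    if m3 == m2:
        # Pool already full: all m1 new threads queue.
        return 0, m1
    if m2 - m1 < m3 < m2:
        # Partial room: the m2 - m3 earliest-created threads run,
        # the remaining m1 - (m2 - m3) queue.
        return m2 - m3, m1 - (m2 - m3)
    # m3 <= m2 - m1: room for all m1 new threads.
    return m1, 0

print(assign_states(3, 4, 4))  # (0, 3): pool full
print(assign_states(3, 4, 2))  # (2, 1): partial room
print(assign_states(3, 4, 1))  # (3, 0): all run
```

In every case `to_run + to_queue == m1` and `m3 + to_run <= m2`, so the cap of claim 1 is never exceeded.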
5. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
detecting whether the number m4 of processing threads in the queueing state in the maximum running thread pool reaches a predetermined threshold m5;
if the number m4 of processing threads in the queueing state reaches the predetermined threshold m5, pausing the distribution thread in the running state in the single thread pool.
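Claim 5 describes backpressure: once the queued processing threads reach the threshold m5, the running distribution thread pauses until the pool drains. One way to sketch that with a condition variable (the class and method names are illustrative, not from the patent):

```python
import threading

class Backpressure:
    """Pause distribution while the number of queued processing
    tasks is at or above the threshold m5 (claim 5)."""
    def __init__(self, m5):
        self.m5 = m5
        self.queued = 0
        self.cond = threading.Condition()

    def task_queued(self):
        with self.cond:
            self.queued += 1

    def task_started(self):
        with self.cond:
            self.queued -= 1
            self.cond.notify_all()

    def wait_if_saturated(self):
        # Called by the distribution thread before creating more work.
        with self.cond:
            while self.queued >= self.m5:
                self.cond.wait()

bp = Backpressure(m5=2)
bp.task_queued()
bp.task_queued()
# The distribution thread would now block in wait_if_saturated();
# a worker draining one queued task releases it shortly after.
threading.Timer(0.05, bp.task_started).start()
bp.wait_if_saturated()
print(bp.queued)  # 1
```

The distribution thread resumes as soon as a processing thread leaves the queueing state, so the pool never accumulates unbounded queued work.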
6. A data processing apparatus, characterized in that the apparatus comprises:
a receiving module, configured to receive data to be processed;
a first creation module, configured to create a distribution thread in a single thread pool according to the data to be processed, wherein there is at most one distribution thread in a running state in the single thread pool, and each distribution thread corresponds to a first quantity n1 of data;
a second creation module, configured to create m1 processing threads in a maximum running thread pool by means of the distribution thread in the running state, wherein the first quantity n1 of data corresponding to the distribution thread is evenly distributed to the m1 created processing threads for processing;
a processing module, configured to process the data by the processing threads in the running state in the maximum running thread pool, wherein there are at most m2 processing threads in the running state in the maximum running thread pool.
7. The apparatus according to claim 6, characterized in that the first creation module comprises:
a splitting unit and a creation unit;
the splitting unit, configured to split the data to be processed into groups of the first quantity n1 of data, obtaining at least one group of data;
the creation unit, configured to create, in the single thread pool, a distribution thread corresponding to each group of data.
8. The apparatus according to claim 6, characterized in that the apparatus further comprises:
a switching module, configured to switch the distribution thread that is in a queueing state in the single thread pool and has the earliest creation time to the running state.
9. The apparatus according to any one of claims 6 to 8, characterized in that the apparatus further comprises:
a first detection module, a first setting module, a second setting module and a third setting module;
the first detection module, configured to detect the relation between the number m3 of processing threads in the running state in the maximum running thread pool and m2;
the first setting module, configured to set the m1 processing threads to a queueing state if m3 = m2;
the second setting module, configured to, if m2 - m1 < m3 < m2, set the m2 - m3 processing threads with the earlier creation times among the m1 processing threads to the running state, and set the remaining m1 - (m2 - m3) processing threads among the m1 processing threads to the queueing state;
the third setting module, configured to set the m1 processing threads to the running state if m3 ≤ m2 - m1.
10. The apparatus according to any one of claims 6 to 8, characterized in that the apparatus further comprises:
a second detection module and a pausing module;
the second detection module, configured to detect whether the number m4 of processing threads in the queueing state in the maximum running thread pool reaches a predetermined threshold m5;
the pausing module, configured to pause the distribution thread in the running state in the single thread pool if the number m4 of processing threads in the queueing state reaches the predetermined threshold m5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610846267.1A CN106528299B (en) | 2016-09-23 | 2016-09-23 | Data processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106528299A true CN106528299A (en) | 2017-03-22 |
CN106528299B CN106528299B (en) | 2019-12-03 |
Family
ID=58344241
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610846267.1A Active CN106528299B (en) | 2016-09-23 | 2016-09-23 | Data processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106528299B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108279977A (en) * | 2017-12-29 | 2018-07-13 | 深圳市德兰明海科技有限公司 | A kind of data processing method, device and controller based on RTOS |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101826003A (en) * | 2010-04-16 | 2010-09-08 | 中兴通讯股份有限公司 | Multithread processing method and device |
CN102821164A (en) * | 2012-08-31 | 2012-12-12 | 河海大学 | Efficient parallel-distribution type data processing system |
CN103955491A (en) * | 2014-04-15 | 2014-07-30 | 南威软件股份有限公司 | Method for synchronizing timing data increment |
CN104142865A (en) * | 2014-07-18 | 2014-11-12 | 国家电网公司 | Data collecting and processing method based on thread synchronization |
CN104239149A (en) * | 2012-08-31 | 2014-12-24 | 南京工业职业技术学院 | Server multithread parallel data processing method and load balancing method |
CN104700255A (en) * | 2013-12-06 | 2015-06-10 | 腾讯科技(北京)有限公司 | Multi-process processing method, device and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105450522B (en) | Method, equipment and medium for the route service chain flow point group between virtual machine | |
US9946563B2 (en) | Batch scheduler management of virtual machines | |
US9495206B2 (en) | Scheduling and execution of tasks based on resource availability | |
CN107025139A (en) | A kind of high-performance calculation Scheduling Framework based on cloud computing | |
CN104901989B (en) | A kind of Site Service offer system and method | |
CN104714785A (en) | Task scheduling device, task scheduling method and data parallel processing device | |
CN104142860A (en) | Resource adjusting method and device of application service system | |
CN103473115B (en) | virtual machine placement method and device | |
CN104598316B (en) | A kind of storage resource distribution method and device | |
US9424212B2 (en) | Operating system-managed interrupt steering in multiprocessor systems | |
CN106506670A (en) | A kind of cloud platform virtual resource high speed dispatching method and system | |
CN104021040A (en) | Cloud computing associated task scheduling method and device based on time constraint | |
CN111104210A (en) | Task processing method and device and computer system | |
CN114416352A (en) | Computing resource allocation method and device, electronic equipment and storage medium | |
US9882973B2 (en) | Breadth-first resource allocation system and methods | |
CN112162835A (en) | Scheduling optimization method for real-time tasks in heterogeneous cloud environment | |
US20160210171A1 (en) | Scheduling in job execution | |
CN115658311A (en) | Resource scheduling method, device, equipment and medium | |
CN106528299A (en) | Data processing method and device | |
CN105653347B (en) | A kind of server, method for managing resource and virtual machine manager | |
CN107634978B (en) | Resource scheduling method and device | |
CN115951974B (en) | Management method, system, equipment and medium of GPU virtual machine | |
CN105511959A (en) | Method and device for distributing virtual resource | |
CN102945188A (en) | Method and device for dispatching resources of virtual machine | |
CN115952054A (en) | Simulation task resource management method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||