CN106533977B - Data processing method for a cloud data center - Google Patents
Data processing method for a cloud data center
- Publication number
- CN106533977B (application CN201610944130.XA)
- Authority
- CN
- China
- Prior art keywords
- data
- current
- queue
- packet
- data packet
- Prior art date
- 2016-11-02
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/622—Queue service order
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Evolutionary Computation (AREA)
- Geometry (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention discloses a data processing method for a cloud data center. The method comprises the following steps: I. determine the producer-consumer model; II. determine the strategy execution conditions; III. execute the corresponding strategy; IV. finish. The invention describes a cloud data center data processing strategy based on a novel multi-consumer data model built on a circular queue. With the consumers' three processing modes, under an experimental setting with a single producer and two consumers, processing strategies are designed for the three cases, guaranteeing that a consumer can quickly find data packets that have not yet been processed and thereby accelerating packet processing. Meanwhile, by setting the values of queue_size and M, repeated processing of data packets by consumers is reduced as far as possible, achieving high-speed data processing at minimal cost.
Description
Technical field
The present invention relates to the field of data processing, and in particular to a data processing method for a cloud data center.
Background art
The data processing problem of a cloud data center is one of the main problems it must solve. During data processing in a cloud data center, data arrives at the data center and first enters a task queue: newly arrived task data is continuously placed into the task queue by the producer, while the virtual machines that handle tasks act as consumers, taking data packets out of the task queue and processing them. This process can be regarded as a producer-consumer problem.
The producer-consumer problem describes the issues that arise at run time between two kinds of threads that share a fixed-size buffer, the so-called "producers" and "consumers". The producer's main job is to generate a certain amount of data, put it into the buffer, and then repeat this process. At the same time, the consumer consumes the data in the buffer. The constraint is that the producer must not add data when the buffer is full, and the consumer must not consume data when the buffer is empty.
In current solutions, consumers may process the same data packet repeatedly while handling packets, which wastes resources and energy.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art by providing a data processing method for a cloud data center; the present invention can accelerate the consumers' processing of data packets.
The present invention adopts the following technical scheme to solve the above technical problem:
A data processing method for a cloud data center proposed according to the present invention comprises the following steps:
Step 1: determine the producer-consumer model; the model is a circular queue;
Step 2: the producer places data packets one by one, in order, starting from the initial position of the circular queue, and at the same time writes each packet's information into a data unit packet_unit; the data unit packet_unit stores each data packet's packet number, length, and position in the circular memory queue;
Step 3: determine which strategy execution condition holds and execute the corresponding strategy, specifically as follows:
if the position of the current data packet to be processed is less than the queue length and the packet number received at the current tail of the queue is greater than the packet number currently to be processed, process the data packets from g_current+M to queue_size-1; wherein the shared element g_current indicates the position in the queue at which the producer is currently producing a data packet, queue_size is the length of the circular queue, and M is the interval step of the data processing;
if the position of the current data packet to be processed is less than the queue length and the packet number currently received is greater than the packet number currently to be processed, process the data packets from 0 to g_current;
if the position of the current data packet to be processed is greater than or equal to the queue length, the packet position is taken as (g_current+M) % queue_size; if the packet number received at that position is greater than the packet number currently to be processed, process the data packets from (g_current+M) % queue_size to g_current.
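As an illustrative worked example (hypothetical numbers, not from the patent text): suppose queue_size = 8, M = 1, the producer is at g_current = 5, and the next packet to be processed has number g_next_pkt = 20. Since g_current + M = 6 < 8, the first two conditions are checked: if the packet stored at the tail position 7 carries number 23 > 20, the consumer processes positions 6 through 7; otherwise, if the packet stored at position 5 carries number 22 > 20, the consumer processes positions 0 through 5. If instead g_current = 7, then g_current + M = 8 >= queue_size, the candidate position is (7 + 1) % 8 = 0, and, if the packet number there exceeds 20, the consumer processes positions 0 through 7, i.e. the whole ring.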
As a further optimization of the data processing method for a cloud data center of the present invention, M=1.
As a further optimization of the data processing method for a cloud data center of the present invention, M=1 indicates that the producer and the consumers process at the same speed.
As a further optimization of the data processing method for a cloud data center of the present invention, M=2.
As a further optimization of the data processing method for a cloud data center of the present invention, M=3.
Compared with the prior art, the above technical scheme of the present invention has the following technical effects:
(1) with the consumers' three processing strategies, under an experimental setting with a single producer and two consumers, a consumer can quickly find data packets that have not yet been processed, thereby accelerating packet processing;
(2) in practice, by setting the values of queue_size and M, repeated processing of data packets by consumers can be reduced as far as possible, achieving high-speed data processing at minimal cost;
(3) the key point of the invention is that multiple consumers select, according to the situation, the corresponding strategy for processing the data in the circular queue, achieving high-speed packet processing with minimal repeated-processing overhead;
(4) the invention proposes a multi-consumer data processing strategy based on a circular queue, analyzes the multi-consumer data processing problem in three cases, and proposes a corresponding data processing policy for each case; the proposed data processing method reduces the number of times data packets are processed repeatedly and improves the efficiency of multi-consumer data processing.
Brief description of the drawings
Fig. 1 is a schematic diagram of the circular queue of the present invention;
Fig. 2 is the consumer processing flow chart of the strategy of the present invention;
Fig. 3 is a schematic diagram of case A of the strategy of the present invention;
Fig. 4 is a schematic diagram of case B of the strategy of the present invention;
Fig. 5 is a schematic diagram of case C of the strategy of the present invention;
Fig. 6 is the flow chart of the method.
Detailed description of embodiments
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings:
To solve the problem stated in the background section, current approaches must put the producer to sleep when the buffer is full (or simply discard the data); the producer is woken up only after a consumer next consumes data from the buffer, and it then resumes adding data to the buffer. Likewise, the consumer can be put to sleep when the buffer is empty and woken up once the producer has added data to the buffer. Inter-process communication is generally used to solve this problem; a common technique is the semaphore method. If the solution is incomplete, deadlock easily occurs: when deadlock happens, both threads fall asleep, each waiting for the other to wake it up. The problem can also be generalized to multiple producers and multiple consumers. A semaphore-based sketch is shown below.
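For context only, the following is a minimal C sketch of the classic semaphore-based producer-consumer solution mentioned above; it is not the claimed method, and names such as BUF_SIZE, producer_put and consumer_get are illustrative assumptions:

```c
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 8                      /* illustrative buffer size */

static int buf[BUF_SIZE];
static int in_pos = 0, out_pos = 0;     /* ring-buffer write/read indices */

static sem_t empty_slots;               /* counts free slots, initialised to BUF_SIZE */
static sem_t full_slots;                /* counts filled slots, initialised to 0 */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void pc_init(void)
{
    sem_init(&empty_slots, 0, BUF_SIZE);
    sem_init(&full_slots, 0, 0);
}

void producer_put(int item)
{
    sem_wait(&empty_slots);             /* producer sleeps while the buffer is full */
    pthread_mutex_lock(&lock);
    buf[in_pos] = item;
    in_pos = (in_pos + 1) % BUF_SIZE;
    pthread_mutex_unlock(&lock);
    sem_post(&full_slots);              /* wakes a consumer sleeping on an empty buffer */
}

int consumer_get(void)
{
    sem_wait(&full_slots);              /* consumer sleeps while the buffer is empty */
    pthread_mutex_lock(&lock);
    int item = buf[out_pos];
    out_pos = (out_pos + 1) % BUF_SIZE;
    pthread_mutex_unlock(&lock);
    sem_post(&empty_slots);             /* wakes a producer sleeping on a full buffer */
    return item;
}
```

Acquiring the counting semaphore before the mutex, as above, is what avoids the mutual-sleep deadlock described in the previous paragraph.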
Fig. 6 is the flow chart of the method. A data processing method for a cloud data center comprises the following steps:
I. determine the producer-consumer model;
II. determine the strategy execution conditions;
III. execute the corresponding strategy;
IV. finish.
Preferably, step I uses a ring-based (circular) queue.
First, several basic definitions and issues involved in the present invention are introduced.
1) Define a circular data queue with queue length queue_size. This value is determined by actual needs and system performance.
2) Define a shared element g_current, which indicates the position in the queue at which the producer is currently producing a data packet.
3) Lock-freedom here means guaranteeing that producers and consumers never handle the same memory region at the same time. The consumer therefore starts each pass over the circular queue at position g_current+M, which realizes the lock-free property. The size of M is determined by the processing speeds of the producers and consumers. The present invention takes the ideal case M=1, which indicates that the producer and the consumers process at the same speed.
4) Define a data unit packet_unit, which stores each data packet's packet number counter, length len, and position x in the circular memory queue.
5) Define g_next_pkt as the number of the data packet that currently needs to be processed.
The producer places data packets one by one, in order, starting from the initial position of the circular queue, and at the same time writes each packet's information into the data unit packet_unit, as shown in Fig. 1; a data-structure sketch in C is given below.
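For concreteness, here is a minimal C sketch of the data structures defined above (circular queue of length queue_size, shared element g_current, interval step M, packet_unit with fields counter, len and x, and g_next_pkt); the concrete queue length and the producer_put helper are illustrative assumptions, not part of the patent text:

```c
#include <stddef.h>
#include <stdint.h>

#define QUEUE_SIZE 1024u        /* queue_size: set from actual needs and system performance */
#define M          1u           /* interval step; M = 1 assumes equal producer/consumer speed */

/* packet_unit: descriptor saved for each data packet placed in the ring. */
typedef struct {
    uint64_t counter;           /* packet number of the stored packet */
    size_t   len;               /* packet length */
    size_t   x;                 /* position of the packet in the circular memory queue */
} packet_unit;

/* The circular queue shared by the producer and the consumers. */
typedef struct {
    packet_unit units[QUEUE_SIZE];
    volatile size_t   g_current;    /* position where the producer is currently producing */
    volatile uint64_t g_next_pkt;   /* number of the packet that currently needs processing */
} ring_queue;

/* Producer side: place packets one by one, in order, from the initial position. */
static void producer_put(ring_queue *q, uint64_t pkt_no, size_t len)
{
    size_t pos = q->g_current;
    q->units[pos].counter = pkt_no;
    q->units[pos].len     = len;
    q->units[pos].x       = pos;
    q->g_current = (pos + 1) % QUEUE_SIZE;   /* advance and wrap around the ring */
}
```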
Preferably, the strategy execution conditions in step II fall into two classes and three cases.
The present invention divides the way consumers handle data into two classes, three cases in total; depending on the case, the data at different positions in the circular queue is processed, so as to accelerate data processing. Fig. 2 is the consumer processing flow chart.
It is a kind of: g_current+1 < queue_size
Such situation is that the position of current data packet to be processed is less than queue length.Then distinguish estimate of situation A and B into
Row data processing.
Situation A(packet_unit [queue_size-1] .counter > g_next_pkt)
This indicates that the packet numbers that current tail of the queue receives have been above current packet numbers to be treated, this situation explanation
Current the smallest Bale No. (i.e. current Bale No. to be treated) is in the second half section of queue.So being handled at this time from g_current+1
To the data packet of queue_size-1, following Fig. 3.
Case B (packet_unit[g_current].counter > g_next_pkt)
This means that the packet number just received is greater than the packet number currently to be processed, indicating that the currently smallest packet number (i.e. the packet number currently to be processed) lies in the first half of the queue. Therefore the data packets from 0 to g_current are processed at this point, as shown in Fig. 4.
Class two: g_current+1 >= queue_size
In this class, the position of the current data packet to be processed is greater than or equal to the queue length, and the packet position is therefore taken as (g_current+1) % queue_size.
Case C (packet_unit[(g_current+1) % queue_size].counter > g_next_pkt)
This means that the packet number received at that position is greater than the packet number currently to be processed, indicating that the currently smallest packet number (i.e. the packet number currently to be processed) lies between (g_current+1) % queue_size and g_current. Therefore the data packets from (g_current+1) % queue_size to g_current are processed at this point. Since M=1 here, this case amounts to processing the entire circular queue, as shown in Fig. 5.
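The case selection described above can be sketched as follows in C, reusing the ring_queue structure from the earlier sketch; process_packet and the exact scanning loop are illustrative assumptions made here, not text of the patent:

```c
/* Placeholder for application-specific packet handling. */
static void process_packet(packet_unit *u)
{
    (void)u;
}

/* Scan ring positions from 'from' to 'to' inclusive, wrapping around the ring,
 * and process every packet that has not yet been processed. */
static void process_range(ring_queue *q, size_t from, size_t to)
{
    for (size_t i = from; ; i = (i + 1) % QUEUE_SIZE) {
        if (q->units[i].counter >= q->g_next_pkt)
            process_packet(&q->units[i]);
        if (i == to)
            break;
    }
}

/* One consumer pass implementing cases A, B and C. */
static void consumer_step(ring_queue *q)
{
    size_t cur = q->g_current;

    if (cur + M < QUEUE_SIZE) {
        /* Class one: the candidate position does not cross the end of the ring. */
        if (q->units[QUEUE_SIZE - 1].counter > q->g_next_pkt) {
            /* Case A: the oldest unprocessed packet lies in the second half. */
            process_range(q, cur + M, QUEUE_SIZE - 1);
        } else if (q->units[cur].counter > q->g_next_pkt) {
            /* Case B: the oldest unprocessed packet lies in the first half. */
            process_range(q, 0, cur);
        }
    } else {
        /* Class two: the candidate position wraps around the ring. */
        size_t pos = (cur + M) % QUEUE_SIZE;
        if (q->units[pos].counter > q->g_next_pkt) {
            /* Case C: scan from pos to g_current; with M = 1 this is the whole ring. */
            process_range(q, pos, cur);
        }
    }
}
```

In a real deployment each consumer would also advance g_next_pkt after processing (for example with an atomic update) so that other consumers skip already-handled packets; that bookkeeping is omitted from this sketch.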
With the consumers' three processing strategies, under an experimental setting with a single producer and two consumers, a consumer can quickly find data packets that have not yet been processed, thereby accelerating packet processing.
In practice, by setting the values of queue_size and M, repeated processing of data packets by consumers can be reduced as far as possible, achieving high-speed data processing at minimal cost.
The key point of the invention is that multiple consumers select, according to the situation, the corresponding strategy for processing the data in the circular queue, achieving high-speed packet processing with minimal repeated-processing overhead.
The invention proposes a multi-consumer data processing strategy based on a circular queue, analyzes the multi-consumer data processing problem in three cases, and proposes a corresponding data processing policy for each case. The proposed data processing method reduces the number of times data packets are processed repeatedly and improves the efficiency of multi-consumer data processing.
Obviously, the embodiments described above are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art, based on the embodiments of the present invention and without creative effort, shall fall within the protection scope of the present invention.
Claims (5)
1. A data processing method for a cloud data center, characterized by comprising the following steps:
Step 1: determine the producer-consumer model; the model is a circular queue;
Step 2: the producer places data packets one by one, in order, starting from the initial position of the circular queue, and at the same time writes each packet's information into a data unit packet_unit; the data unit packet_unit is used to store each data packet's packet number, length, and position in the circular memory queue;
Step 3: determine which strategy execution condition holds and execute the corresponding strategy, specifically as follows:
if the position of the current data packet to be processed is less than the queue length and the packet number received at the current tail of the queue is greater than the packet number currently to be processed, process the data packets from g_current+M to queue_size-1; wherein the shared element g_current indicates the position in the queue at which the producer is currently producing a data packet, queue_size is the length of the circular queue, and M is the interval step of the data processing;
if the position of the current data packet to be processed is less than the queue length and the packet number currently received is greater than the packet number currently to be processed, process the data packets from 0 to g_current;
if the position of the current data packet to be processed is greater than or equal to the queue length, the packet position is taken as (g_current+M) % queue_size; if the packet number received at that position is greater than the packet number currently to be processed, process the data packets from (g_current+M) % queue_size to g_current.
2. The data processing method for a cloud data center according to claim 1, characterized in that M=1.
3. The data processing method for a cloud data center according to claim 2, characterized in that M=1 indicates that the producer and the consumers process at the same speed.
4. The data processing method for a cloud data center according to claim 1, characterized in that M=2.
5. The data processing method for a cloud data center according to claim 1, characterized in that M=3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610944130.XA CN106533977B (en) | 2016-11-02 | 2016-11-02 | Data processing method for a cloud data center |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106533977A CN106533977A (en) | 2017-03-22 |
CN106533977B (en) | 2019-05-17
Family
ID=58292304
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610944130.XA Active CN106533977B (en) | Data processing method for a cloud data center | 2016-11-02 | 2016-11-02
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106533977B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0760501A1 (en) * | 1995-09-04 | 1997-03-05 | Hewlett-Packard Company | Data handling system with circular queue formed in paged memory |
CN102591843A (en) * | 2011-12-30 | 2012-07-18 | 中国科学技术大学苏州研究院 | Inter-core communication method for multi-core processor |
CN103218176A (en) * | 2013-04-02 | 2013-07-24 | 中国科学院信息工程研究所 | Data processing method and device |
Non-Patent Citations (2)
Title |
---|
"A scalable multi-producer multi-consumer wait-free ring buffer";Andrew Barrington,等;《SAC "15 Proceedings of the 30th Annual ACM Symposium on Applied Computing》;20150417;全文 * |
"JetStream:enabling high performance event streaming across cloud data-centers";Radu Tudoran,等;《DEBS "14 Proceedings of the 8th ACM International Conference on Distributed Event-Based Systems》;20140529;全文 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |