CN108694083A - Data processing method and device for a server


Info

Publication number
CN108694083A
Authority
CN
China
Prior art keywords
thread
data processing
shared memory
auxiliary thread
worker
Prior art date
Legal status
Granted
Application number
CN201710225059.4A
Other languages
Chinese (zh)
Other versions
CN108694083B (en)
Inventor
朱鑫
许泽伟
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710225059.4A priority Critical patent/CN108694083B/en
Publication of CN108694083A publication Critical patent/CN108694083A/en
Application granted granted Critical
Publication of CN108694083B publication Critical patent/CN108694083B/en
Legal status: Active (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5018 Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the invention discloses a data processing method and device for a server. The network transceiving layer of this embodiment may include a receiving thread and multiple auxiliary threads: the receiving thread receives data processing requests and distributes them to the auxiliary threads, and each auxiliary thread communicates with a business logic thread through a shared memory queue so that the data processing requests are handled. Because there are multiple auxiliary threads, and auxiliary threads, shared memory queues, and business logic threads correspond one to one, the processing capacity of the network transceiving layer can be greatly increased without any lock contention, which substantially improves processing efficiency and helps improve the performance of the network server.

Description

Data processing method and device for a server
Technical field
The present invention relates to the field of communication technology, and in particular to a data processing method and device for a server.
Background technology
With the development of network technology and the continuing growth of user demand, the volume of data faced by network servers is ever larger. How to improve the performance of a network server so that it can handle data better has always been a question of concern.
In the prior art, a network server is generally divided into a network transceiving layer and a business logic layer. The network transceiving layer is mainly used to receive various data processing requests, and the business logic layer then performs the actual data processing according to those requests; the network transceiving layer communicates with the business logic layer mainly through shared memory. For example, after the network transceiving layer receives a data processing request, it saves the request to shared memory; multiple threads in the business logic layer then contend with each other for a lock, and the thread that obtains the lock reads the data processing request saved in shared memory and processes data according to it.
In researching and practicing the prior art, the inventors of the present invention found that in the existing scheme the processing capacity of the network transceiving layer is limited, and frequent contention is required within the business logic layer. Consequently, the scheme is not only relatively complex to implement but also has low processing efficiency, which greatly affects the performance of the network server.
Invention content
Embodiments of the present invention provide a data processing method and device for a server, which are not only simple to implement but can also greatly improve processing efficiency, helping to improve the performance of the network server.
An embodiment of the present invention provides a data processing method for a server, including:
receiving a data processing request through a receiving thread (AcceptThread), where the receiving thread corresponds to multiple auxiliary threads (AuxThread) and each auxiliary thread corresponds to a shared memory (SharedMemory) queue;
selecting an auxiliary thread for the data processing request, and writing the data processing request into the corresponding shared memory queue through the selected auxiliary thread;
calling the business logic thread (Worker) that corresponds one to one with the shared memory queue; and
processing the data processing request in the shared memory queue through the business logic thread.
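The four steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: Python's `queue.Queue` stands in for the shared memory (SharedMemory) queues, plain threads stand in for the receiving, auxiliary, and business logic threads, and all names are assumptions.

```python
import queue
import threading

# One queue per auxiliary thread; queue.Queue stands in for the
# patent's shared memory queue. Thread and queue names are illustrative.
NUM_QUEUES = 3
shared_queues = [queue.Queue() for _ in range(NUM_QUEUES)]
results = []

def business_logic(idx):
    # Business logic thread (Worker): reads only its own queue, so it
    # never contends for a lock with the other business logic threads.
    request = shared_queues[idx].get()
    results.append((idx, request))

def dispatch(request, aux_index):
    # The receiving thread (AcceptThread) selects an auxiliary thread
    # (AuxThread), which writes the request into its own queue.
    shared_queues[aux_index].put(request)

workers = [threading.Thread(target=business_logic, args=(i,))
           for i in range(NUM_QUEUES)]
for w in workers:
    w.start()
dispatch("delete data M", 0)
dispatch("update data M", 1)
dispatch("add data N", 2)
for w in workers:
    w.join()
```

Each business logic thread drains only the queue bound to it, which is the one-to-one correspondence the embodiment relies on to avoid lock contention.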
An embodiment of the present invention also provides a data processing device for a server, including:
a receiving unit, configured to receive a data processing request through a receiving thread, where the receiving thread corresponds to multiple auxiliary threads and each auxiliary thread corresponds to a shared memory queue;
a selecting unit, configured to select an auxiliary thread for the data processing request;
a writing unit, configured to write the data processing request into the corresponding shared memory queue through the selected auxiliary thread;
a calling unit, configured to call the business logic thread that corresponds one to one with the shared memory queue; and
a processing unit, configured to process the data processing request in the shared memory queue through the business logic thread.
The network transceiving layer of this embodiment of the present invention may include a receiving thread and multiple auxiliary threads. The receiving thread receives data processing requests and distributes them to the auxiliary threads, and the auxiliary threads communicate with business logic threads through shared memory queues so that the data processing requests are handled. Because there are multiple auxiliary threads, and auxiliary threads, shared memory queues, and business logic threads correspond one to one, the processing capacity of the network transceiving layer can be greatly increased without any lock contention: each business logic thread only needs to handle the data processing requests in its own shared memory queue, with no need to contend for a lock with other business logic threads. On the whole, compared with the existing scheme, this scheme is not only simple to implement but can also greatly improve processing efficiency, helping to improve the performance of the network server.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is a scenario diagram of the data processing method for a server provided by an embodiment of the present invention;
Fig. 1b is a flowchart of the data processing method for a server provided by an embodiment of the present invention;
Fig. 2 is another flowchart of the data processing method for a server provided by an embodiment of the present invention;
Fig. 3a is a structural diagram of the data processing device for a server provided by an embodiment of the present invention;
Fig. 3b is another structural diagram of the data processing device for a server provided by an embodiment of the present invention;
Fig. 4 is a structural diagram of the server provided by an embodiment of the present invention.
Specific implementation mode
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present invention.
Embodiments of the present invention provide a data processing method and device for a server.
The data processing device of the server may be integrated in a device such as a server, for example a network server. Taking integration in a network server as an example, referring to Fig. 1a, the network server may include a receiving thread (AcceptThread), multiple auxiliary threads (AuxThread), and multiple business logic threads (Worker). The receiving thread receives data processing requests and distributes them to the auxiliary threads, and each auxiliary thread communicates with a business logic thread through a shared memory (SharedMemory) queue so that the data processing requests are handled. Because there are multiple auxiliary threads, the processing capacity of the network transceiving layer (which includes the receiving thread and the auxiliary threads) can be greatly increased. Furthermore, because auxiliary threads, shared memory queues, and business logic threads correspond one to one, each business logic thread only needs to handle the data processing requests in the shared memory queue corresponding to itself (i.e., to that business logic thread), without contending for a lock with other business logic threads, which can greatly improve processing efficiency.
Detailed descriptions are given below. Note that the numbering of the following embodiments is not a limitation on any preferred order of the embodiments.
Embodiment one
This embodiment is described from the perspective of the data processing device of the server. The data processing device may be integrated in a device such as a server, for example a network server.
A data processing method for a server, data processing method for short, includes: receiving a data processing request through a receiving thread, where the receiving thread corresponds to multiple auxiliary threads and each auxiliary thread corresponds to a shared memory queue; selecting an auxiliary thread for the data processing request, and writing the data processing request into the corresponding shared memory queue through the selected auxiliary thread; calling the business logic thread that corresponds one to one with the shared memory queue; and processing the data processing request in the shared memory queue through the business logic thread.
As shown in Figure 1 b, the detailed process of the data processing method can be as follows:
101. Receive a data processing request through the receiving thread.
The network transceiving layer may be divided into a receiving thread and auxiliary threads; one receiving thread corresponds to multiple auxiliary threads, and each auxiliary thread corresponds to a shared memory queue.
For example, suppose the receiving thread corresponds to auxiliary thread A, auxiliary thread B, and auxiliary thread C. Then auxiliary threads A, B, and C each have an independent shared memory queue: for example, auxiliary thread A corresponds to shared memory queue 1, auxiliary thread B corresponds to shared memory queue 2, and auxiliary thread C corresponds to shared memory queue 3, and so on.
102. Select an auxiliary thread for the data processing request.
For example, an auxiliary thread may be selected for the data processing request according to a load balancing policy; a specific example is as follows:
obtain load information of the multiple auxiliary threads corresponding to the receiving thread, select the least-loaded auxiliary thread from the multiple auxiliary threads according to the load information, and distribute the data processing request to the selected auxiliary thread.
The load information may include information such as the number of requests (connection count) each auxiliary thread is currently handling. For example, still supposing the receiving thread corresponds to auxiliary threads A, B, and C: if auxiliary thread A is currently handling 10 connections, auxiliary thread B 15 connections, and auxiliary thread C 13 connections, then the data processing request may be distributed to auxiliary thread A, and so on.
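The least-loaded selection described above can be sketched as follows; the load table and thread names are hypothetical, and the connection counts mirror the example in the text.

```python
# Hypothetical load table: auxiliary thread name -> connection count
# currently being handled, mirroring the example above.
loads = {"aux_a": 10, "aux_b": 15, "aux_c": 13}

def pick_least_loaded(load_info):
    # Select the auxiliary thread with the fewest current connections.
    return min(load_info, key=load_info.get)

print(pick_least_loaded(loads))  # aux_a, as in the example
```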
103. Write the data processing request into the corresponding shared memory queue through the selected auxiliary thread. For example, the step may be as follows:
obtain the shared memory queue corresponding to the selected auxiliary thread according to a preset mapping relation, and write the data processing request into that shared memory queue through the selected auxiliary thread.
The mapping relations between auxiliary threads and shared memory queues may be configured in advance by maintenance personnel according to the needs of the actual application, or may be established by the system itself. That is, before the step of writing the data processing request into the corresponding shared memory queue through the selected auxiliary thread, the data processing method may further include:
establishing one-to-one mapping relations between auxiliary threads and shared memory queues, and saving the mapping relations into a preset database.
In that case, the step of obtaining the shared memory queue corresponding to the selected auxiliary thread according to the preset mapping relation may include: looking up the mapping relations in the database to obtain the shared memory queue corresponding to the selected auxiliary thread.
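The mapping lookup can be sketched as follows; a plain dict stands in for the preset database, and all names are assumptions.

```python
# A dict stands in for the preset database of one-to-one mappings
# between auxiliary threads and shared memory queues.
thread_to_queue = {"aux_a": "queue_1", "aux_b": "queue_2", "aux_c": "queue_3"}

def queue_for(aux_thread):
    # Look up the shared memory queue bound to the selected auxiliary
    # thread; a missing entry would indicate a configuration error.
    return thread_to_queue[aux_thread]

print(queue_for("aux_c"))  # queue_3
```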
104. Call the business logic thread that corresponds one to one with the shared memory queue.
For example, the business logic thread corresponding to the shared memory queue may be called according to a preset correspondence between shared memory queues and business logic threads.
The one-to-one correspondence between shared memory queues and business logic threads may be configured in advance by maintenance personnel according to the needs of the actual application, or may be established by the system itself. That is, before the step of calling the business logic thread that corresponds one to one with the shared memory queue, the data processing method may further include:
establishing one-to-one correspondences between shared memory queues and business logic threads, and saving the correspondences.
In that case, the step of calling the business logic thread that corresponds one to one with the shared memory queue may include: calling the business logic thread that corresponds one to one with the shared memory queue according to the saved correspondence.
105. Process the data processing request in the shared memory queue through the business logic thread.
For example, the business logic thread may read the data processing request in the shared memory queue, determine an operation object and operation content according to the data processing request it has read, and execute the operation content on the operation object.
For example, if the operation object is data M and the operation content is "delete", the business logic thread executes a delete operation on data M.
For another example, if the operation object is data M and the operation content is "update", the business logic thread executes an update operation on data M, and so on.
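Step 105 can be sketched as follows. The request format and the data store are assumptions for illustration; the patent does not specify how a request encodes the operation object and operation content.

```python
def parse_request(request):
    # Split a request into the operation object and operation content
    # described in step 105; the dict format is an assumed encoding.
    return request["object"], request["op"]

def handle(store, request):
    # Execute the operation content on the operation object.
    obj, op = parse_request(request)
    if op == "delete":
        store.pop(obj, None)
    elif op == "update":
        store[obj] = request["value"]
    return store

data = {"M": 1}
handle(data, {"op": "update", "object": "M", "value": 2})  # data: {'M': 2}
handle(data, {"op": "delete", "object": "M"})              # data: {}
```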
From the above it can be seen that the network transceiving layer of this embodiment may include a receiving thread and multiple auxiliary threads: the receiving thread receives data processing requests and distributes them to the auxiliary threads, and the auxiliary threads communicate with business logic threads through shared memory queues so that the data processing requests are handled. Because there are multiple auxiliary threads, and auxiliary threads, shared memory queues, and business logic threads correspond one to one, the processing capacity of the network transceiving layer can be greatly increased without any lock contention: each business logic thread only needs to handle the data processing requests in its own shared memory queue, with no need to contend for a lock with other business logic threads. On the whole, compared with the existing scheme, this scheme is not only simple to implement but can also greatly improve processing efficiency, helping to improve the performance of the network server.
Embodiment two
According to the method described in Embodiment one, an example is described in further detail below.
In this embodiment, description is given by taking the case where the data processing device of the server is integrated in a network server as an example.
As shown in Fig. 2, the detailed process of a data processing method for a server may be as follows:
201. The network server starts its local receiving thread (i.e., the receiving thread on the network server itself) and receives a data processing request through the receiving thread.
202. After receiving the data processing request, the network server selects an auxiliary thread for the data processing request according to a load balancing policy.
The load balancing policy may be configured according to the needs of the actual application. For example, multiple load grades may be set, such as "heavy load", "medium load", and "light load", each with a corresponding threshold. The current load information of each auxiliary thread can be compared with these thresholds, and the current load of each auxiliary thread graded based on the comparison. After grading, the required auxiliary thread can be selected from the auxiliary threads in the "light load" grade according to a preset selection strategy; for example, one auxiliary thread may be selected at random from the auxiliary threads in the "light load" grade, and so on.
The load information may include information such as the number of requests (connection count) each auxiliary thread is currently handling. For example, suppose the receiving thread corresponds to auxiliary threads A, B, C, and D, the condition for "heavy load" is more than 25 connections, the condition for "medium load" is 15 to 25 connections, and the condition for "light load" is fewer than 15 connections. If auxiliary thread A is currently handling 18 connections, auxiliary thread B 30 connections, auxiliary thread C 8 connections, and auxiliary thread D 11 connections, then it can be determined that the load grade of auxiliary thread A is "medium load", that of auxiliary thread B is "heavy load", and those of auxiliary threads C and D are "light load". Accordingly, one auxiliary thread can be selected at random from auxiliary threads C and D, for example auxiliary thread C, and the data processing request distributed to auxiliary thread C.
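The grade-based classification can be sketched as follows, using the thresholds from the example above (more than 25 connections is heavy, 15 to 25 is medium, fewer than 15 is light); thread names and counts are illustrative.

```python
def load_grade(connections):
    # Thresholds follow the example: > 25 heavy, 15-25 medium, < 15 light.
    if connections > 25:
        return "heavy"
    if connections >= 15:
        return "medium"
    return "light"

loads = {"aux_a": 18, "aux_b": 30, "aux_c": 8, "aux_d": 11}
light = [name for name, n in loads.items() if load_grade(n) == "light"]
print(light)  # ['aux_c', 'aux_d']; one of these is then picked at random
```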
Optionally, other approaches may also be used. For example, the loads of the multiple auxiliary threads may be sorted and the least-loaded auxiliary thread selected based on the sorting, as follows:
the network server obtains load information of the multiple auxiliary threads corresponding to the receiving thread, sorts the multiple auxiliary threads according to the load information, selects the least-loaded auxiliary thread from them based on the sorting, and distributes the data processing request to the selected auxiliary thread.
For example, still supposing the receiving thread corresponds to auxiliary threads A, B, C, and D: if auxiliary thread A is currently handling 18 connections, auxiliary thread B 30 connections, auxiliary thread C 8 connections, and auxiliary thread D 11 connections, the auxiliary threads can be sorted, for example by connection count from high to low, i.e. auxiliary thread B > auxiliary thread A > auxiliary thread D > auxiliary thread C. Since auxiliary thread C has the lowest connection count, it is determined to be the least loaded, so the data processing request can be distributed to auxiliary thread C, for example passed to auxiliary thread C by the receiving thread, and so on.
Note that the sorting may also be by connection count from low to high; the implementation is similar to the above and is not repeated here.
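The sort-based alternative can be sketched as follows, with the same hypothetical connection counts as the example above.

```python
loads = {"aux_a": 18, "aux_b": 30, "aux_c": 8, "aux_d": 11}

# Sort auxiliary threads by connection count from high to low, as in the
# example: aux_b > aux_a > aux_d > aux_c; the last entry is the lightest.
ordered = sorted(loads, key=loads.get, reverse=True)
print(ordered)      # ['aux_b', 'aux_a', 'aux_d', 'aux_c']
print(ordered[-1])  # aux_c, the least-loaded auxiliary thread
```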
203. The network server obtains the shared memory queue corresponding to the auxiliary thread selected in step 202 according to a preset mapping relation.
For example, suppose the shared memory queue corresponding one to one with auxiliary thread A is shared memory queue 1, that with auxiliary thread B is shared memory queue 2, that with auxiliary thread C is shared memory queue 3, and that with auxiliary thread D is shared memory queue 4. If auxiliary thread C was selected in step 202, then the shared memory queue corresponding one to one with auxiliary thread C, i.e. shared memory queue 3, can be obtained.
The mapping relations between auxiliary threads and shared memory queues may be configured in advance by maintenance personnel according to the needs of the actual application, or may be established by the system itself. For example, one-to-one mapping relations between auxiliary threads and shared memory queues may be pre-established and saved into a preset database; the shared memory queue corresponding to the selected auxiliary thread can then be obtained by looking up the mapping relations in the database. For details, refer to the previous embodiment; they are not repeated here.
204. The network server writes the data processing request into the shared memory queue through the selected auxiliary thread.
For example, if shared memory queue 3 was obtained in step 203, the data processing request can be written into shared memory queue 3 through auxiliary thread C.
205. The network server calls the business logic thread that corresponds one to one with the shared memory queue.
For example, the business logic thread corresponding to the shared memory queue may be called according to a preset correspondence between shared memory queues and business logic threads.
For example, if the data processing request was written into shared memory queue 3 in step 204, the network server can call the business logic thread corresponding to shared memory queue 3, for example business logic thread 3. Similarly, if in step 204 the data processing request was written into another shared memory queue, such as shared memory queue 2, the network server can call the business logic thread corresponding to shared memory queue 2, for example business logic thread 2, and so on.
The one-to-one correspondence between shared memory queues and business logic threads may be configured in advance by maintenance personnel according to the needs of the actual application, or may be established by the system itself. For example, before a business logic thread is called, one-to-one correspondences between shared memory queues and business logic threads may be established and saved; the business logic thread corresponding one to one with the shared memory queue can then be called according to the saved correspondence. For details, refer to Embodiment one; they are not repeated here.
206. The network server reads the data processing request in the shared memory queue through the business logic thread.
For example, take the case where the business logic thread is business logic thread 3. Since the shared memory queue corresponding to business logic thread 3 is shared memory queue 3, business logic thread 3 can read shared memory queue 3 to obtain the corresponding data processing request. Similarly, if the business logic thread is business logic thread 2, since its corresponding shared memory queue is shared memory queue 2, business logic thread 2 can read shared memory queue 2 to obtain the corresponding data processing request, and so on.
207. The business logic thread determines an operation object and operation content according to the data processing request it has read. For example, the step may be as follows:
if the data processing request indicates deleting data M, it can be determined that the operation object is data M and the operation content is deletion;
if the data processing request indicates modifying data M, it can be determined that the operation object is data M and the operation content is modification;
if the data processing request indicates updating data M, it can be determined that the operation object is data M and the operation content is update;
if the data processing request indicates adding data M, it can be determined that the operation object is data M and the operation content is addition;
and so on; the cases are not enumerated further here.
208. The business logic thread executes the operation content on the operation object. For example, the step may be as follows:
if the operation object is data M and the operation content is deletion, a delete operation can be executed on data M;
if the operation object is data M and the operation content is modification, a modify operation can be executed on data M;
if the operation object is data M and the operation content is update, an update operation can be executed on data M;
if the operation object is data M and the operation content is addition, an add operation can be executed on data M;
and so on; the cases are not enumerated further here.
From the above it can be seen that the network transceiving layer of this embodiment may include a receiving thread and multiple auxiliary threads: the receiving thread receives data processing requests and distributes them to the auxiliary threads, and the auxiliary threads communicate with business logic threads through shared memory queues so that the data processing requests are handled. Because there are multiple auxiliary threads, and auxiliary threads, shared memory queues, and business logic threads correspond one to one, the processing capacity of the network transceiving layer can be greatly increased without any lock contention: each business logic thread only needs to handle the data processing requests in its own shared memory queue, with no need to contend for a lock with other business logic threads. On the whole, compared with the existing scheme, this scheme is not only simple to implement but can also greatly improve processing efficiency, helping to improve the performance of the network server.
Embodiment three
To better implement the above method, an embodiment of the present invention also provides a data processing device of a server, data processing device for short. As shown in Fig. 3a, the data processing device may include a receiving unit 301, a selecting unit 302, a writing unit 303, a calling unit 304, and a processing unit 305, as follows:
(1) receiving unit 301;
The receiving unit 301 is configured to receive a data processing request through a receiving thread.
The network transceiving layer may be divided into a receiving thread and auxiliary threads; one receiving thread corresponds to multiple auxiliary threads, and each auxiliary thread corresponds to a shared memory queue.
For example, suppose the receiving thread corresponds to auxiliary thread A, auxiliary thread B, and auxiliary thread C. Then auxiliary threads A, B, and C each have an independent shared memory queue: for example, auxiliary thread A corresponds to shared memory queue 1, auxiliary thread B corresponds to shared memory queue 2, and auxiliary thread C corresponds to shared memory queue 3, and so on.
(2) selecting unit 302;
The selecting unit 302 is configured to select an auxiliary thread for the data processing request.
For example, the selecting unit 302 may be configured to select an auxiliary thread for the data processing request according to a load balancing policy; a specific example is as follows:
obtain load information of the multiple auxiliary threads corresponding to the receiving thread, select the least-loaded auxiliary thread from the multiple auxiliary threads according to the load information, and distribute the data processing request to the selected auxiliary thread.
The load information may include information such as the number of requests (connection count) each auxiliary thread is currently handling. For example, still supposing the receiving thread corresponds to auxiliary threads A, B, and C: if auxiliary thread A is currently handling 10 connections, auxiliary thread B 15 connections, and auxiliary thread C 13 connections, then the data processing request may be distributed to auxiliary thread A, and so on.
(3) writing unit 303;
The writing unit 303 is configured to write the data processing request, through the selected worker thread, into the corresponding shared memory queue.
For example, the writing unit 303 may be configured to obtain, according to a preset mapping relationship, the shared memory queue corresponding to the selected worker thread, and to write the data processing request into that shared memory queue through the selected worker thread.
The mapping relationship between worker threads and shared memory queues may be configured in advance by maintenance personnel according to the requirements of the actual application, or may be established by the system itself. That is, as shown in Figure 3b, the data processing apparatus may further include an establishing unit 306, as follows:
The establishing unit 306 may be configured to establish a one-to-one mapping relationship between worker threads and shared memory queues, and to save the mapping relationship into a preset database.
In that case, the writing unit 303 may be configured to obtain the shared memory queue corresponding to the selected worker thread by looking up the mapping relationship in the database.
(4) call unit 304;
The calling unit 304 is configured to call the service logic thread that corresponds one-to-one to the shared memory queue.
For example, the calling unit 304 may be configured to call the service logic thread corresponding to the shared memory queue according to a preset correspondence between shared memory queues and service logic threads.
The one-to-one correspondence between shared memory queues and service logic threads may likewise be configured in advance by maintenance personnel according to the requirements of the actual application, or established by the system itself. That is, the establishing unit 306 may also be configured to establish the one-to-one correspondence between shared memory queues and service logic threads and to save that correspondence. In that case, the calling unit 304 may be configured to call the service logic thread corresponding one-to-one to the shared memory queue according to the correspondence saved by the establishing unit 306.
(5) processing unit 305;
The processing unit 305 is configured to process, through the service logic thread, the data processing request in the shared memory queue.
For example, the processing unit 305 may be configured to read the data processing request from the shared memory queue through the service logic thread, determine an operation object and an operation content from the read data processing request through the service logic thread, and execute the operation content on the operation object through the service logic thread.
For example, if the operation object is data M and the operation content is "delete", the processing unit 305 may execute a delete operation on data M through the service logic thread. As another example, if the operation object is data M and the operation content is "update", the processing unit 305 may execute an update operation on data M through the service logic thread, and so on.
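The read, determine, execute sequence performed by the service logic thread can be sketched as a small dispatcher. The request format and the name `handle` are assumptions for illustration, not from the patent:

```python
store = {"M": 1}  # toy data store holding data M

def handle(request: dict, data: dict) -> None:
    """Determine the operation object and content, then execute the content on the object."""
    obj, op = request["object"], request["operation"]
    if op == "delete":
        data.pop(obj, None)
    elif op == "update":
        data[obj] = request["value"]

handle({"object": "M", "operation": "update", "value": 2}, store)
print(store)  # -> {"M": 2}
handle({"object": "M", "operation": "delete"}, store)
print(store)  # -> {}
```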
In specific implementations, each of the above units may be implemented as an independent entity, or the units may be combined arbitrarily and implemented as one or several entities. For the specific implementation of each unit, refer to the foregoing method embodiments, which are not repeated here.
The data processing apparatus of the server may be integrated in a device such as a server, for example a network server.
As can be seen from the above, the receiving unit 301 of the data processing apparatus of this embodiment can receive a data processing request through the receiving thread and distribute it to a worker thread; the writing unit 303 then writes the data processing request, through the worker thread, into the shared memory queue corresponding one-to-one to that worker thread; the calling unit 304 calls the corresponding service logic thread; and the processing unit 305 processes the data processing request in the shared memory queue through that service logic thread. Since there are multiple worker threads, and worker threads, shared memory queues and service logic threads correspond one-to-one, the processing capacity of the network transceiving layer can be greatly improved without lock contention: each service logic thread only needs to handle the data processing requests in its own shared memory queue and never competes for a lock with other service logic threads. Overall, compared with existing schemes, this scheme is not only simple to implement but can also greatly improve processing efficiency, which helps improve the performance of the network server.
Embodiment 4
An embodiment of the present invention further provides a server, which may serve as the network server of the embodiments of the present invention. Fig. 4 shows a schematic structural diagram of the server involved in the embodiment of the present invention. Specifically:
The server may include a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, an input unit 404 and other components. Those skilled in the art will understand that the server structure shown in Fig. 4 does not constitute a limitation on the server, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently. Wherein:
The processor 401 is the control center of the server. It connects the various parts of the entire server through various interfaces and lines, and performs the various functions of the server and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the server as a whole. Optionally, the processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the server, etc. In addition, the memory 402 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. Correspondingly, the memory 402 may further include a memory controller to provide the processor 401 with access to the memory 402.
The server further includes a power supply 403 that supplies power to the various components. Preferably, the power supply 403 may be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging and power consumption management are implemented through the power management system. The power supply 403 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
The server may further include an input unit 404, which may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control.
Although not shown, the server may further include a display unit and the like, which are not described in detail here. Specifically, in this embodiment, the processor 401 in the server loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and runs the application programs stored in the memory 402 to implement various functions, as follows:
Receive a data processing request through a receiving thread, select a worker thread for the data processing request, write the data processing request into the corresponding shared memory queue through the selected worker thread, call the service logic thread corresponding one-to-one to the shared memory queue, and process the data processing request in the shared memory queue through the service logic thread.
For example, the data processing request in the shared memory queue may specifically be read through the service logic thread, an operation object and an operation content may be determined from the read data processing request through the service logic thread, and the operation content may be executed on the operation object through the service logic thread, etc.
Here, one receiving thread corresponds to multiple worker threads, each worker thread corresponds to one shared memory queue, and each shared memory queue also corresponds one-to-one to a service logic thread; that is, worker threads, shared memory queues and service logic threads correspond one-to-one. The mapping relationship between worker threads and shared memory queues, as well as the one-to-one correspondence between shared memory queues and service logic threads, may be configured in advance by maintenance personnel according to the requirements of the actual application, or may be established by the system itself; for details, refer to the foregoing embodiments, which are not repeated here.
Optionally, the worker thread may be selected in many ways. For example, the worker thread may be selected for the data processing request according to load balancing; that is, the application program stored in the memory 402 may further implement the following functions:
Obtain load information of the multiple worker threads corresponding to the receiving thread, select the worker thread with the lightest load from the multiple worker threads according to the load information, and distribute the data processing request to the selected worker thread.
The load information may include information such as the number of requests currently being processed by each worker thread (the connection count).
As can be seen from the above, the network transceiving layer of the server provided by this embodiment may include a receiving thread and multiple worker threads: the receiving thread receives a data processing request and distributes it to a worker thread, and the worker thread communicates with a service logic thread through a shared memory queue so that the data processing request is processed. Since there are multiple worker threads, and worker threads, shared memory queues and service logic threads correspond one-to-one, the processing capacity of the network transceiving layer can be greatly improved without lock contention: each service logic thread only needs to handle the data processing requests in its own shared memory queue and never competes for a lock with other service logic threads. Overall, compared with existing schemes, this scheme is not only simple to implement but can also greatly improve processing efficiency, which helps improve the performance of the network server.
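Putting the pieces together, the receive, select, enqueue and process pipeline summarized above can be sketched end to end. This is an illustrative toy model only; the class and method names are assumptions, Python's `queue.Queue` stands in for a shared memory queue, and the single lock shown protects only the toy result list, not the per-queue traffic (each logic thread still reads exclusively from its own queue):

```python
import queue
import threading

class MiniServer:
    """Toy model: one receiver, per-worker queues, one logic thread per queue."""

    def __init__(self, n_workers: int = 3):
        self.queues = [queue.Queue() for _ in range(n_workers)]
        self.loads = [0] * n_workers           # connection count per worker
        self.processed = []
        self.lock = threading.Lock()           # guards only the shared result list
        self.threads = [threading.Thread(target=self._logic, args=(q,))
                        for q in self.queues]  # one logic thread per queue
        for t in self.threads:
            t.start()

    def receive(self, request) -> None:
        # Select the least-loaded worker and write into its own queue.
        i = min(range(len(self.loads)), key=self.loads.__getitem__)
        self.loads[i] += 1
        self.queues[i].put(request)

    def _logic(self, q: queue.Queue) -> None:
        # Each logic thread consumes only its own queue: no lock contention
        # between logic threads over the requests themselves.
        while True:
            req = q.get()
            if req is None:  # sentinel: shut down
                break
            with self.lock:
                self.processed.append(req)

    def shutdown(self) -> None:
        for q in self.queues:
            q.put(None)
        for t in self.threads:
            t.join()

server = MiniServer()
for n in range(9):
    server.receive("req-%d" % n)
server.shutdown()
print(sorted(server.processed))  # all nine requests were processed
```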
Those of ordinary skill in the art will understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The data processing method and apparatus for a server provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the above description of the embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, for those skilled in the art, there will be changes in the specific implementations and application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. A data processing method for a server, comprising:
receiving a data processing request through a receiving thread, the receiving thread corresponding to multiple worker threads, each worker thread corresponding to one shared memory queue;
selecting a worker thread for the data processing request, and writing the data processing request into the corresponding shared memory queue through the selected worker thread;
calling a service logic thread corresponding one-to-one to the shared memory queue; and
processing the data processing request in the shared memory queue through the service logic thread.
2. The method according to claim 1, wherein selecting a worker thread for the data processing request comprises:
obtaining load information of the multiple worker threads corresponding to the receiving thread;
selecting the worker thread with the lightest load from the multiple worker threads according to the load information; and
distributing the data processing request to the selected worker thread.
3. The method according to claim 1, wherein writing the data processing request into the corresponding shared memory queue through the selected worker thread comprises:
obtaining the shared memory queue corresponding to the selected worker thread according to a preset mapping relationship; and
writing the data processing request into the corresponding shared memory queue through the selected worker thread.
4. The method according to claim 3, wherein before writing the data processing request into the corresponding shared memory queue through the selected worker thread, the method further comprises:
establishing a one-to-one mapping relationship between worker threads and shared memory queues; and
saving the mapping relationship into a preset database;
wherein obtaining the shared memory queue corresponding to the selected worker thread according to the preset mapping relationship comprises: obtaining the shared memory queue corresponding to the selected worker thread by looking up the mapping relationship in the database.
5. The method according to any one of claims 1 to 4, wherein processing the data processing request in the shared memory queue through the service logic thread comprises:
reading the data processing request in the shared memory queue through the service logic thread;
determining an operation object and an operation content according to the read data processing request through the service logic thread; and
executing the operation content on the operation object through the service logic thread.
6. A data processing apparatus for a server, comprising:
a receiving unit, configured to receive a data processing request through a receiving thread, the receiving thread corresponding to multiple worker threads, each worker thread corresponding to one shared memory queue;
a selecting unit, configured to select a worker thread for the data processing request;
a writing unit, configured to write the data processing request into the corresponding shared memory queue through the selected worker thread;
a calling unit, configured to call a service logic thread corresponding one-to-one to the shared memory queue; and
a processing unit, configured to process the data processing request in the shared memory queue through the service logic thread.
7. The apparatus according to claim 6, wherein the selecting unit is specifically configured to obtain load information of the multiple worker threads corresponding to the receiving thread, select the worker thread with the lightest load from the multiple worker threads according to the load information, and distribute the data processing request to the selected worker thread.
8. The apparatus according to claim 6, wherein the writing unit is specifically configured to obtain, according to a preset mapping relationship, the shared memory queue corresponding to the selected worker thread, and to write the data processing request into the corresponding shared memory queue through the selected worker thread.
9. The apparatus according to claim 8, further comprising an establishing unit, wherein:
the establishing unit is configured to establish a one-to-one mapping relationship between worker threads and shared memory queues, and to save the mapping relationship into a preset database; and
the writing unit is specifically configured to obtain the shared memory queue corresponding to the selected worker thread by looking up the mapping relationship in the database.
10. The apparatus according to any one of claims 6 to 9, wherein the processing unit is specifically configured to read the data processing request in the shared memory queue through the service logic thread, determine an operation object and an operation content according to the read data processing request through the service logic thread, and execute the operation content on the operation object through the service logic thread.
CN201710225059.4A 2017-04-07 2017-04-07 Data processing method and device for server Active CN108694083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710225059.4A CN108694083B (en) 2017-04-07 2017-04-07 Data processing method and device for server


Publications (2)

Publication Number Publication Date
CN108694083A true CN108694083A (en) 2018-10-23
CN108694083B CN108694083B (en) 2022-07-29

Family

ID=63842191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710225059.4A Active CN108694083B (en) 2017-04-07 2017-04-07 Data processing method and device for server

Country Status (1)

Country Link
CN (1) CN108694083B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472516A (en) * 2019-07-23 2019-11-19 腾讯科技(深圳)有限公司 A kind of construction method, device, equipment and the system of character image identifying system
CN111352743A (en) * 2018-12-24 2020-06-30 北京新媒传信科技有限公司 Process communication method and device
CN112232770A (en) * 2020-10-17 2021-01-15 严怀华 Business information processing method based on smart community and cloud service center
CN113312184A (en) * 2021-06-07 2021-08-27 平安证券股份有限公司 Service data processing method and related equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101753439A (en) * 2009-12-18 2010-06-23 深圳市融创天下科技发展有限公司 Method for distributing and transmitting streaming media
US20130282853A1 (en) * 2012-04-20 2013-10-24 Electronics And Telecommunications Research Institute Apparatus and method for processing data in middleware for data distribution service
CN103412786A (en) * 2013-08-29 2013-11-27 苏州科达科技股份有限公司 High performance server architecture system and data processing method thereof
US20140351550A1 (en) * 2013-05-23 2014-11-27 Electronics And Telecommunications Research Institute Memory management apparatus and method for threads of data distribution service middleware
US20150081941A1 (en) * 2013-09-18 2015-03-19 International Business Machines Corporation Shared receive queue allocation for network on a chip communication
CN104735077A (en) * 2015-04-01 2015-06-24 积成电子股份有限公司 Method for realizing efficient user datagram protocol (UDP) concurrence through loop buffers and loop queue
US20150331720A1 (en) * 2012-10-19 2015-11-19 uCIRRUS Multi-threaded, lockless data parallelization
CN105827604A (en) * 2016-03-15 2016-08-03 深圳市游科互动科技有限公司 Server and service processing method thereof
US20160378712A1 (en) * 2015-06-23 2016-12-29 International Business Machines Corporation Lock-free processing of stateless protocols over rdma


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUI MA et al., "Effective Data Exchange in Parallel Computing", 2013 International Conference on Information Science and Cloud Computing *
SONG Gang et al., "Research on a parallel compression algorithm based on shared memory and Gzip", Computer Engineering and Design *
WANG Zhigang and HU Yuping (eds.), Computer Operating Systems, 31 August 2005 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352743A (en) * 2018-12-24 2020-06-30 北京新媒传信科技有限公司 Process communication method and device
CN111352743B (en) * 2018-12-24 2023-12-01 北京新媒传信科技有限公司 Process communication method and device
CN110472516A (en) * 2019-07-23 2019-11-19 腾讯科技(深圳)有限公司 A kind of construction method, device, equipment and the system of character image identifying system
CN110472516B (en) * 2019-07-23 2024-10-18 腾讯科技(深圳)有限公司 Method, device, equipment and system for constructing character image recognition system
CN112232770A (en) * 2020-10-17 2021-01-15 严怀华 Business information processing method based on smart community and cloud service center
CN113312184A (en) * 2021-06-07 2021-08-27 平安证券股份有限公司 Service data processing method and related equipment

Also Published As

Publication number Publication date
CN108694083B (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN108694083A (en) A kind of data processing method and device of server
CN108494703A (en) A kind of access frequency control method, device and storage medium
CN105242983B (en) A kind of date storage method and a kind of data storage management service device
CN103399781B (en) Cloud Server and virtual machine management method thereof
CN110162388A (en) A kind of method for scheduling task, system and terminal device
JPH04165541A (en) File rearranging method
CN104219235B (en) A kind of distributed transaction requesting method and device
CN109491928A (en) Buffer control method, device, terminal and storage medium
CN103080903A (en) Scheduler, multi-core processor system, and scheduling method
US6725445B1 (en) System for minimizing notifications in workflow management system
CN109968352A (en) Robot control method, robot and device with storage function
US10103575B2 (en) Power interchange management system and power interchange management method for maintaining a balance between power supply and demand
CN107943423A (en) The management method and computer-readable recording medium of storage resource in cloud system
CN108694188A (en) A kind of newer method of index data and relevant apparatus
CN107071045A (en) A kind of resource scheduling system based on multi-tenant
CN113952720A (en) Game scene rendering method and device, electronic equipment and storage medium
US20120253750A1 (en) Cable management and inventory enhancement
CN108306912A (en) Virtual network function management method and its device, network function virtualization system
CN107391281A (en) A kind of data processing method of server, device and storage medium
CN112221151A (en) Map generation method and device, computer equipment and storage medium
CN107276833A (en) A kind of node information management method and device
CN116737385A (en) Rendering control method, device and rendering system
CN108667750A (en) virtual resource management method and device
CN110471713A (en) A kind of ultrasonic system method for managing resource and device
CN110111203A (en) Batch process, device and the electronic equipment of business datum

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant