CN108694083B - Data processing method and device for server

Data processing method and device for server

Info

Publication number
CN108694083B
CN108694083B (application CN201710225059.4A)
Authority
CN
China
Prior art keywords
thread
data processing
shared memory
processing request
memory queue
Prior art date
Legal status
Active
Application number
CN201710225059.4A
Other languages
Chinese (zh)
Other versions
CN108694083A (en)
Inventor
朱鑫
许泽伟
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710225059.4A
Publication of CN108694083A
Application granted
Publication of CN108694083B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5018 Thread allocation

Abstract

The embodiment of the invention discloses a data processing method and a data processing apparatus for a server. The network transceiving layer of the embodiment may include an accept thread and a plurality of auxiliary threads: the accept thread receives data processing requests and distributes them to the auxiliary threads, and each auxiliary thread communicates with a business logic thread through its own shared memory queue so that the data processing requests can be processed.

Description

Data processing method and device for server
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a data processing method and apparatus for a server.
Background
With the development of network technology and the continual growth of user demand, the volume of data that a network server must handle keeps increasing, and how to improve the performance of a network server so that it can process data more effectively has long been a pressing concern.
In the prior art, a network server is generally divided into a network transceiving layer and a service logic layer. The network transceiving layer mainly receives the various data processing requests, while the service logic layer processes data according to those requests; the two layers communicate mainly through shared memory. For example, after the network transceiving layer receives a data processing request, it may store the request in the shared memory; multiple threads in the service logic layer then contend for a lock, and whichever thread wins the lock reads the stored request and processes the data accordingly.
In the course of research and practice on the prior art, the inventors found that the existing scheme suffers on two counts: the processing capability of the network transceiving layer is limited, and the service logic layer must contend for the lock frequently. The scheme is therefore complex to implement and low in processing efficiency, which greatly degrades the performance of the network server.
Disclosure of Invention
Embodiments of the present invention provide a data processing method and apparatus for a server that are simple to implement, can greatly improve processing efficiency, and help improve the performance of a network server.
The embodiment of the invention provides a data processing method of a server, which comprises the following steps:
receiving a data processing request through an accept thread (AcceptThread), wherein the accept thread corresponds to a plurality of auxiliary threads (AuxThread), and each auxiliary thread corresponds to a shared memory (SharedMemory) queue;
selecting an auxiliary thread for the data processing request, and writing the data processing request into a corresponding shared memory queue through the selected auxiliary thread;
calling the business logic threads (Worker) that correspond one-to-one with the shared memory queues;
and processing the data processing request in the shared memory queue through the business logic thread.
An embodiment of the present invention further provides a data processing apparatus for a server, including:
a receiving unit, configured to receive a data processing request through an accept thread, wherein the accept thread corresponds to a plurality of auxiliary threads, and each auxiliary thread corresponds to a shared memory queue;
a selecting unit, configured to select an auxiliary thread for the data processing request;
a writing unit, configured to write the data processing request into the corresponding shared memory queue through the selected auxiliary thread;
a calling unit, configured to call the business logic threads that correspond one-to-one with the shared memory queues;
and a processing unit, configured to process the data processing request in the shared memory queue through the business logic thread.
In embodiments of the present invention, the network transceiving layer may comprise an accept thread and a plurality of auxiliary threads: the accept thread receives data processing requests and distributes them to the auxiliary threads, and each auxiliary thread communicates with a business logic thread through a shared memory queue so that the requests can be processed. Because there are multiple auxiliary threads, each in one-to-one correspondence with a shared memory queue and a business logic thread, the processing capability of the network transceiving layer can be greatly improved without any lock contention: each business logic thread only processes the requests in its own shared memory queue and never contends for a lock with other business logic threads. Compared with the existing scheme, this scheme is therefore simple to implement, can greatly improve processing efficiency, and helps improve the performance of the network server.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1a is a schematic view of a data processing method of a server according to an embodiment of the present invention;
FIG. 1b is a flowchart of a data processing method of a server according to an embodiment of the present invention;
fig. 2 is another flowchart of a data processing method of a server according to an embodiment of the present invention;
FIG. 3a is a schematic structural diagram of a data processing apparatus of a server according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of another structure of a data processing apparatus of a server according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the given embodiments without creative effort shall fall within the protection scope of the present invention.
The embodiment of the invention provides a data processing method and device of a server.
The data processing apparatus of the server may be integrated in a server, such as a network server. Taking integration in a network server as an example, and referring to fig. 1a, the network server may include an accept thread (AcceptThread), a plurality of auxiliary threads (AuxThread), and a plurality of business logic threads (Worker). The accept thread receives data processing requests and distributes them to the auxiliary threads, and each auxiliary thread communicates with a business logic thread through a shared memory (SharedMemory) queue so that the requests can be processed. Because there are multiple auxiliary threads, the processing capability of the network transceiving layer (comprising the accept thread and the auxiliary threads) can be greatly improved. In addition, because each auxiliary thread corresponds one-to-one with a shared memory queue and a business logic thread, each business logic thread only needs to process the requests in its own shared memory queue and never contends for a lock with other business logic threads, so processing efficiency can be greatly improved.
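The absence of lock contention follows directly from this pairing: each shared memory queue has exactly one producer (its auxiliary thread) and exactly one consumer (its business logic thread). The patent does not specify a queue implementation, so the following single-producer/single-consumer ring buffer is only a minimal illustrative sketch in C++; the names (SpscQueue, push, pop) are invented here, and a real deployment would place the buffer in a shared memory region rather than ordinary process memory.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Minimal single-producer/single-consumer ring buffer: with exactly one
// auxiliary thread pushing and one business logic thread popping, two
// atomic indices are enough and no lock is ever taken. Illustrative only;
// the patent does not describe the queue's internal layout.
template <typename T, std::size_t Capacity>
class SpscQueue {
public:
    // Called by the auxiliary thread only.
    bool push(const T& item) {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % Capacity;
        if (next == tail_.load(std::memory_order_acquire))
            return false;  // queue full; the caller may retry later
        buf_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }

    // Called by the business logic (Worker) thread only.
    std::optional<T> pop() {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return std::nullopt;  // queue empty
        T item = buf_[tail];
        tail_.store((tail + 1) % Capacity, std::memory_order_release);
        return item;
    }

private:
    std::array<T, Capacity> buf_{};
    std::atomic<std::size_t> head_{0};  // next slot the producer writes
    std::atomic<std::size_t> tail_{0};  // next slot the consumer reads
};
```

In a cross-process deployment the same layout could be placed in a region obtained via mmap or shmget; the single-producer/single-consumer discipline is what makes the two atomic indices sufficient.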
Detailed descriptions are given below. It should be noted that the numbering of the following embodiments is not intended to limit their order of preference.
Embodiment I,
This embodiment will be described from the perspective of the data processing apparatus of a server, which may be integrated in a server such as a network server.
A data processing method of a server (data processing method for short) includes: receiving a data processing request through an accept thread, wherein the accept thread corresponds to a plurality of auxiliary threads and each auxiliary thread corresponds to a shared memory queue; selecting an auxiliary thread for the data processing request, and writing the data processing request into the corresponding shared memory queue through the selected auxiliary thread; calling the business logic threads that correspond one-to-one with the shared memory queues; and processing the data processing request in the shared memory queue through the business logic thread.
As shown in fig. 1b, the specific flow of the data processing method may be as follows:
101. A data processing request is received through the accept thread.
The network transceiving layer can be divided into the accept thread and the auxiliary threads: one accept thread corresponds to a plurality of auxiliary threads, and each auxiliary thread corresponds to one shared memory queue.
For example, suppose the accept thread corresponds to auxiliary thread A, auxiliary thread B, and auxiliary thread C. Each of them has its own independent shared memory queue: auxiliary thread A corresponds to shared memory queue 1, auxiliary thread B to shared memory queue 2, auxiliary thread C to shared memory queue 3, and so on.
102. An auxiliary thread is selected for the data processing request.
For example, an auxiliary thread may be selected for the data processing request according to a load balancing policy, which may proceed as follows:
Acquire load information of the plurality of auxiliary threads corresponding to the accept thread, select the auxiliary thread with the lightest load according to the load information, and distribute the data processing request to the selected auxiliary thread.
For example, again taking the accept thread corresponding to auxiliary threads A, B, and C: if auxiliary thread A is currently handling 10 connections, auxiliary thread B 15 connections, and auxiliary thread C 13 connections, the data processing request is allocated to auxiliary thread A, and so on.
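A minimal sketch of this least-loaded selection follows; the connection-count metric comes from the example above, while the AuxLoad struct and selectLeastLoaded function are invented names, and the code assumes at least one auxiliary thread exists.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical load record: the patent measures load by the number of
// connections an auxiliary thread is currently handling.
struct AuxLoad {
    int thread_id;    // identifies the auxiliary thread
    int connections;  // connections currently being processed
};

// Return the id of the least-loaded auxiliary thread.
// Assumes loads is non-empty (there is at least one auxiliary thread).
int selectLeastLoaded(const std::vector<AuxLoad>& loads) {
    return std::min_element(loads.begin(), loads.end(),
                            [](const AuxLoad& a, const AuxLoad& b) {
                                return a.connections < b.connections;
                            })
        ->thread_id;
}

// With loads {A: 10, B: 15, C: 13}, selectLeastLoaded returns A's id,
// matching the worked example above.
```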
103. The data processing request is written into the corresponding shared memory queue through the selected auxiliary thread. For example, this may proceed as follows:
Acquire the shared memory queue corresponding to the selected auxiliary thread according to a preset mapping relationship, and write the data processing request into that queue through the selected auxiliary thread.
The mapping relationship between auxiliary threads and shared memory queues may be preset by maintenance personnel according to the requirements of the actual application, or established by the system itself. That is, before the step of writing the data processing request into the corresponding shared memory queue through the selected auxiliary thread, the data processing method may further include:
establishing a one-to-one mapping relationship between the auxiliary threads and the shared memory queues, and storing the mapping relationship in a preset database.
Then the step of acquiring the shared memory queue corresponding to the selected auxiliary thread according to the preset mapping relationship may specifically include: obtaining that queue by looking up the mapping relationship in the database.
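A minimal sketch of such a mapping table follows; an in-memory map stands in for the "preset database" of the patent, and every identifier here is invented for illustration.

```cpp
#include <unordered_map>

using AuxThreadId = int;  // invented id type for auxiliary threads
using QueueId = int;      // invented id type for shared memory queues

// One-to-one mapping established before any request is served, standing
// in for the "preset database" described in the text.
std::unordered_map<AuxThreadId, QueueId> aux_to_queue = {
    {0, 1},  // auxiliary thread A -> shared memory queue 1
    {1, 2},  // auxiliary thread B -> shared memory queue 2
    {2, 3},  // auxiliary thread C -> shared memory queue 3
};

// Lookup used in the step "acquire the shared memory queue corresponding
// to the selected auxiliary thread"; throws if the mapping is missing.
QueueId queueFor(AuxThreadId aux) { return aux_to_queue.at(aux); }
```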
104. The business logic threads that correspond one-to-one with the shared memory queues are called.
For example, the business logic thread corresponding to a shared memory queue may be called according to a preset correspondence between the shared memory queues and the business logic threads.
The one-to-one correspondence between the shared memory queues and the business logic threads may likewise be preset by maintenance personnel according to the requirements of the actual application, or established by the system itself. That is, before the step of calling the business logic thread corresponding to the shared memory queue, the data processing method may further include:
establishing and storing a one-to-one correspondence between the shared memory queues and the business logic threads.
The calling step may then specifically include: calling the business logic threads that correspond one-to-one with the shared memory queues according to the stored correspondence.
105. The data processing request in the shared memory queue is processed through the business logic thread.
For example, the business logic thread may read the data processing request in the shared memory queue, determine an operation object and operation content from the request, and then execute the operation content on the operation object.
For example, if the operation object is data M and the operation content is "delete", the business logic thread performs a delete operation on data M.
For another example, if the operation content is "update", the business logic thread performs an update operation on data M, and so on.
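The patent does not define a wire format for requests, so the decode-and-dispatch sketch below invents a small DataRequest struct with an operation enum purely for illustration; the real request layout is left open by the text.

```cpp
#include <string>

// Invented request shape: the text only says a request names an
// operation object and an operation content.
enum class Op { Delete, Modify, Update, Add };

struct DataRequest {
    std::string object;  // operation object, e.g. "data M"
    Op op;               // operation content, e.g. Op::Delete
};

// Executed by the business logic thread after reading its own queue:
// determine the operation object and content, then apply the operation.
void process(const DataRequest& req) {
    switch (req.op) {
        case Op::Delete: /* delete req.object from storage */ break;
        case Op::Modify: /* modify req.object in place     */ break;
        case Op::Update: /* overwrite req.object           */ break;
        case Op::Add:    /* insert req.object              */ break;
    }
}
```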
As can be seen from the above, the network transceiving layer of this embodiment may include an accept thread and a plurality of auxiliary threads: the accept thread receives data processing requests and distributes them to the auxiliary threads, and each auxiliary thread communicates with a business logic thread through a shared memory queue so that the requests can be processed. Because there are multiple auxiliary threads, each in one-to-one correspondence with a shared memory queue and a business logic thread, the processing capability of the network transceiving layer can be greatly improved without any lock contention: each business logic thread only processes the requests in its own shared memory queue and never contends for a lock with other business logic threads. Compared with the existing scheme, this scheme is therefore simple to implement, can greatly improve processing efficiency, and helps improve the performance of the network server.
Embodiment II,
The method described in Embodiment I is further illustrated below by way of example.
In this embodiment, the data processing apparatus of the server is described as being integrated in a network server.
As shown in fig. 2, a specific flow of a data processing method of a server may be as follows:
201. The network server starts a local accept thread (i.e., on the network server itself) and receives a data processing request through the accept thread.
202. After receiving the data processing request, the network server selects an auxiliary thread for it according to a load balancing policy.
The load balancing policy may be set according to the requirements of the actual application. For example, several load levels may be defined, such as "heavy load", "medium load", and "light load", each with a corresponding threshold. The current load information of each auxiliary thread is compared with the thresholds and the thread is classified accordingly. After classification, a required auxiliary thread may be chosen from those at the "light load" level according to a preset selection policy, for example by picking one of them at random.
For example, take an accept thread corresponding to auxiliary threads A, B, C, and D, with "heavy load" meaning more than 25 connections, "medium load" 15 to 25 connections, and "light load" fewer than 15 connections. If auxiliary thread A is currently handling 18 connections, B 30, C 8, and D 11, then A is classified as "medium load", B as "heavy load", and C and D as "light load". An auxiliary thread, say auxiliary thread C, may therefore be selected from C and D, and the data processing request is assigned to it.
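A sketch of this threshold-based classification follows, using the thresholds from the worked example; the enum and function names are invented.

```cpp
enum class LoadLevel { Light, Medium, Heavy };

// Thresholds taken from the example above: more than 25 connections is
// "heavy load", 15 to 25 is "medium load", fewer than 15 is "light load".
LoadLevel classify(int connections) {
    if (connections > 25) return LoadLevel::Heavy;
    if (connections >= 15) return LoadLevel::Medium;
    return LoadLevel::Light;
}

// With A = 18, B = 30, C = 8, D = 11: A is Medium, B is Heavy, and C and
// D are Light, so the request goes to one of C or D (e.g., at random).
```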
Optionally, other manners may also be adopted; for example, the loads of the auxiliary threads may be sorted and the auxiliary thread with the lightest load selected from the sorted list. Specifically:
the network server acquires load information of a plurality of assistant threads corresponding to the receiving thread, sorts the assistant threads according to the load information, selects an assistant thread with the lightest load from the assistant threads based on the sorting, and distributes the data processing request to the selected assistant thread.
For example, again taking auxiliary threads A, B, C, and D with 18, 30, 8, and 11 current connections respectively, the threads may be sorted by connection count from high to low: auxiliary thread B > auxiliary thread A > auxiliary thread D > auxiliary thread C.
It should be noted that the connection counts may also be sorted from low to high; the implementation is similar to the above and is not repeated here.
203. The network server obtains the shared memory queue corresponding to the auxiliary thread selected in step 202 according to a preset mapping relationship.
For example, suppose auxiliary threads A, B, C, and D correspond one-to-one with shared memory queues 1, 2, 3, and 4 respectively. If auxiliary thread C was selected in step 202, then its one-to-one shared memory queue, namely shared memory queue 3, is obtained.
For example, the mapping relationships between the auxiliary threads and the shared memory queues may be established in advance and stored in a preset database, so that the shared memory queue corresponding to the selected auxiliary thread can be obtained by looking up the mapping relationship in the database.
204. The network server writes the data processing request into the shared memory queue through the selected auxiliary thread.
For example, if shared memory queue 3 was obtained in step 203, the data processing request is written into shared memory queue 3 by auxiliary thread C.
205. The network server calls the business logic thread that corresponds one-to-one with the shared memory queue.
For example, the business logic thread corresponding to a shared memory queue may be called according to a preset correspondence between the shared memory queues and the business logic threads.
For example, if the data processing request was written into shared memory queue 3 in step 204, the network server calls the business logic thread corresponding to shared memory queue 3, i.e., business logic thread 3. Similarly, if the request was written into another shared memory queue, say shared memory queue 2, the network server calls the business logic thread corresponding to shared memory queue 2, i.e., business logic thread 2, and so on.
For example, before the business logic thread is called, a one-to-one correspondence between the shared memory queues and the business logic threads may be established and stored, so that the business logic thread corresponding to a shared memory queue can be called according to the stored correspondence.
206. The network server reads the data processing request in the shared memory queue through the business logic thread.
For example, if the business logic thread is business logic thread 3, its corresponding shared memory queue is shared memory queue 3, so business logic thread 3 reads shared memory queue 3 to obtain the data processing request; similarly, business logic thread 2 reads shared memory queue 2, and so on.
207. The business logic thread determines the operation object and operation content from the data processing request it has read. For example:
if the data processing request indicates deletion of the data M, it may be determined that the operation object is the data M and the operation content is deletion.
If the data processing request indicates to modify the data M, at this time, it may be determined that the operation object is the data M and the operation content is modification.
If the data processing request indicates to update the data M, at this time, it may be determined that the operation object is the data M and the operation content is the update.
If the data processing request indicates to add data M, at this time, it may be determined that the operation object is data M and the operation content is addition.
And so on, and are not further enumerated here.
208. The business logic thread executes the operation content on the operation object. For example:
if the operation object is data M and the operation content is deletion, then at this time, a deletion operation may be performed on the data M.
If the operation object is data M and the operation content is modification, then at this time, a modification operation may be performed on the data M.
If the operation object is data M and the operation content is update, then at this time, an update operation may be performed on the data M.
If the operation object is data M and the operation content is addition, at this time, an addition operation may be performed on the data M.
And so on, and are not further enumerated here.
As can be seen from the above, the network transceiving layer of this embodiment may include an accept thread and a plurality of auxiliary threads: the accept thread receives data processing requests and distributes them to the auxiliary threads, and each auxiliary thread communicates with a business logic thread through a shared memory queue so that the requests can be processed. Because there are multiple auxiliary threads, each in one-to-one correspondence with a shared memory queue and a business logic thread, the processing capability of the network transceiving layer can be greatly improved without any lock contention: each business logic thread only processes the requests in its own shared memory queue and never contends for a lock with other business logic threads. Compared with the existing scheme, this scheme is therefore simple to implement, can greatly improve processing efficiency, and helps improve the performance of the network server.
Embodiment III,
In order to better implement the above method, an embodiment of the present invention further provides a data processing apparatus of a server (data processing apparatus for short). As shown in fig. 3a, the data processing apparatus may include a receiving unit 301, a selecting unit 302, a writing unit 303, a calling unit 304, and a processing unit 305, as follows:
(1) a receiving unit 301;
a receiving unit 301, configured to receive a data processing request through an accepting thread.
The network transceiving layer can be divided into the accept thread and the auxiliary threads: one accept thread corresponds to a plurality of auxiliary threads, and each auxiliary thread corresponds to one shared memory queue.
For example, suppose the accept thread corresponds to auxiliary thread A, auxiliary thread B, and auxiliary thread C. Each of them has its own independent shared memory queue: auxiliary thread A corresponds to shared memory queue 1, auxiliary thread B to shared memory queue 2, auxiliary thread C to shared memory queue 3, and so on.
(2) A selection unit 302;
a selecting unit 302, configured to select an auxiliary thread for the data processing request.
For example, the selecting unit 302 may be specifically configured to select an auxiliary thread for the data processing request according to a load balancing policy, for example as follows:
acquire load information of the plurality of auxiliary threads corresponding to the accept thread, select the auxiliary thread with the lightest load according to the load information, and distribute the data processing request to the selected auxiliary thread.
For example, again taking the accept thread corresponding to auxiliary threads A, B, and C: if auxiliary thread A is currently handling 10 connections, auxiliary thread B 15, and auxiliary thread C 13, the data processing request is allocated to auxiliary thread A, and so on.
(3) A write unit 303;
a writing unit 303, configured to write the data processing request into the corresponding shared memory queue through the selected auxiliary thread.
For example, the writing unit 303 may be specifically configured to obtain the shared memory queue corresponding to the selected auxiliary thread according to a preset mapping relationship, and to write the data processing request into that queue through the selected auxiliary thread.
The mapping relationship between the auxiliary threads and the shared memory queues may be preset by maintenance personnel according to the requirements of the actual application, or established automatically by the system. That is, as shown in fig. 3b, the data processing apparatus may further include an establishing unit 306, as follows:
the establishing unit 306 may be configured to establish a mapping relationship between the assist threads and the shared memory queues in a one-to-one correspondence manner, and store the mapping relationship in a preset database.
At this time, the writing unit 303 may be specifically configured to obtain the shared memory queue corresponding to the selected assist thread by searching the mapping relationship in the database.
(4) A calling unit 304;
and the calling unit 304 is configured to call the service logic threads corresponding to the shared memory queues one to one.
For example, the invoking unit 304 may be specifically configured to invoke a service logic thread corresponding to a shared memory queue according to a preset corresponding relationship between the shared memory queue and the service logic thread.
The one-to-one correspondence between the shared memory queue and the service logic thread can be preset by maintenance personnel according to the requirements of practical application, and can also be established by the system, namely:
the establishing unit 306 may also be configured to establish a one-to-one correspondence between the shared memory queue and the service logic thread, and store the correspondence.
At this time, the invoking unit 304 may be specifically configured to invoke the service logic threads corresponding to the shared memory queue one to one according to the correspondence relationship stored by the establishing unit 306.
(5) A processing unit 305;
a processing unit 305, configured to process the data processing request in the shared memory queue through the service logic thread.
For example, the processing unit 305 may be specifically configured to read the data processing request in the shared memory queue through the service logic thread, determine an operation object and operation content according to the read data processing request by the service logic thread, and execute the operation content on the operation object by the service logic thread.
For example, if the operation object is data M and the operation content is "delete", in this case, the processing unit 305 may execute a delete operation on the data M through the business logic thread.
For another example, if the operation object is data M and the operation content is "update", then the processing unit 305 may execute an update operation on the data M through a business logic thread, and so on.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
The data processing device of the server may be specifically integrated in a server, such as a network server or the like.
As can be seen from the above, in the data processing apparatus of this embodiment the receiving unit 301 receives a data processing request through the accept thread and an auxiliary thread is selected for the request; the writing unit 303 then writes the request into the one-to-one corresponding shared memory queue through that auxiliary thread, the calling unit 304 calls the corresponding business logic thread, and the processing unit 305 processes the request in the shared memory queue through the business logic thread. Because the auxiliary threads correspond one-to-one with the shared memory queues and the business logic threads, the processing capability of the network transceiving layer can be greatly improved without lock contention: each business logic thread only processes the requests in its own shared memory queue and never contends for a lock with other business logic threads. Compared with the existing scheme, this scheme is therefore simple to implement, can greatly improve processing efficiency, and helps improve the performance of the network server.
Embodiment IV,
An embodiment of the present invention further provides a server, which may serve as the network server of the above embodiments. Fig. 4 shows a schematic structural diagram of the server. Specifically:
the server may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the server architecture shown in FIG. 4 is not meant to be limiting, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the server, connects various parts of the entire server using various interfaces and lines, and performs various functions of the server and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the server. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules; the processor 401 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, while the data storage area may store data created according to the use of the server, and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The server further includes a power supply 403 for supplying power to each component, and preferably, the power supply 403 may be logically connected to the processor 401 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The power supply 403 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The server may also include an input unit 404, the input unit 404 being operable to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the server may further include a display unit and the like, which will not be described in detail herein. Specifically, in this embodiment, the processor 401 in the server loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions as follows:
receiving a data processing request through an accept thread; selecting an auxiliary thread for the data processing request; writing the data processing request into the corresponding shared memory queue through the selected auxiliary thread; calling the business logic threads that correspond one-to-one with the shared memory queues; and processing the data processing request in the shared memory queue through the business logic thread.
For example, the data processing request in the shared memory queue may be read by the business logic thread, which determines an operation object and operation content from the request and executes the operation content on the operation object, and so on.
One accept thread corresponds to a plurality of auxiliary threads, each auxiliary thread corresponds to one shared memory queue, and the shared memory queues correspond one-to-one with the business logic threads; that is, the auxiliary threads, the shared memory queues, and the business logic threads are in one-to-one correspondence. The mapping relationship between the auxiliary threads and the shared memory queues, and the one-to-one correspondence between the shared memory queues and the business logic threads, may be preset by maintenance personnel according to the requirements of the actual application or established automatically by the system.
Optionally, the auxiliary thread may be selected in multiple ways; for example, it may be selected for the data processing request according to a load balancing policy. That is, the application programs stored in the memory 402 may also implement the following functions:
acquiring load information of the plurality of auxiliary threads corresponding to the accept thread, selecting the auxiliary thread with the lightest load according to the load information, and distributing the data processing request to the selected auxiliary thread.
The load information may include information such as the number of requests currently being processed by an auxiliary thread (i.e., its connection count).
As can be seen from the above, the network transceiving layer of the server provided in this embodiment may include an accept thread and a plurality of auxiliary threads: the accept thread receives data processing requests and distributes them to the auxiliary threads, and each auxiliary thread communicates with a business logic thread through a shared memory queue so that the requests can be processed. Because there are multiple auxiliary threads, each in one-to-one correspondence with a shared memory queue and a business logic thread, the processing capability of the network transceiving layer can be greatly improved without lock contention: each business logic thread only processes the requests in its own shared memory queue and never contends for a lock with other business logic threads. Compared with the existing scheme, this scheme is therefore simple to implement, can greatly improve processing efficiency, and helps improve the performance of the network server.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The data processing method and apparatus for a server provided in the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may make changes to the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (9)

1. A data processing method of a server, comprising:
receiving a data processing request through an acceptance thread, wherein the acceptance thread corresponds to a plurality of auxiliary threads, each auxiliary thread corresponds to a shared memory queue, and the auxiliary threads, the shared memory queues and the business logic threads are in one-to-one correspondence;
selecting an auxiliary thread for the data processing request according to a load balancing policy, the selecting an auxiliary thread for the data processing request according to a load balancing policy comprising: acquiring load information of a plurality of auxiliary threads corresponding to the receiving thread; selecting an assistant thread with the lightest load from the plurality of assistant threads according to the load information; distributing the data processing request to a selected auxiliary thread through the accepting thread;
writing the data processing request into a shared memory queue corresponding to the selected auxiliary thread through the selected auxiliary thread;
calling the business logic threads which are in one-to-one correspondence with the shared memory queues;
and processing the data processing request in the shared memory queue through the business logic thread.
2. The method of claim 1, wherein writing the data processing request to the corresponding shared memory queue by the selected auxiliary thread comprises:
acquiring a shared memory queue corresponding to the selected auxiliary thread according to a preset mapping relation;
and writing the data processing request into the corresponding shared memory queue by the selected auxiliary thread.
3. The method of claim 2, wherein prior to writing the data processing request to the corresponding shared memory queue by the selected helper thread, further comprising:
establishing a mapping relation of one-to-one correspondence of the auxiliary threads and the shared memory queues;
storing the mapping relation into a preset database;
the obtaining of the shared memory queue corresponding to the selected assist thread according to the preset mapping relationship includes: and obtaining the shared memory queue corresponding to the selected auxiliary thread by searching the mapping relation in the database.
4. The method according to any one of claims 1 to 3, wherein the processing, by the business logic thread, the data processing request in the shared memory queue comprises:
reading the data processing request in the shared memory queue through the business logic thread;
determining, by the business logic thread, an operation object and operation content according to the read data processing request;
and executing, by the business logic thread, the operation content on the operation object.
5. A data processing apparatus of a server, comprising:
a receiving unit, configured to receive a data processing request through an accept thread, wherein the accept thread corresponds to a plurality of auxiliary threads, each auxiliary thread corresponds to a shared memory queue, and the auxiliary threads, the shared memory queues and the business logic threads are in one-to-one correspondence;
a selecting unit, configured to select an auxiliary thread for the data processing request according to a load balancing policy, where the selecting an auxiliary thread for the data processing request according to a load balancing policy includes: acquiring load information of a plurality of auxiliary threads corresponding to the receiving thread; selecting an assistant thread with the lightest load from the plurality of assistant threads according to the load information; distributing the data processing request to a selected auxiliary thread through the accepting thread;
the write-in unit is used for writing the data processing request into the shared memory queue corresponding to the selected auxiliary thread through the selected auxiliary thread;
the calling unit is used for calling the business logic threads which are in one-to-one correspondence with the shared memory queues;
and the processing unit is used for processing the data processing request in the shared memory queue through the business logic thread.
6. The apparatus of claim 5,
the write-in unit is specifically configured to acquire a shared memory queue corresponding to the selected assist thread according to a preset mapping relationship, and write the data processing request into the corresponding shared memory queue by the selected assist thread.
7. The apparatus of claim 6, further comprising a setup unit;
the establishing unit is used for establishing a mapping relation which corresponds to the auxiliary thread and the shared memory queue one by one, and storing the mapping relation into a preset database;
the write-in unit is specifically configured to obtain the shared memory queue corresponding to the selected assist thread by searching for a mapping relationship in the database.
8. The apparatus according to any one of claims 5 to 7,
the processing unit is specifically configured to read the data processing request in the shared memory queue through the business logic thread, determine an operation object and operation content according to the read data processing request by the business logic thread, and execute the operation content on the operation object by the business logic thread.
9. A storage medium, characterized in that the computer readable storage medium stores a plurality of instructions adapted to be loaded by a processor to execute the steps in the data processing method of the server according to any one of claims 1 to 4.
CN201710225059.4A 2017-04-07 2017-04-07 Data processing method and device for server Active CN108694083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710225059.4A CN108694083B (en) 2017-04-07 2017-04-07 Data processing method and device for server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710225059.4A CN108694083B (en) 2017-04-07 2017-04-07 Data processing method and device for server

Publications (2)

Publication Number Publication Date
CN108694083A CN108694083A (en) 2018-10-23
CN108694083B 2022-07-29

Family

ID=63842191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710225059.4A Active CN108694083B (en) 2017-04-07 2017-04-07 Data processing method and device for server

Country Status (1)

Country Link
CN (1) CN108694083B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352743B (en) * 2018-12-24 2023-12-01 北京新媒传信科技有限公司 Process communication method and device
CN110472516A (en) * 2019-07-23 2019-11-19 腾讯科技(深圳)有限公司 A kind of construction method, device, equipment and the system of character image identifying system
CN113052565A (en) * 2020-10-17 2021-06-29 严怀华 Intelligent community business information processing method based on cloud service and cloud service equipment
CN113312184A (en) * 2021-06-07 2021-08-27 平安证券股份有限公司 Service data processing method and related equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101753439A (en) * 2009-12-18 2010-06-23 深圳市融创天下科技发展有限公司 Method for distributing and transmitting streaming media
KR20130118593A (en) * 2012-04-20 2013-10-30 한국전자통신연구원 Apparatus and method for processing data in middleware for data distribution service
CA2888684C (en) * 2012-10-19 2017-03-07 Argyle Data, Inc. Multi-threaded, lockless data parallelization
KR20140137573A (en) * 2013-05-23 2014-12-03 한국전자통신연구원 Memory management apparatus and method for thread of data distribution service middleware
CN103412786B (en) * 2013-08-29 2017-04-12 苏州科达科技股份有限公司 High performance server architecture system and data processing method thereof
US9378168B2 (en) * 2013-09-18 2016-06-28 International Business Machines Corporation Shared receive queue allocation for network on a chip communication
CN104735077B (en) * 2015-04-01 2017-11-24 积成电子股份有限公司 It is a kind of to realize the efficiently concurrent methods of UDP using Circular buffer and circle queue
US9953006B2 (en) * 2015-06-23 2018-04-24 International Business Machines Corporation Lock-free processing of stateless protocols over RDMA
CN105827604A (en) * 2016-03-15 2016-08-03 深圳市游科互动科技有限公司 Server and service processing method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Effective Data Exchange in Parallel Computing; Hui Ma et al.; 2013 International Conference on Information Science and Cloud Computing; 2013-12-31; full text *
Research on a parallel compression algorithm based on shared memory and Gzip; Song Gang et al.; Computer Engineering and Design; 2009-12-31; full text *

Also Published As

Publication number Publication date
CN108694083A (en) 2018-10-23

Similar Documents

Publication Publication Date Title
CN108694083B (en) Data processing method and device for server
CN104461744B (en) A kind of resource allocation methods and device
CN102835068B (en) Method and apparatus for managing reallocation of system resources
CN106681835B (en) The method and resource manager of resource allocation
CN112269641B (en) Scheduling method, scheduling device, electronic equipment and storage medium
CN111464659A (en) Node scheduling method, node pre-selection processing method, device, equipment and medium
CN104679594B (en) A kind of middleware distributed computing method
CN113110938B (en) Resource allocation method and device, computer equipment and storage medium
CN114416352A (en) Computing resource allocation method and device, electronic equipment and storage medium
CN108334396A (en) The creation method and device of a kind of data processing method and device, resource group
US10761869B2 (en) Cloud platform construction method and cloud platform storing image files in storage backend cluster according to image file type
CN107273200A (en) A kind of method for scheduling task stored for isomery
CN114356543A (en) Kubernetes-based multi-tenant machine learning task resource scheduling method
CN110321215A (en) Queue control method and device
CN106598737A (en) Method and device for implementing hardware resource allocation
CN110706148B (en) Face image processing method, device, equipment and storage medium
CN111352735A (en) Data acceleration method, device, storage medium and equipment
CN111290858B (en) Input/output resource management method, device, computer equipment and storage medium
US20230155958A1 (en) Method for optimal resource selection based on available gpu resource analysis in large-scale container platform
CN106961490A (en) A kind of resource monitoring method and system, a kind of home server
CN109298949B (en) Resource scheduling system of distributed file system
CN115658311A (en) Resource scheduling method, device, equipment and medium
CN115629854A (en) Distributed task scheduling method, system, electronic device and storage medium
CN115174406A (en) Method and device for expanding and contracting container application, computer equipment and storage medium
CN105278873B (en) A kind of distribution method and device of disk block

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant