CN112395076A - Network data processing method, equipment and storage medium - Google Patents


Info

Publication number
CN112395076A
CN112395076A
Authority
CN
China
Prior art keywords
thread
network data
preset area
processing
sending
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910755314.5A
Other languages
Chinese (zh)
Inventor
赵钊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910755314.5A
Publication of CN112395076A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract

In the embodiments of this application, network data is sent by at least one first thread to its corresponding first preset area, where the at least one first thread corresponds to a single second thread. When the second thread is in an idle state and detects that network data exists in any first preset area, it acquires the network data from that area, processes it, and sends the processing result to the corresponding second preset area. Because the at least one first thread corresponds to one second thread, and the second thread executes operations only when idle, the second thread can process data continuously whenever pending data exists. This greatly extends the time the second thread spends executing on its own, without needing to restrict whether the first threads execute, which improves data processing performance while still completing the data processing function.

Description

Network data processing method, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, a device, and a storage medium for processing network data.
Background
With the development of information technology, traditional business scenarios placed modest demands on resource access volume and response time, and a database alone could meet those requirements. As service access volume grows rapidly and user experience expectations rise, however, resources come under great pressure and the database can no longer meet the service requirements. In internet service scenarios in particular, access volume can be ten or even a hundred times higher than before. Optimizing the database layer alone therefore cannot meet the requirement; instead, placing the data in memory improves read efficiency.
Disclosure of Invention
Aspects of the present disclosure provide a method, device, and storage medium for processing network data, so as to conveniently and quickly improve the performance of a storage system.
An embodiment of the present application provides a method for processing network data, including: when network data is detected, sending the network data to the corresponding first preset areas through at least one first thread, where the at least one first thread corresponds to one second thread; when the second thread is in an idle state and detects that network data exists in any first preset area, processing the network data in that first preset area through the second thread and sending the processing result to the corresponding second preset area; and after sending of the processing result is completed, returning the second thread to the idle state.
An embodiment of the present application further provides a method for processing network data, including: creating at least one first thread and a second thread corresponding to the at least one first thread, and configuring corresponding functions for the first and second threads. The first thread receives network data from the network transmission channel it is responsible for and sends the data to the corresponding first preset area; the second thread, when in an idle state, acquires network data from a first preset area, processes it, and sends the processing result to the corresponding second preset area; the first thread then sends the processing result in its corresponding second preset area back to the corresponding network transmission channel.
An embodiment of the present application also provides a computing device including a memory, a processor, and a communication component. The memory stores a computer program; the processor executes the computer program to: when network data is detected, send the network data to the corresponding first preset areas through at least one first thread, where the at least one first thread corresponds to one second thread; when the second thread is in an idle state and detects that network data exists in any first preset area, process the network data in that first preset area through the second thread and send the processing result to the corresponding second preset area; and after sending of the processing result is completed, return the second thread to the idle state.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to implement the steps of the above method for processing network data.
An embodiment of the present application also provides a computing device including a memory and a processor. The memory stores a computer program; the processor executes the computer program to: create at least one first thread and a second thread corresponding to the at least one first thread, and configure corresponding functions for the first and second threads. The first thread receives network data from the network transmission channel it is responsible for and sends the data to the corresponding first preset area; the second thread, when in an idle state, acquires network data from a first preset area, processes it, and sends the processing result to the corresponding second preset area; the first thread then sends the processing result in its corresponding second preset area back to the corresponding network transmission channel.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to implement the steps of the above method for processing network data.
In the embodiments of this application, network data is sent by at least one first thread to its corresponding first preset area, where the at least one first thread corresponds to a single second thread. When the second thread is in an idle state and detects that network data exists in any first preset area, it acquires the network data from that area, processes it, and sends the processing result to the corresponding second preset area. Because the at least one first thread corresponds to one second thread, and the second thread executes operations only when idle, the second thread can process data continuously whenever pending data exists. This greatly extends the time the second thread spends executing on its own, without needing to restrict whether the first threads execute, which improves data processing performance while still completing the data processing function.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1A is a schematic diagram of a network data processing system according to an exemplary embodiment of the present application;
FIG. 1B is a block diagram of a system for processing network data according to an exemplary embodiment of the present application;
fig. 2 is a flowchart illustrating a method for processing network data according to an exemplary embodiment of the present application;
fig. 3 is a schematic structural diagram of a network data processing apparatus according to an exemplary embodiment of the present application;
fig. 4 is a schematic structural diagram of a network data processing apparatus according to another exemplary embodiment of the present application;
FIG. 5 is a schematic block diagram of a server according to another exemplary embodiment of the present application;
fig. 6 is a schematic structural diagram of a server according to another exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The Redis storage system is a key-value store. It supports a relatively rich set of value types, including string, list, set, zset (sorted set), and hash, and it supports several different sorting modes. To ensure efficiency, data is cached in memory; the Redis storage system can periodically write updated data to disk, or append modification operations to a log file, and implements master-slave synchronization on this basis.
In a traditional multi-thread scheme, the Redis storage system is usually split into multiple identical threads, each still handling the same kinds of operations. This violates the original design of the Redis storage system: splitting command execution into a parallel mode destroys the serial logic of command execution and cannot remain fully compatible with native Redis features such as transactions and Lua scripts.
In the embodiments of this application, network data is sent by at least one first thread to its corresponding first preset area, where the at least one first thread corresponds to a single second thread. When the second thread is in an idle state and detects that network data exists in any first preset area, it acquires the network data from that area, processes it, and sends the processing result to the corresponding second preset area. Because the at least one first thread corresponds to one second thread, and the second thread executes operations only when idle, the second thread can process data continuously whenever pending data exists. This greatly extends the time the second thread spends executing on its own, without needing to restrict whether the first threads execute, which improves data processing performance while still completing the data processing function.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1A is a schematic structural diagram of a network data processing system according to an exemplary embodiment of the present disclosure. As shown in Fig. 1A, the system 100A may include a first device 101 and a second device 102.
The first device 101 may be any computing device with certain computing capabilities. Its basic structure includes at least one processor; the number of processors depends on the configuration and type of the first device 101. The first device 101 may also include memory, which may be volatile (such as RAM), non-volatile (such as read-only memory (ROM) or flash memory), or both. The memory typically stores one or more application programs and may also store program data. Besides the processing unit and the memory, the first device 101 may include some basic components, such as a network card chip, an IO bus, a display component, and peripheral devices such as a keyboard, mouse, stylus, or printer. Other peripheral devices are well known in the art and are not described in detail here.
The second device 102 is a device that can provide computing and processing services in a network virtual environment, typically a server that stores information over the network. In physical implementation, the second device 102 may be any device capable of providing computing services, responding to service requests, and performing processing, such as a conventional server, a cloud host, or a virtual center. It mainly includes a processor, hard disk, memory, and system bus, similar to a general computer architecture.
In this example, the first device 101 sends the network data to be stored to the second device 102 through a network transmission channel. A Redis storage system runs in the second device 102. When network data is detected, the second device 102 sends it to the corresponding first preset area through at least one first thread, where at least one first thread corresponds to one second thread. When the second thread is in an idle state and detects that network data exists in any first preset area, it processes the network data in that area and sends the processing result to the corresponding second preset area; after sending the processing result, the second thread returns to the idle state.
The second device 102 returns the processing result in the second preset area to the first device 101 through the first thread corresponding to any one of the first preset areas.
In some instances, there may be multiple first devices 101, which may transmit network data to the second device 102 simultaneously.
In this embodiment, as shown in Fig. 1B, the first device 101 (for example, a terminal with an online shopping client installed) responds to a shopping account registration request from a user, such as an online buyer, by sending the attribute data of the registered account, such as user ID, age, and gender, to the second device 102 (a server). After receiving the attribute data, the second device 102 assigns it to the corresponding first thread, such as an IO thread, which writes the data into its first storage area, such as a first message queue. A second thread, such as a Worker thread, polls the first message queue of each IO thread while idle. When it finds a pending message (the attribute data) in a first message queue, the Worker thread takes the attribute data from that queue, entering a busy (non-idle) state, and processes it, for example by storing it in the corresponding storage table of the Redis storage system. It then sends the processing result, such as a stored-successfully message, to the second message queue corresponding to that IO thread, after which it returns to the idle state. Each IO thread polls the second message queue it is responsible for and sends any message found there, such as the stored-successfully message, back to the corresponding terminal, that is, the first device 101, informing the online shopping client in the terminal that storage succeeded.
It should be noted that there is at least one first thread, and the specific number can be set according to service requirements so as to increase the time the second thread spends executing operations; the number of second threads is 1.
In the embodiment above, the first device 101 may be connected to the second device 102 over a wireless or wired network. If the first device 101 and the second device 102 communicate over a mobile network, the network format may be any of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), WiMAX, and so on.
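The pipeline described above — IO threads depositing requests into per-thread queues, a single Worker thread processing them, and IO threads returning replies — can be sketched in Python. The class name, queue shapes, and key-value payloads below are illustrative assumptions, not the patent's implementation:

```python
import queue

class RedisLikeServer:
    """Sketch of the patent's pipeline: N IO threads, one Worker thread,
    one request queue (first preset area) and one response queue
    (second preset area) per IO thread."""

    def __init__(self, num_io_threads):
        self.request_queues = [queue.Queue() for _ in range(num_io_threads)]
        self.response_queues = [queue.Queue() for _ in range(num_io_threads)]
        self.store = {}  # stands in for the Redis data table

    def io_receive(self, io_index, network_data):
        # First thread: push raw network data into its first preset area.
        self.request_queues[io_index].put(network_data)

    def worker_step(self):
        # Second thread: poll every request queue; process one item if found.
        for i, q in enumerate(self.request_queues):
            try:
                key, value = q.get_nowait()
            except queue.Empty:
                continue
            self.store[key] = value                       # process (store)
            self.response_queues[i].put((key, "stored"))  # second preset area
            return True
        return False  # idle: nothing to do

    def io_reply(self, io_index):
        # First thread: drain its second preset area back to its channel.
        try:
            return self.response_queues[io_index].get_nowait()
        except queue.Empty:
            return None
```

Because replies go back through the same IO thread's response queue, each request's result returns on the channel it arrived from, as the embodiment requires.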
The following describes the data processing procedure of the second device 102 in detail with reference to the method embodiment.
Fig. 2 is a flowchart illustrating a network data processing method according to an exemplary embodiment of the present application. The method 200 provided in this embodiment may be executed by a second device, such as a server, and includes the following steps:
201: when the network data are detected, the network data are sent to the corresponding first preset areas through at least one first thread; wherein at least one first thread corresponds to one second thread.
202: when the second thread is in an idle state and the second thread detects that network data exists in any first preset area, the second thread processes the network data in the first preset area and sends a processing result to the corresponding second preset area.
203: after the completion of the transmission of the processing result, the second thread is in an idle state.
It should be noted that this embodiment may be applied to a Redis storage system. Redis is open-source, in-memory database software supporting data types such as strings, hash tables, lists, sets, sorted sets, bitmaps, and HyperLogLogs. It also has built-in support for replication, Lua scripting, LRU (Least Recently Used) eviction, transactions, and different levels of disk persistence.
The steps above are detailed as follows:
201: when the network data are detected, the network data are sent to the corresponding first preset areas through at least one first thread; wherein at least one first thread corresponds to one second thread.
The network data refers to data stored in a storage system of the server through a network transmission channel. The network transmission channel refers to a channel for sending data or messages, such as a Socket connection channel.
The first preset area refers to a storage area for temporarily storing a message to be processed or network data to be processed, such as a message queue or a buffer queue, etc.
The first thread, such as an IO thread, receives network data from the network transmission channel it is responsible for and sends it to the corresponding first preset area. There is at least one first thread; the number can be set according to business requirements so as to increase the time the second thread spends executing operations.
In some examples, sending the network data to the respective first preset areas through at least one first thread includes: when network data sent over any network transmission channel is detected, the first thread responsible for that channel acquires the network data and sends it to the first message queue corresponding to that first thread.
For example, the server allocates multiple first threads, such as IO threads, to each service port, and multiple terminals may each establish a Socket connection with a service port of the server, generating Socket connection channels. Suppose the server assigns 3 IO threads to service port 80, and 9 terminals establish Socket connections with port 80, generating 9 Socket connection channels. The server can distribute the 9 channels evenly across the 3 IO threads, each IO thread being responsible for 3 channels. When any of the 3 IO threads detects that the online shopping client in a corresponding terminal has sent network data, for example online shopping order data containing the user ID, order number, price, purchase quantity, and shipping address, the IO thread responsible for that terminal receives the order and sends the order data to its corresponding first message queue.
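The even allocation in this example (9 channels over 3 IO threads) can be sketched as a round-robin assignment. The function name and channel representation are hypothetical:

```python
def assign_channels(channel_ids, num_io_threads):
    """Distribute socket channels evenly across IO threads, round-robin,
    as in the example: 9 channels over 3 threads -> 3 channels each."""
    assignment = {i: [] for i in range(num_io_threads)}
    for idx, channel in enumerate(channel_ids):
        assignment[idx % num_io_threads].append(channel)
    return assignment
```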
It should be noted that each IO thread may have its own identifier, such as a thread number or thread name, to distinguish it from other IO threads. Once an IO thread is allocated its Socket connection channels, the data-sending terminals it is responsible for are determined.
If the IO thread responsible for a network transmission channel has not received network data on it, that channel may be in a blocked state, and the IO thread can first process network data sent by its other channels. If all of the channels an IO thread is responsible for transmit network data, the IO thread may process them in the order in which the data was received; if the arrival times are identical, it may choose at random.
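One way an IO thread can serve whichever of its channels has data ready, skipping channels with nothing to read, is readiness-based multiplexing. The following is a minimal sketch using Python's `selectors` module; the `enqueue` callback stands in for writing to the first message queue, and the single-pass structure is an illustrative assumption:

```python
import selectors
import socket

def poll_channels_once(sel, enqueue):
    """One polling pass over an IO thread's registered channels: read from
    every channel that is ready, enqueue its data, and close any channel
    whose peer has disconnected. Channels that are not ready are skipped
    rather than blocked on."""
    events = sel.select(timeout=0)
    for key, _mask in events:
        conn = key.fileobj
        data = conn.recv(4096)
        if data:
            enqueue(conn, data)   # stand-in for the first message queue
        else:
            sel.unregister(conn)  # peer closed this channel
            conn.close()
    return len(events)
```

An IO thread would call this in a loop over the channels registered with its selector.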
Each IO thread corresponds to one message queue or buffer queue; that is, there are as many message queues as there are IO threads.
In step 201, the IO thread reads the network data by calling a read method and writes the data into its queue.
The message queue may be a lock-free queue, in which the reader and writer each maintain their own read and write pointers. In the single-producer single-consumer mode, the reader and writer can read and write concurrently without affecting each other.
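A minimal sketch of such a single-producer single-consumer queue, with the producer owning the write pointer and the consumer owning the read pointer, might look as follows. This is illustrative only: a real lock-free queue also depends on memory-ordering guarantees that this Python sketch does not model:

```python
class SPSCQueue:
    """Single-producer single-consumer ring buffer. The producer advances
    only the write pointer and the consumer advances only the read pointer,
    so neither pointer is written by two threads and no lock is needed in
    the SPSC pattern."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write = 0  # advanced only by the producer
        self.read = 0   # advanced only by the consumer

    def put(self, item):
        if self.write - self.read == self.capacity:
            return False  # queue full
        self.buf[self.write % self.capacity] = item
        self.write += 1
        return True

    def get(self):
        if self.read == self.write:
            return None  # queue empty
        item = self.buf[self.read % self.capacity]
        self.read += 1
        return item
```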
202: when the second thread is in an idle state and the second thread detects that network data exists in any first preset area, the second thread processes the network data in the first preset area and sends a processing result to the corresponding second preset area.
The second thread, such as a Worker thread, acquires network data from a first preset area, processes it, and sends the processing result to the corresponding second preset area. The number of second threads is one.
The processing result indicates that the pending message or data has been processed, for example that it has been successfully stored in a data table of the Redis storage system.
The second preset area refers to a storage area for temporarily storing the processing result, such as a message queue or a buffer queue, and the like.
In some examples, processing the network data in the first preset area through the second thread and sending the processing result to the corresponding second preset area includes: when the second thread, in an idle state, finds network data in any first message queue, acquiring the network data; parsing the network data and processing it according to the parsing result; and sending the processing result to the second message queue corresponding to that first message queue.
Network data may be found in the message queues by a polling method or mechanism.
In the polling mechanism, the second thread periodically queries the message queues in turn, asking whether each needs its service; if so, it provides the service, and when finished it queries the next message queue, repeating the process.
Parsing the network data means analyzing it according to a preset protocol, such as the Redis protocol, to obtain a parsing result, that is, the contents of the network data and the way the network data should be processed.
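As an illustration of such protocol-based parsing, the following is a naive sketch of decoding one Redis-protocol (RESP) command, an array of bulk strings. It assumes payloads contain no CRLF bytes and is not the patent's parser:

```python
def parse_resp_command(data):
    """Parse one RESP array of bulk strings, e.g. a SET command:
    b'*3\r\n$3\r\nSET\r\n$4\r\nname\r\n$5\r\nalice\r\n'
    -> [b'SET', b'name', b'alice'].
    Naive line split: assumes bulk-string payloads contain no CRLF."""
    lines = data.split(b"\r\n")
    if not lines or not lines[0].startswith(b"*"):
        raise ValueError("expected RESP array")
    count = int(lines[0][1:])      # number of bulk strings
    parts, i = [], 1
    for _ in range(count):
        if not lines[i].startswith(b"$"):
            raise ValueError("expected bulk string")
        length = int(lines[i][1:])  # declared payload length
        value = lines[i + 1]
        if len(value) != length:
            raise ValueError("length mismatch")
        parts.append(value)
        i += 2
    return parts
```

The first element of the parsed array names the command, which tells the Worker thread how the data should be processed.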
The second thread being in an idle state means that it is not currently executing any task or processing any data and is waiting for a task to execute or data to process.
For example, continuing the scenario above: when the server's second thread, such as the Worker thread, is in an idle state, it polls the first message queues of all IO threads in the Redis storage system. When it finds a pending message or pending data, such as online shopping order data, in any first message queue, the Worker thread acquires the order data from that queue, changing from the idle state to a busy (task-processing) state. The Worker thread parses the order data using the Redis protocol, obtaining each field of the order together with the instruction to store the fields in a data table of the Redis storage system. The Worker thread stores the data in the Redis data table as instructed, and after the data is stored successfully, it sends the processing result, "successfully stored in the Redis storage system data table", to the second message queue corresponding to the IO thread from which the order data came, that is, the second message queue corresponding to that first message queue. The Worker thread then changes from the busy state back to the idle state and may continue polling the first message queues of all IO threads.
It should be noted that the Worker thread also completes the processing of messages or data in order, following the logical sequence. In this embodiment there is only one Worker thread, so the IO threads and the Worker thread process data and messages in a logical sequence; each such sequence consists of any one of the IO threads together with the single Worker thread. Each IO thread corresponds to one second message queue.
203: after the completion of the transmission of the processing result, the second thread is in an idle state.
Since the detailed description of the step 203 is already set forth in the foregoing, it is not repeated here.
In some examples, the method 200 further comprises: and returning the processing result in the second preset area through the first thread corresponding to any one first preset area.
The first thread may be further configured to send the processing result in its corresponding second preset area to the corresponding network transmission channel.
In some examples, returning the processing result in the second preset area through the first thread corresponding to any first preset area includes: when the corresponding first thread finds that a processing result exists in its second message queue, returning the processing result through the network transmission channel.
The processing result may be found in the second message queue by a polling method or mechanism.
For example, as described above, the server's IO threads each poll their own second message queues. When an IO thread finds the processing result "successfully stored in the Redis storage system data table" in its second message queue, it takes the result out of the queue and returns it to the client in the corresponding terminal through the Socket connection channel it is responsible for.
The first threads run independently of one another without mutual interference, while the second thread is in a serial relationship with each of the first threads. Since this has been described in detail above, it is not repeated here.
In some examples, the method 200 further comprises: creating the at least one first thread and the one second thread, and configuring corresponding functions for them; and allocating to each first thread the network transmission channel for which it is responsible, such that when multiple pieces of network data from different transmission channels are detected, the step of sending the network data to the corresponding first preset area is executed through the corresponding first thread.
For example, in accordance with the foregoing, the server may create a plurality of IO threads and one Worker thread according to the Redis storage system program. Each IO thread is configured with corresponding functions (execution commands, instructions, or operations) so that, upon receiving network data from the network transmission channel it is responsible for, it sends the data to the corresponding first preset area, and so that it sends the processing result in the corresponding second preset area back to the corresponding network transmission channel. The Worker thread is likewise configured with corresponding functions, namely acquiring network data from the first preset areas, processing it, and sending the processing result to the corresponding second preset area. After the server allocates a respective network transmission channel to each IO thread, step 201 may be executed. Since the detailed functions of the IO threads and the Worker thread have been described above, they are not repeated here.
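The setup step above can be sketched as follows; the channel names, thread names, and the dictionary bookkeeping are invented for illustration (a real server would bind actual Socket connections and thread entry points):

```python
import queue
import threading

# Hypothetical setup: create the IO threads and record which network
# transmission channel each one owns, before any data is processed.
channels = ["conn-A", "conn-B", "conn-C"]   # stand-ins for Socket connections

def make_io_thread(name, channel):
    q_in, q_out = queue.Queue(), queue.Queue()   # first / second message queue
    t = threading.Thread(name=name, target=lambda: None, daemon=True)
    return {"thread": t, "channel": channel, "first_q": q_in, "second_q": q_out}

io_threads = [make_io_thread(f"io-{i}", ch) for i, ch in enumerate(channels)]
worker = threading.Thread(name="worker", target=lambda: None, daemon=True)

# Each IO thread is responsible for exactly one channel and owns its own
# pair of message queues; only after this allocation would the server
# begin detecting network data (step 201).
assignment = {t["channel"]: t["thread"].name for t in io_threads}
print(assignment)
```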
When processing events, the native Redis storage system uses a single-process, single-thread model: all processing runs in one thread. This is a performance bottleneck, since a large amount of CPU time is consumed in reading and writing data, and the actual command-execution logic accounts for only about 30%. In this embodiment, the Redis storage system is restructured in a multi-threaded manner, splitting the network IO threads from the Worker thread: the IO threads can be scaled out arbitrarily, while only one Worker thread is used to guarantee serial execution of commands. This preserves functional compatibility while improving performance, which can reach about three times that of the original.
Another exemplary embodiment of the present application provides a method for processing network data. The method provided by the embodiment of the present application is executed by a second device, such as a server, and includes the following steps:
a: creating at least one first thread and a second thread corresponding to the at least one first thread, and configuring corresponding functions for the first thread and the second thread; wherein the first thread is used for receiving network data from the network transmission channel it is responsible for and sending the network data to a corresponding first preset area; the second thread is used for, when in an idle state, acquiring the network data from the first preset area, processing the network data, and sending a processing result to a corresponding second preset area; and the first thread is further used for sending the processing result in the corresponding second preset area to the corresponding network transmission channel.
It should be noted that, since step a has been described in detail in the foregoing, it is not described herein again.
Fig. 3 is a schematic structural framework diagram of a network data processing apparatus according to another exemplary embodiment of the present application. The apparatus 300 may be applied to a second device, such as a server, and includes a sending module 301 and a processing module 302, whose functions are described in detail below:
a sending module 301, configured to, when network data is detected, send the network data to a corresponding first preset area through at least one first thread; wherein the at least one first thread corresponds to one second thread.
The processing module 302 is configured to, when a second thread is in an idle state and the second thread detects that network data exists in any one of the first preset areas, process the network data in the first preset area through the second thread, and send a processing result to a corresponding second preset area.
After the processing result has been sent, the second thread returns to an idle state.
In some examples, the apparatus 300 further comprises: and the return module is used for returning the processing result in the second preset area through the first thread corresponding to any first preset area.
In some examples, the apparatus 300 further comprises: a creating module, configured to create the at least one first thread and the one second thread and to configure corresponding functions for them; and an allocation module, configured to allocate to each first thread the network transmission channel it is responsible for, and, when multiple pieces of network data from different transmission channels are detected, to execute the step of sending the network data to the corresponding first preset area through the corresponding first thread.
In some examples, the sending module 301 includes: an acquiring unit, configured to, when network data sent over any network transmission channel is monitored, acquire the network data through the first thread responsible for that network transmission channel; and a first sending unit, configured to send the network data, through the first thread, to the first message queue corresponding to that first thread.
In some examples, the processing module 302 includes: a searching unit, configured to obtain network data when the second thread, in an idle state, finds that network data exists in any one of the at least one first message queue corresponding to the at least one first thread; an analysis unit, configured to analyze the network data and process it according to the analysis result; and a second sending unit, configured to send the processing result to the second message queue corresponding to the first message queue.
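A minimal Python sketch of this look-up / analyze / process / send pipeline follows; the toy wire format and the `parse`/`process` helpers are invented for illustration (the real Redis protocol is RESP, not shown here):

```python
import queue

first_queue = queue.Queue()   # network data from an IO thread
second_queue = queue.Queue()  # processing results back to that IO thread

def parse(raw):
    # Toy analysis step: "SET color blue" -> ("SET", ["color", "blue"])
    op, *args = raw.split()
    return op, args

def process(op, args, store):
    # A toy command interpreter standing in for the real command logic.
    if op == "SET":
        store[args[0]] = args[1]
        return "OK"
    if op == "GET":
        return store.get(args[0])
    return "ERR unknown command"

store = {}
first_queue.put("SET color blue")
first_queue.put("GET color")

# Worker: find data in the first queue, analyze it, process it according
# to the analysis result, and send the result to the second queue.
while not first_queue.empty():
    op, args = parse(first_queue.get())
    second_queue.put(process(op, args, store))

results = [second_queue.get() for _ in range(2)]
print(results)  # ['OK', 'blue']
```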
In some examples, the sending module 301 is configured to, when finding that a processing result exists in the second message queue corresponding to the first thread through the corresponding first thread, return the processing result through the network transmission channel.
In some examples, the at least one first thread runs independently of each other; the second thread is in a serial relationship with any of the first threads.
In some examples, the apparatus 300 is suitable for use in a Redis storage system.
In some examples, the first thread is an IO thread.
In some examples, the second thread is a Worker thread.
Fig. 4 is a schematic structural framework diagram of another network data processing apparatus according to another exemplary embodiment of the present application. The apparatus 400 may be applied to a second device, such as a server, and the apparatus 400 includes: a creating module 401, the functions of which are explained in detail below:
a creating module 401, configured to create at least one first thread and a second thread corresponding to the at least one first thread, and to configure corresponding functions for the first thread and the second thread; wherein the first thread is used for receiving network data from the network transmission channel it is responsible for and sending the network data to a corresponding first preset area; the second thread is used for, when in an idle state, acquiring the network data from the first preset area, processing the network data, and sending a processing result to a corresponding second preset area; and the first thread is further used for sending the processing result in the corresponding second preset area to the corresponding network transmission channel.
Having described the internal functions and structure of the processing apparatus 300 shown in fig. 3, in one possible design the apparatus 300 may be implemented as a server. As shown in fig. 5, the server 500 may include: memory 501, processor 502, and communication component 503;
a memory 501 for storing a computer program;
a processor 502, configured to execute the computer program so as to: when network data is detected, send the network data to a corresponding first preset area through at least one first thread, wherein the at least one first thread corresponds to one second thread; when the second thread is in an idle state and detects that network data exists in any first preset area, process the network data in that first preset area through the second thread and send a processing result to the corresponding second preset area; after the processing result has been sent, the second thread returns to an idle state.
The communication component 503 is configured to return a processing result in the second preset area through the first thread corresponding to any one of the first preset areas.
In some examples, processor 502 is further configured to: creating at least one first thread and one second thread, and configuring corresponding functions for the first thread and the second thread; and allocating the network transmission channel responsible for each first thread, and when a plurality of network data from different transmission channels are detected, executing the step of sending the network data to the corresponding first preset area through the corresponding first thread.
In some examples, the processor 502 is specifically configured to: when monitoring the network data sent by any network transmission channel, the first thread in charge of the network transmission channel acquires the network data; and sending the network data to a first message queue corresponding to the first thread through the first thread.
In some examples, the processor 502 is specifically configured to: when the second thread in the idle state finds that network data exist in any first message queue in at least one first message queue corresponding to at least one first thread, network data are obtained; analyzing the network data, and processing the network data according to an analysis result; and sending the processing result to a second message queue corresponding to the first message queue.
In some examples, when the processing result is found to exist in the second message queue corresponding to the first thread through the corresponding first thread, the communication component 503 is specifically configured to: and returning the processing result through a network transmission channel.
In some examples, the at least one first thread runs independently of each other; the second thread is in a serial relationship with any of the first threads.
In some examples, the server 500 is suitable for use in a Redis storage system.
In some examples, the first thread is an IO thread.
In some examples, the second thread is a Worker thread.
In addition, an embodiment of the present invention provides a computer storage medium storing a computer program; when executed by one or more processors, the computer program causes the one or more processors to implement the steps of the network data processing method of the method embodiment of fig. 2.
Having described the internal functions and structure of the processing apparatus 400 shown in fig. 4, in one possible design the apparatus 400 may be implemented as a server. As shown in fig. 6, the server 600 may include: memory 601, processor 602, and communication component 603;
a memory 601 for storing a computer program;
a processor 602, configured to execute the computer program so as to: create at least one first thread and a second thread corresponding to the at least one first thread, and configure corresponding functions for the first thread and the second thread; wherein the first thread is used for receiving network data from the network transmission channel it is responsible for and sending the network data to a corresponding first preset area; the second thread is used for, when in an idle state, acquiring the network data from the first preset area, processing the network data, and sending a processing result to a corresponding second preset area; and the first thread is further used for sending the processing result in the corresponding second preset area to the corresponding network transmission channel.
In addition, an embodiment of the present invention provides a computer storage medium storing a computer program; when executed by one or more processors, the computer program causes the one or more processors to implement step a of the network data processing method in the above embodiment.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 201, 202, 203, etc., are merely used for distinguishing different operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented with the addition of a necessary general hardware platform, or of course by a combination of hardware and software. Based on this understanding, the above technical solutions, in essence or in the part that contributes to the prior art, may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable multimedia data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable multimedia data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable multimedia data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable multimedia data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (15)

1. A method for processing network data, comprising:
when network data are detected, the network data are sent to corresponding first preset areas through at least one first thread; wherein the at least one first thread corresponds to a second thread;
when the second thread is in an idle state and the second thread detects that the network data exists in any one of the first preset areas, processing the network data in the first preset area through the second thread and sending a processing result to the corresponding second preset area;
after the transmission of the processing result is completed, the second thread is in an idle state.
2. The method of claim 1, further comprising:
and returning the processing result in the second preset area through the first thread corresponding to any one first preset area.
3. The method of claim 1, further comprising:
creating the at least one first thread and the second thread, and configuring corresponding functions for the first thread and the second thread;
and allocating the network transmission channel responsible for each first thread, and when a plurality of network data from different transmission channels are detected, executing the step of sending the network data to the corresponding first preset area through the corresponding first thread.
4. The method of claim 1, wherein sending the network data to the respective first preset areas via the at least one first thread comprises:
when monitoring network data sent by any network transmission channel, a first thread in charge of the network transmission channel acquires the network data;
and sending the network data to a first message queue corresponding to the first thread through the first thread.
5. The method of claim 1, wherein the processing the network data in the first preset area through the second thread and sending a processing result to a corresponding second preset area comprises:
when the second thread in the idle state finds that network data exist in any first message queue in at least one first message queue corresponding to at least one first thread, the network data are obtained;
analyzing the network data, and processing the network data according to an analysis result;
and sending the processing result to a second message queue corresponding to the first message queue.
6. The method according to claim 2, wherein the returning the processing result in the second preset area through the first thread corresponding to any one of the first preset areas comprises:
and returning the processing result through a network transmission channel when the processing result exists in the second message queue corresponding to the first thread through the corresponding first thread.
7. The method of claim 6, wherein the at least one first thread runs independently of each other;
the second thread is in a serial relationship with any of the first threads.
8. The method according to any one of claims 1-7, wherein the method is applied to a Redis storage system.
9. The method according to any one of claims 1-7, wherein the first thread is an IO thread.
10. The method according to any one of claims 1-7, wherein the second thread is a Worker thread.
11. A method for processing network data, comprising:
creating at least one first thread and a second thread corresponding to the at least one first thread, and configuring corresponding functions for the first thread and the second thread;
The first thread is used for receiving network data from the network transmission channel it is responsible for and sending the network data to a corresponding first preset area; the second thread is used for, when in an idle state, acquiring the network data from the first preset area, processing the network data, and sending a processing result to a corresponding second preset area; and the first thread is further used for sending the processing result in the corresponding second preset area to a corresponding network transmission channel.
12. A computing device comprising a memory, a processor, and a communication component;
the memory for storing a computer program;
the processor to execute the computer program to:
when network data are detected, the network data are sent to corresponding first preset areas through at least one first thread; wherein the at least one first thread corresponds to a second thread;
when the second thread is in an idle state and the second thread detects that the network data exists in any one of the first preset areas, processing the network data in the first preset area through the second thread and sending a processing result to the corresponding second preset area;
after the transmission of the processing result is completed, the second thread is in an idle state.
13. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by one or more processors, causes the one or more processors to perform the steps of the method of any one of claims 1-10.
14. A computing device comprising a memory and a processor;
the memory for storing a computer program;
the processor to execute the computer program to:
creating a plurality of first threads and a second thread corresponding to the plurality of first threads, and configuring corresponding functions for the first threads and the second thread;
The first thread is used for receiving network data from the network transmission channel it is responsible for and sending the network data to a corresponding first preset area; the second thread is used for, when in an idle state, acquiring the network data from the first preset area, processing the network data, and sending a processing result to a corresponding second preset area; and the first thread is further used for sending the processing result in the corresponding second preset area to a corresponding network transmission channel.
15. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by one or more processors, causes the one or more processors to perform the steps of the method of claim 11.
CN201910755314.5A 2019-08-15 2019-08-15 Network data processing method, equipment and storage medium Pending CN112395076A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910755314.5A CN112395076A (en) 2019-08-15 2019-08-15 Network data processing method, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112395076A true CN112395076A (en) 2021-02-23

Family

ID=74601742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910755314.5A Pending CN112395076A (en) 2019-08-15 2019-08-15 Network data processing method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112395076A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164256A (en) * 2011-12-08 2013-06-19 深圳市快播科技有限公司 Processing method and system capable of achieving one machine supporting high concurrency
CN103577257A (en) * 2012-08-03 2014-02-12 杭州勒卡斯广告策划有限公司 REST (representational state transfer) service method, device and system
CN104536724A (en) * 2014-12-25 2015-04-22 华中科技大学 Hash table concurrent access performance optimization method under multi-core environment
US20170171302A1 (en) * 2015-12-15 2017-06-15 Samsung Electronics Co., Ltd. Storage system and method for connection-based load balancing
CN107704328A (en) * 2017-10-09 2018-02-16 郑州云海信息技术有限公司 Client accesses method, system, device and the storage medium of file system
CN108055255A (en) * 2017-12-07 2018-05-18 华东师范大学 A kind of event base, expansible data management system and its management method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Dongbin; HU Mingzeng; ZHI Hui; YU Xiangzhan: "Multi-thread memory management technology for real-time network data detection", High Technology Letters, no. 12, 15 December 2008 (2008-12-15), pages 25-29 *

Similar Documents

Publication Publication Date Title
CN110062924B (en) Capacity reservation for virtualized graphics processing
CN109983441B (en) Resource management for batch jobs
US9413683B2 (en) Managing resources in a distributed system using dynamic clusters
CN109684065B (en) Resource scheduling method, device and system
US8442955B2 (en) Virtual machine image co-migration
CN108055343B (en) Data synchronization method and device for computer room
CN110941481A (en) Resource scheduling method, device and system
US9063918B2 (en) Determining a virtual interrupt source number from a physical interrupt source number
KR20140061444A (en) Volatile memory representation of nonvolatile storage device set
CN116601606A (en) Multi-tenant control plane management on a computing platform
US10701154B2 (en) Sharding over multi-link data channels
CN111679911B (en) Management method, device, equipment and medium of GPU card in cloud environment
US11237761B2 (en) Management of multiple physical function nonvolatile memory devices
CN113296874B (en) Task scheduling method, computing device and storage medium
EP3146426A1 (en) High-performance computing framework for cloud computing environments
US20240220334A1 (en) Data processing method in distributed system, and related system
CN113535087B (en) Data processing method, server and storage system in data migration process
CN107528871B (en) Data analysis in storage systems
CN111600771B (en) Network resource detection system and method
CN111475279B (en) System and method for intelligent data load balancing for backup
EP3264254A1 (en) System and method for a simulation of a block storage system on an object storage system
CN113448867B (en) Software pressure testing method and device
CN112395076A (en) Network data processing method, equipment and storage medium
US10824640B1 (en) Framework for scheduling concurrent replication cycles
CN109617954B (en) Method and device for creating cloud host

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination