CN111190745B - Data processing method, device and computer readable storage medium - Google Patents

Data processing method, device and computer readable storage medium

Info

Publication number
CN111190745B
Authority
CN
China
Prior art keywords
queue
target
standby
data processing
configuration information
Prior art date
Legal status
Active
Application number
CN201911071773.8A
Other languages
Chinese (zh)
Other versions
CN111190745A (en)
Inventor
王亮
肖怀锋
顾栋波
高立周
赵光普
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority claimed from CN201911071773.8A
Publication of CN111190745A
Application granted
Publication of CN111190745B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5066Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiments of the present application disclose a data processing method, a data processing apparatus, and a computer readable storage medium. A standby queue resource pool containing standby queues is generated; the working state of each operation queue is acquired, and an operation queue whose working state is abnormal is determined as a target queue; the transmission configuration information associated with the target queue is determined; and a standby queue is associated with that transmission configuration information so that the standby queue replaces the target queue for data processing. Because the standby queue is generated in advance, once an operation queue enters an abnormal state it is determined as the target queue and a standby queue is bound to its transmission configuration information, so the standby queue can quickly take over data processing from the abnormal target queue. There is no need to wait for the abnormal target queue to be reset, a long communication interruption is avoided, and data processing efficiency is greatly improved.

Description

Data processing method, device and computer readable storage medium
Technical Field
The present application relates to the field of computer communications technologies, and in particular, to a data processing method, a data processing apparatus, and a computer readable storage medium.
Background
With the rapid development of cloud computing, computing work is increasingly concentrated in data centers: terminals simply use the network to send requested tasks to a data center for computation. This reduces the computing capacity required of the terminals, but the demands on the data center's data transceiving and computing capacity grow ever larger.
In the prior art, the low-level data transceiving of a data center is generally implemented through hardware queues, for example an intelligent network card with network card multi-queue support. Such an intelligent network card has an allocation mechanism based on multiple direct memory access (Direct Memory Access, DMA) queues: packets arriving from the network can be distributed to different queues, and different processor (Central Processing Unit, CPU) cores operate different queues, thereby achieving high-speed data transmission and processing.
In researching and practicing the prior art, the inventors of the present application found that queues in an intelligent network card are prone to hanging abnormally, and that resetting an abnormally hung queue causes a long communication interruption, so data processing efficiency is low.
Disclosure of Invention
The embodiment of the application provides a data processing method, a data processing device and a computer readable storage medium, which can improve the efficiency of data processing.
In order to solve the technical problems, the embodiment of the application provides the following technical scheme:
a data processing method, comprising:
generating a standby queue resource pool, wherein the standby queue resource pool comprises a standby queue;
acquiring the working state of an operation queue, and determining the operation queue with the working state in an abnormal state as a target queue;
determining transmission configuration information associated with the target queue;
and associating the standby queue with the transmission configuration information so that the standby queue replaces the target queue for data processing.
Correspondingly, the embodiment of the application also provides a data processing device, which comprises:
the generating unit is used for generating a standby queue resource pool, wherein the standby queue resource pool comprises a standby queue;
the system comprises an acquisition unit, a target queue and a control unit, wherein the acquisition unit is used for acquiring the working state of the operation queue and determining the operation queue with the working state in an abnormal state as the target queue;
a determining unit, configured to determine transmission configuration information associated with the target queue;
an association unit, configured to associate the standby queue with the transmission configuration information so that the standby queue replaces the target queue for data processing.
In some embodiments, the apparatus further comprises:
a reset unit, configured to control the target queue to be reset when detecting that the system is in an idle state;
and moving the reset target queue to a standby queue resource pool, and changing the target queue into a standby queue.
In some embodiments, the acquisition unit comprises:
a first detection unit, configured to detect whether the statistics of messages received by the application layer remain zero for a first preset time;
a second detection unit, configured to detect, when the statistics of messages received by the application layer remain zero for the first preset time, whether the packet loss statistics of messages submitted by the operation queue keep increasing within a second preset time;
a judging subunit, configured to judge the working state of the operation queue as abnormal and determine the operation queue as the target queue when the packet loss statistics of messages submitted by the operation queue keep increasing within the second preset time.
In some embodiments, the determining subunit is configured to:
performing exception accumulation on the operation queue;
and when the exception accumulation value reaches a preset threshold, judging the working state of the operation queue as abnormal and determining the operation queue as the target queue.
In some embodiments, the acquiring unit further comprises:
a clearing unit, configured to clear the exception accumulation value when the statistics of messages received by the application layer are detected not to remain zero within the first preset time; or
to clear the exception accumulation value when the packet loss statistics of messages submitted by the operation queue are detected not to keep increasing within the second preset time.
In some embodiments, the generating unit includes:
a receiving subunit, configured to receive a resource amount of the standby queue resource pool;
and the application subunit is used for generating a standby queue resource pool according to the resource quantity and applying for the corresponding quantity of standby queues to perform activation processing.
In some embodiments, the receiving subunit is configured to:
acquiring the total resource amount of an operation queue and a current data receiving and transmitting stability evaluation value;
determining a corresponding proportion value according to the data receiving and transmitting stability evaluation value;
and generating the resource quantity of the standby queue resource pool based on the total resource quantity and the proportion value.
Accordingly, an embodiment of the present application further provides a computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the above data processing method.
A standby queue resource pool containing standby queues is generated; the working state of each operation queue is acquired, and an operation queue whose working state is abnormal is determined as a target queue; the transmission configuration information associated with the target queue is determined; and a standby queue is associated with that transmission configuration information so that the standby queue replaces the target queue for data processing. Because the standby queue is generated in advance, once an operation queue enters an abnormal state it is determined as the target queue and a standby queue is bound to its transmission configuration information, so the standby queue can quickly take over data processing from the abnormal target queue. There is no need to wait for the abnormal target queue to be reset, a long communication interruption is avoided, and data processing efficiency is greatly improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a scenario of a data processing system provided in an embodiment of the present application;
FIG. 2a is a schematic flow chart of a data processing method according to an embodiment of the present application;
fig. 2b is a schematic structural diagram of the receive side scaling technique according to an embodiment of the present application;
FIG. 3 is another flow chart of a data processing method according to an embodiment of the present disclosure;
fig. 4a is a schematic view of a scenario of a data processing method according to an embodiment of the present application;
FIG. 4b is a schematic structural diagram of the flow director technique according to an embodiment of the present application;
FIG. 4c is a schematic diagram of another scenario of the data processing method according to the embodiment of the present application;
FIG. 4d is a schematic diagram of another scenario of the data processing method according to the embodiment of the present application;
FIG. 5a is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 5b is a schematic diagram of another configuration of a data processing apparatus according to an embodiment of the present application;
FIG. 5c is another schematic diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 5d is a schematic diagram of another configuration of a data processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Embodiments of the present application provide a data processing method, apparatus, and computer readable storage medium.
Referring to fig. 1, fig. 1 is a schematic diagram of a scenario of the data processing system provided in an embodiment of the present application. The scenario applies to a cloud data computing center and includes a user space, processors, and an intelligent network card; packets can be processed in user space. The cloud data computing center may include multiple processors, for example 64-core processors. The intelligent network card can assist a processor in handling network load through a Field Programmable Gate Array (FPGA) and can provide distributed computing resources, that is, network card multi-queue support: the intelligent network card has an allocation mechanism based on multiple direct memory access queues, different queues can be operated by cores of different processors, and the locking overhead caused by multiple threads accessing the same queue simultaneously is avoided. After receiving packets from the network, the intelligent network card can distribute them to different queues, for example through the receive side scaling (Receive Side Scaling, RSS) technique proposed by Microsoft and the flow director (Flow Director) technique: the former distributes packets evenly across multiple queues according to hash values, while the latter distributes packets to designated queues based on lookup and exact matching.
Because multiple queues work at the same time, a queue may hang. In the prior art, a watchdog (watch dog) function is implemented in the network card driver framework: if a queue send operation times out, a reset of the network port queue is triggered. During the reset, the queue is in a waiting state, so communication is interrupted for 1 to 5 seconds, which seriously affects data processing performance.
It should be noted that, the schematic view of the scenario of the data processing system shown in fig. 1 is only an example, and the data processing system and scenario described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided in the embodiments of the present application, and those skilled in the art can know that, with the evolution of the data processing system and the appearance of a new service scenario, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The following will describe in detail. The numbers of the following examples are not intended to limit the preferred order of the examples.
Embodiment 1
In this embodiment, the description is given from the point of view of a data processing apparatus, which may be integrated in a server that has a storage unit, is equipped with a microprocessor, and has computing capability; the server may be a cloud host.
A data processing method, comprising: generating a standby queue resource pool, wherein the standby queue resource pool comprises a standby queue; acquiring the working state of an operation queue, and determining the operation queue with the working state in an abnormal state as a target queue; determining transmission configuration information associated with the target queue; and associating the standby queue with the transmission configuration information so that the standby queue replaces the target queue for data processing.
Referring to fig. 2a, fig. 2a is a flow chart of a data processing method according to an embodiment of the present application. The data processing method comprises the following steps:
in step 101, a pool of standby queue resources is generated.
To accelerate operation, multiple processors, such as 64-core processors, may be installed in a cloud host, and each processor core may operate a different queue. A queue connected to a processor may be referred to as an operation queue. A queue is a special linear table characterized in that it only allows deletion at the front end (front) of the table and insertion at the rear end (rear). Here the queues may be the network card multi-queues in the intelligent network card, each of which can be used separately as a data transceiving resource.
In this embodiment of the present application, when the cloud host initializes the operation queues, it generates a standby queue resource pool by applying for more queues than the system requires. The standby queue resource pool may be a dedicated memory space, and the queues stored in it are called standby queues; that is, the standby queue resource pool includes standby queues. The number of standby queues is determined by the size of the storage space of the standby queue resource pool: the larger the storage space, the larger the number of standby queues; the smaller the storage space, the smaller the number of standby queues.
In some embodiments, the step of generating the reserve queue resource pool may include:
(1) Receiving the resource quantity of a standby queue resource pool;
(2) And generating a standby queue resource pool according to the resource quantity, and applying for a corresponding quantity of standby queues to perform activation processing.
When the storage space of the standby queue resource pool is too large, there are too many standby queues, system resources are occupied, and the system may stall; when the storage space is too small, there are too few standby queues to meet later replacement demands. Therefore, when the intelligent network card starts, the user can set the storage space of the standby queue resource pool, that is, apply for a standby queue resource pool of the appropriate size according to actual usage. After receiving the resource amount of the standby queue resource pool set by the user, the cloud host generates a standby queue resource pool of the corresponding size according to the resource amount and applies for and activates the corresponding number of standby queues, thereby letting the user control the number of standby queues and saving system resources.
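The pool-creation step above can be sketched as follows. This is a minimal illustrative sketch, not an actual driver implementation; the names `Queue` and `StandbyQueuePool` are assumptions introduced here for illustration.

```python
class Queue:
    """A stand-in for one NIC DMA queue."""
    def __init__(self, qid):
        self.qid = qid
        self.active = False

    def activate(self):
        self.active = True


class StandbyQueuePool:
    """Pre-allocates and activates standby queues at initialization time,
    so a failed operation queue can be replaced without waiting for a reset."""
    def __init__(self, resource_amount):
        # resource_amount: the user-set number of standby queues.
        self.queues = [Queue(qid) for qid in range(resource_amount)]
        for q in self.queues:
            q.activate()  # activated up front: swap-in later is then instant

    def take(self):
        """Hand out one pre-activated standby queue, or None if exhausted."""
        return self.queues.pop() if self.queues else None


pool = StandbyQueuePool(resource_amount=4)
spare = pool.take()  # spare is already active; three queues remain in the pool
```

The key design point is that activation happens at pool-creation time: the expensive queue setup is paid once during initialization, so replacement later only re-binds configuration.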
In some embodiments, the step of receiving the resource amount of the reserve queue resource pool may include:
(1.1) acquiring the total resource amount of an operation queue and a current data receiving and transmitting stability evaluation value;
(1.2) determining a corresponding proportion value according to the data receiving and transmitting stability evaluation value;
(1.3) generating a resource amount of the reserve queue resource pool based on the total resource amount and the ratio value.
The more operation queues there are, the larger the total amount of resources; the fewer operation queues, the smaller the total amount. The current data transceiving stability evaluation value is then acquired. It may be an evaluation of the stability of the operation queues' running environment, such as an evaluation of the network environment and/or of the running state of the operation queues. The higher the data transceiving stability evaluation value, the more stable the operation queues and the lower the probability of an abnormal state; the lower the value, the less stable the operation queues and the higher the probability of an abnormal state.
Further, a corresponding ratio value, a number between 0 and 1, can be determined from the data transceiving stability evaluation value: the higher the evaluation value, the lower the ratio; the lower the evaluation value, the higher the ratio. A number of standby queues suited to the current operating environment is then generated based on the product of the total resource amount of the operation queues and the ratio value, saving system resources while avoiding a shortage of standby queues.
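The sizing rule above can be sketched numerically. The linear mapping `ratio = 1 - stability` is an assumption made for illustration; the description only requires that the ratio decrease as the stability evaluation value increases.

```python
def standby_pool_size(total_run_queues, stability_score):
    """Map a data transceiving stability score in [0, 1] to a pool size.

    Higher stability -> lower ratio -> fewer standby queues.
    The linear mapping below is an illustrative assumption.
    """
    ratio = 1.0 - stability_score              # ratio value in (0, 1)
    # pool size = total resource amount * ratio, with at least one spare
    return max(1, round(total_run_queues * ratio))

stable_pool = standby_pool_size(64, 0.9)       # stable environment: small pool
unstable_pool = standby_pool_size(64, 0.2)     # unstable environment: large pool
```

With 64 operation queues, a highly stable environment yields only a handful of standby queues, while an unstable one reserves most of the queue budget as spares.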
In step 102, the working state of the operation queue is acquired, and the operation queue whose working state is in an abnormal state is determined as the target queue.
When the working state of an operation queue is normal, the queue performs data processing normally. When the working state is abnormal, the queue cannot process data normally, so its messages cannot be transmitted to the processor and the packet loss count for the queue's messages keeps rising.
In this embodiment of the present application, the working state of the operation queues in the intelligent network card is detected in real time. When it is detected that an operation queue cannot transmit packets to its corresponding processor and the packet loss count of packets submitted by that queue keeps increasing, the working state of the queue is judged abnormal. Unlike the prior art, the embodiment of the present application does not reset the queue; instead, the operation queue in the abnormal state is determined as the target queue.
In some embodiments, the step of obtaining the working state of the working queue and determining the working queue with the working state in the abnormal state as the target queue may include:
(1) Detecting whether statistics of the messages received by an application layer are continuously zero in a first preset time;
(2) When detecting that the statistics of the messages received by the application layer are continuously zero in a first preset time, detecting whether the packet loss statistics of the messages submitted by the operation queue are continuously increased in a second preset time;
(3) When the packet loss statistics of the operation queue submitted message is detected to continuously increase in the second preset time, the working state of the operation queue is judged to be an abnormal state, and the operation queue is determined to be a target queue.
In actual use, when an operation queue is in an abnormal state, the intelligent network card cannot transmit received packets through that queue to the corresponding processor, and hence cannot pass them further up to the application layer. The first preset time is set to avoid misjudging a queue that is only briefly stuck. Therefore, it is detected whether the statistics of messages received by the application layer remain zero for the first preset time, and when they do, the second check condition is triggered.
The second check condition is to keep detecting whether the packet loss statistics of messages submitted by the operation queue keep increasing within a second preset time. When the operation queue is abnormal, packets keep arriving while the queue cannot deliver them to the corresponding processor, so the packet loss count of submitted messages keeps rising. The second preset time is set to further avoid misjudging a temporary packet loss. When it is further detected that the packet loss statistics keep increasing within the second preset time, the working state of the operation queue is judged abnormal and the queue is determined as the target queue.
In some embodiments, the step of determining the working state of the run queue as an abnormal state may include:
(1.1) performing exception accumulation on the operation queue;
and (1.2) judging the working state of the operation queue as an abnormal state when the abnormal accumulated value reaches a preset threshold value.
To further increase the accuracy of detecting queue anomalies, when it is detected that the application layer's received-message statistics remain zero for the first preset time and, further, that the packet loss statistics of the queue's submitted messages keep increasing within the second preset time, the queue is not immediately judged abnormal. Instead, exception accumulation is performed on the queue, and a corresponding preset threshold is set as the critical value deciding whether the queue's working state is abnormal. When the exception accumulation value of the queue reaches the preset threshold, meaning the abnormal behaviour has occurred repeatedly, the working state of the queue is judged abnormal. When the accumulation value has not reached the threshold, the step of detecting whether the application layer's received-message statistics remain zero within the first preset time is executed again. The preset threshold thus effectively filters out false triggers caused by glitch signals and improves detection accuracy.
In some embodiments, when it is detected that statistics of the application layer received messages are not continuously zero within a first preset time, the abnormal accumulated value is cleared; or when the packet loss statistics of the operation queue submitted message is detected not to continuously increase in the second preset time, resetting the abnormal accumulated value.
When it is detected that the application layer's received-message statistics do not remain zero within the first preset time, the operation queue is running normally and the exception accumulation value is cleared. Similarly, when it is detected that the packet loss statistics of the queue's submitted messages do not keep increasing within the second preset time, the queue is running normally and the accumulation value is cleared. The working state of an operation queue is determined to be abnormal only when its exception accumulation value continuously rises to the threshold.
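The two-stage check with exception accumulation and clear-on-recovery can be sketched as a small state machine. The window lengths, the threshold value, and the class and parameter names are illustrative assumptions, not values from the patent.

```python
class QueueMonitor:
    """Judges an operation queue abnormal only after repeated detections.

    One observe() call represents one detection round: check 1 is whether the
    application layer received zero messages over the first preset time, and
    check 2 is whether the queue's submitted-message drop counter kept rising
    over the second preset time.
    """
    def __init__(self, threshold=3):
        self.threshold = threshold       # preset threshold for abnormal state
        self.abnormal_count = 0          # exception accumulation value

    def observe(self, app_rx_zero, drops_rising):
        """Returns True once the queue is judged to be in the abnormal state."""
        if app_rx_zero and drops_rising:
            self.abnormal_count += 1     # both checks tripped: accumulate
        else:
            self.abnormal_count = 0      # either check recovers: clear to zero
        return self.abnormal_count >= self.threshold


mon = QueueMonitor(threshold=3)
mon.observe(True, True)                  # first detection round
mon.observe(True, True)                  # second round
mon.observe(True, False)                 # drops stopped rising: counter cleared
mon.observe(True, True)                  # accumulation restarts from one
```

A transient glitch that trips both checks once or twice never reaches the threshold, which is exactly the false-trigger filtering the description aims for.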
In step 103, transmission configuration information associated with the target queue is determined.
The transmission configuration information may be the communication protocol between the target queue and its corresponding processor together with the packet allocation rules. Through the transmission configuration information, packets can be allocated to the target queue, and the allocated packets are forwarded to the target processor specified by the transmission configuration information.
The embodiment of the application determines the transmission configuration information associated with the target queue in the abnormal state, so that the subsequent queue replacement can be performed through the transmission configuration information.
In an embodiment, since the intelligent network card has multiple techniques for distributing packets to different queues and then transmitting them to the corresponding processor, for example the receive side scaling and flow director techniques, the transmission configuration information also comes in multiple forms. In this embodiment of the present application, the receive side scaling technique is taken as an example:
in some embodiments, the step of determining the transmission configuration information associated with the target queue may include: and acquiring hash configuration information associated with the target queue according to the receiver extension, wherein the hash configuration information comprises the association relation between the target hash value and the target queue.
Referring also to fig. 2b, the receive side scaling technique proposed by Microsoft determines a keyword from the packet type of a packet and then calculates a hash value from the keyword through a hash function; the hash function is usually the Microsoft Toeplitz-based hash (Microsoft Toeplitz Based Hash) or a symmetric hash. Each hash value corresponds to a different operation queue, and packets are distributed evenly across multiple operation queues. Because each operation queue corresponds to a different processor, different flows are distributed to different processors, achieving load balancing and helping to improve locality of reference and cache coherency.
Based on the above, when the target queue is a target queue generated by a receiver extension technology, hash configuration information associated with the target queue extended according to the receiver is obtained, wherein the hash configuration information includes an association relationship between a target hash value and the target queue.
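A minimal sketch of this receiver-expansion style dispatch follows; the toy hash below merely stands in for the Microsoft Toeplitz algorithm, and the queue count and table size are illustrative:

```python
NUM_QUEUES = 4

def toy_hash(key: bytes) -> int:
    # Stand-in for the Microsoft Toeplitz hash over the packet's key fields
    # (e.g. source/destination IP and port).
    h = 0
    for b in key:
        h = (h * 31 + b) & 0xFFFFFFFF
    return h

# Indirection table: hash bucket -> operation queue (one queue per processor).
indirection = [i % NUM_QUEUES for i in range(128)]

def dispatch(key: bytes) -> int:
    """Return the operation queue that receives the packet with this flow key."""
    return indirection[toy_hash(key) % len(indirection)]

# Packets of the same flow always land on the same queue (locality of
# reference), while distinct flows spread across queues (load balancing).
assert dispatch(b"flow-A") == dispatch(b"flow-A")
```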
In step 104, the standby queue is associated with the transmission configuration information, so that the standby queue replaces the target queue for data processing.
Because the working state of the target queue is abnormal, that is, the target queue cannot process data through the transmission configuration information, a corresponding standby queue needs to be selected from the standby queue resource pool at this time.
Furthermore, after the corresponding standby queue is selected, it can be associated with the transmission configuration information of the target queue, so that the data flow that originally flowed into the target queue is redirected into the corresponding standby queue. The whole operation involves no resource reloading and can be completed at the millisecond level, greatly improving the efficiency of data processing.
In some embodiments, the step of associating the standby queue with the transmission configuration information so that the standby queue replaces the target queue for data processing may include:
(1) Switching the target queue to an inactive state;
(2) Associating the standby queue with the transmission configuration information in place of the target queue, so that the standby queue replaces the target queue for data processing.
Because the working state of the target queue is abnormal, that is, the target queue cannot perform normal data processing, the target queue is switched to an inactive state, that is, closed. A corresponding standby queue then replaces the target queue and is associated with the transmission configuration information, so that the data flow that originally entered the target queue now enters the standby queue, achieving the effect of the standby queue performing data processing in place of the target queue.
In some embodiments, the step of associating the standby queue with the transmission configuration information instead of the target queue such that the standby queue performs data processing instead of the target queue may include:
(1.1) deleting the association relationship between the target hash value and the target queue;
(1.2) establishing an association between the target hash value and a standby queue, so that the standby queue replaces the target queue to work.
When the target queue was generated by the receiver expansion technology, the hash configuration information is acquired, the association between the target hash value and the target queue in the abnormal working state is deleted, and an association between the target hash value and the standby queue is established. After a data packet is hashed to the target hash value, the hash-to-queue correspondence table now directs it into the standby queue instead of the target queue, so that the standby queue takes over the work of the target queue.
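A minimal sketch of this replacement over a hash-to-queue table; the hash values and queue identifiers are illustrative, not values from the embodiment:

```python
# hash-to-queue correspondence table (illustrative values)
hash_to_queue = {0x2A: 4001, 0x3B: 4000}

def fail_over(table, target_hash, standby_queue):
    # (1.1) delete the association between the target hash value and the
    # target queue; (1.2) associate the hash with the standby queue instead.
    table.pop(target_hash, None)
    table[target_hash] = standby_queue

fail_over(hash_to_queue, 0x2A, 4002)
assert hash_to_queue[0x2A] == 4002   # traffic for this hash now enters 4002
```

Because only a table entry changes, the failed queue itself is never touched on the data path, which is what keeps the switch-over at the millisecond level.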
As can be seen from the foregoing, in the embodiment of the present application, by generating a standby queue resource pool, the standby queue resource pool includes a standby queue; acquiring the working state of an operation queue, and determining the operation queue with the working state in an abnormal state as a target queue; determining transmission configuration information associated with the target queue; and associating the standby queue with the transmission configuration information so that the standby queue replaces the target queue for data processing. Therefore, the standby queue is generated in advance, when the running queue is in an abnormal state, the running queue is determined to be the target queue, and the standby queue is associated with the transmission configuration information of the target queue, so that the standby queue can be quickly replaced with the target queue in the abnormal state to perform data processing, the target queue in the abnormal state does not need to be waited to be reset, long-time communication interruption is avoided, and the data processing efficiency is greatly improved.
Embodiment II,
The method described in accordance with embodiment one is described in further detail below by way of example.
In this embodiment, the data processing apparatus is specifically integrated in a server, and the server is described by taking a cloud host as an example; refer to the following description for details.
Referring to fig. 3, fig. 3 is another flow chart of the data processing method according to the embodiment of the present application. The method flow may include:
in step 201, the server obtains the total amount of resources of the running queue and the current data receiving and transmitting stability evaluation value, determines a corresponding proportion value according to the data receiving and transmitting stability evaluation value, and generates the amount of resources of the resource pool of the standby queue based on the total amount of resources and the proportion value.
Referring to fig. 4a, when the server initializes the queues, all the queues are in the default state, i.e. the inactive state. The server can determine the total number of running queues of the network card required by the system, and determine the corresponding total amount of resources according to the total number, the two being in direct proportion. Meanwhile, the server acquires the current data receiving and transmitting stability evaluation value, which may be based on the running state of the running queues.
Further, a corresponding proportion value is determined according to the data receiving and transmitting stability evaluation value, the proportion value being a real number larger than 0 and smaller than 1: the larger the stability evaluation value, the smaller the corresponding proportion value; the smaller the evaluation value, the larger the proportion value. The product of the total amount of resources and the proportion value is calculated, and the resource amount of the standby queue resource pool is generated from the product, so that the lower the data receiving and transmitting stability, the higher the resource amount of the standby queue resource pool, and the higher the stability, the lower the resource amount.
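The sizing rule above can be sketched as follows; the exact mapping from the stability evaluation value to the proportion value is an assumption, and only its monotonicity (lower stability, larger pool) is taken from the text:

```python
def standby_pool_resources(total_resources: int, stability: float) -> int:
    """stability is in (0, 1]; lower stability yields a larger standby pool."""
    proportion = 1.0 - stability                    # real number in (0, 1)
    proportion = min(max(proportion, 0.05), 0.95)   # keep the share sensible
    return int(total_resources * proportion)

# An unstable system reserves more standby resources than a stable one.
assert standby_pool_resources(64, 0.2) > standby_pool_resources(64, 0.9)
```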
In step 202, the server generates a resource pool of the standby queues according to the resource amount, and applies for the corresponding number of standby queues to perform activation processing.
The server generates the standby queue resource pool according to the resource amount, and applies for standby queues corresponding to the resource amount of the standby queue resource pool: the larger the resource amount, the more standby queues; the smaller the resource amount, the fewer standby queues.
Referring to fig. 4a together, the server initializes more network card queues than the system requires, initializes the user-required number of running queues through a queue mapping table, generating active queues in the corresponding Active state, and initializes the remaining queues to be activated, putting them into the standby queue resource pool to wait for use.
It should be noted that, referring to fig. 4b together, in order for the intelligent network card to distribute messages to different queues and then transmit them to different processors, the embodiment of the present application may implement the queue mapping table using the Flow Director technology proposed by Intel Corporation. A Flow Director table is stored in advance; the table includes a header and a target processor, where the header includes keyword information and the corresponding queue, and the target processor indicates which processor the queue communicates with. The size of the table is limited by hardware resources. The table records the keywords of the fields to be matched and the action after matching, and the driver is responsible for operating the table, including initializing it, adding entries, and deleting entries. After receiving a data packet from the line, the network card searches the Flow Director table according to the keyword and, according to the action in the matched entry, accurately steers the data packet into the corresponding running queue based on the fields of the packet.
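A minimal sketch of such an exact-match steering table follows; the field and class names are assumptions for illustration, not the Intel hardware interface:

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class FdirEntry:
    keyword: int    # matched packet field(s)
    queue: int      # running queue the packet is steered into
    processor: int  # processor the queue communicates with

class FlowDirectorTable:
    def __init__(self) -> None:
        self.entries: Dict[int, FdirEntry] = {}  # size limited by hardware

    def add(self, entry: FdirEntry) -> None:      # driver: add an entry
        self.entries[entry.keyword] = entry

    def delete(self, keyword: int) -> None:       # driver: delete an entry
        self.entries.pop(keyword, None)

    def lookup(self, keyword: int) -> Optional[FdirEntry]:
        # NIC: search the table by keyword after receiving a packet
        return self.entries.get(keyword)

table = FlowDirectorTable()
table.add(FdirEntry(keyword=102, queue=4001, processor=2))
assert table.lookup(102).queue == 4001
```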
In step 203, the server detects whether statistics of the application layer received messages last for zero in a first preset time.
Because the intelligent network card comprises a plurality of operation queues, in actual use an operation queue may easily enter an abnormal state, that is, become hung up. When an operation queue is hung up, it is blocked, and messages cannot be transmitted to the corresponding processor, that is, cannot be delivered to the application layer.
Therefore, the server may detect whether the statistics of the application layer received messages are continuously zero in the first preset time, and execute step 204 when the server detects that the statistics of the application layer received messages are continuously zero in the first preset time. When the server detects that the statistics of the application layer received message do not last for zero in the first preset time, step 206 is executed.
In step 204, the server detects whether the packet loss statistics of the run queue commit message continue to increase within a second preset time.
When the running queue is hung up, the queue is blocked, and the server detects that the statistics of the messages received by the application layer are continuously zero in the first preset time; at this time, because the intelligent network card is still receiving messages, the packet loss statistics of the messages submitted by the running queue keep increasing.
Therefore, when the server detects that the statistics of the application layer received messages are continuously zero in the first preset time, it is further required to continuously detect whether the packet loss statistics of the running queue submitted messages are continuously increased in the second preset time, when the packet loss statistics of the running queue submitted messages are detected to be continuously increased in the second preset time, step 205 is executed, and when the packet loss statistics of the running queue submitted messages are detected to be not continuously increased in the second preset time, step 206 is executed.
In step 205, the server performs exception accumulation on the run queue.
When the server detects that the running queue in the intelligent network card cannot upload received messages to the corresponding processor, and the packet loss count of the messages submitted by the running queue keeps increasing, the server performs abnormal accumulation on the running queue.
In step 206, the server clears the anomaly cumulative value.
And when the server detects that the packet loss statistics of the message submitted by the operation queue is not continuously increased in the second preset time, the operation state of the operation queue is restored to be normal, and the abnormal accumulated value is cleared.
In step 207, the server detects whether the abnormal accumulated value reaches a preset threshold.
Wherein, the server continuously counts the accumulated value of the running queue in which the abnormal state continuously occurs, and when the server detects that the abnormal accumulated value reaches the preset threshold, step 208 is executed. When the server detects that the abnormal accumulated value does not reach the preset threshold, the method returns to step 203 to continue to detect the abnormal state.
In step 208, the server determines the working state of the run queue as an abnormal state and determines the run queue as a target queue.
When the server detects that the abnormal accumulated value of a running queue in which abnormal states continuously occur reaches the preset threshold, the working state of the running queue is determined to be abnormal, and the running queue is determined to be the target queue.
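The detection logic of steps 203 through 208 can be sketched as a small state machine; the threshold value and the class and method names here are assumptions:

```python
THRESHOLD = 3  # preset threshold (assumed value)

class QueueMonitor:
    def __init__(self) -> None:
        self.anomaly_count = 0
        self.abnormal = False

    def sample(self, rx_stayed_zero: bool, drops_kept_rising: bool) -> None:
        if rx_stayed_zero and drops_kept_rising:
            self.anomaly_count += 1            # step 205: accumulate anomaly
            if self.anomaly_count >= THRESHOLD:
                self.abnormal = True           # step 208: mark as target queue
        else:
            self.anomaly_count = 0             # step 206: clear the counter

m = QueueMonitor()
m.sample(True, True)
m.sample(True, False)        # drops stopped rising -> counter cleared
for _ in range(THRESHOLD):
    m.sample(True, True)     # sustained anomaly -> abnormal state
assert m.abnormal and m.anomaly_count == THRESHOLD
```

Requiring both conditions to persist before declaring the queue abnormal avoids misclassifying a momentarily idle queue as hung.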
In step 209, the server obtains a queue mapping table for the target queue.
When the target queue is a target queue generated by the flow director technology, the corresponding queue mapping table of the target queue is obtained as follows:
Header      Target processor
102.4001    2
10.4000     4

TABLE 1
As shown in table 1, the queue mapping table includes a header and a target processor. The header includes keyword information and the corresponding queue; for example, "102.4001" indicates a keyword of 102 and a corresponding queue of 4001. The target processor column indicates that the header "102.4001" communicates with target processor 2.
In step 210, the server switches the target queue to an inactive state, modifies the queue mapping table, deletes the mapping relationship between the target keyword information in the queue mapping table and the target queue, and establishes the mapping relationship between the target keyword information and the standby queue, so that the standby queue replaces the target queue to work.
Referring to fig. 4a together, the server switches the active target queue back to the inactive state and modifies the queue mapping table: it deletes the mapping relationship between the target keyword information and the target queue in the queue mapping table, that is, deletes the header, and establishes the mapping relationship between the target keyword information and the standby queue, that is, establishes the header corresponding to the target keyword information and the standby queue. For example, when the target queue is "4001" and the standby queue is "4002", the mapping "102.4001" between the target keyword and the target queue is deleted, and the mapping "102.4002" between the target keyword information "102" and the standby queue "4002" is established, so that the data stream that flowed into the target queue "4001" through the keyword information "102" now flows into the standby queue "4002", achieving the effect of the standby queue replacing the target queue for data processing.
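The remapping of step 210, using the example values above ("102.4001" replaced by "102.4002"), can be sketched as:

```python
queue_map = {102: 4001, 103: 4000}   # keyword -> queue ("102.4001" headers)

def replace_queue(mapping, keyword, standby_queue):
    del mapping[keyword]              # delete the "102.4001" header
    mapping[keyword] = standby_queue  # establish the "102.4002" header

replace_queue(queue_map, 102, 4002)
assert queue_map[102] == 4002        # keyword 102 now steers into queue 4002
```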
Referring to fig. 4c, after receiving the message, the intelligent network card enters different running queues respectively through a queue mapping technology, the server determines a target queue with an abnormal working state from the running queues, modifies a queue mapping table, deletes the mapping relation between the target keyword information in the queue mapping table and the target queue, and establishes the mapping relation between the target keyword information and the standby queue, so that the standby queue replaces the target queue to work.
In some embodiments, please refer to fig. 4d together, after receiving a message, the intelligent network card enters into different running queue groups respectively through a queue mapping technology, wherein each running queue group includes 2 queues, namely a running queue 0 and a corresponding running queue 1 (standby queue), the server determines a target queue whose working state is an abnormal state from the running queues, modifies a queue mapping table, deletes the mapping relation between target keyword information and the target queue in the queue mapping table, and establishes the mapping relation between the target keyword information and the standby queues in the same group, so that the standby queue replaces the target queue to work.
In step 211, when the server detects that the system is in an idle state, the target queue is controlled to be reset, and the reset target queue is moved to the standby queue resource pool and changed into the standby queue.
Referring to fig. 4a, when the server detects that the system is in an idle state, the target queue in an inactive state is controlled to be reset, the reset target queue is moved to the standby queue resource pool and changed to be a standby queue for next use, so that the running queue and the standby queue are always in a balanced state.
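The recycling of step 211 can be sketched as follows; the queue identifiers and function names are illustrative, and the reset itself is elided:

```python
standby_pool = [4003]   # standby queue resource pool (illustrative ids)

def recycle(target_queue: int, pool: list, system_idle: bool) -> bool:
    """Reset the failed queue and return it to the pool only when idle."""
    if not system_idle:
        return False
    # reset the inactive target queue (details elided), then hand it back
    # to the pool as a standby queue, keeping running/standby counts balanced
    pool.append(target_queue)
    return True

assert recycle(4001, standby_pool, system_idle=True)
assert 4001 in standby_pool
```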
As can be seen from the foregoing, in this embodiment of the present application, the server obtains the total amount of resources of the running queues and the current data transceiving stability evaluation value to generate an appropriate resource amount for the standby queue resource pool, generates the standby queue resource pool according to that resource amount, and applies for a corresponding number of standby queues to activate. When detecting that the accumulated count of anomalies continuously appearing in a running queue exceeds the preset threshold, the server determines the working state of the running queue as abnormal, determines the running queue as the target queue, obtains the queue mapping table of the target queue, switches the target queue to the inactive state, and modifies the queue mapping table by deleting the mapping relationship between the target keyword information and the target queue and establishing a mapping relationship between the target keyword information and the standby queue, so that the standby queue replaces the target queue. When detecting that the system is in an idle state, the server controls the target queue to be reset, moves the reset target queue into the standby queue resource pool, and changes it into a standby queue. Therefore, there is no need to wait for the target queue in the abnormal state to be reset, long communication interruptions are avoided, and data processing efficiency is greatly improved.
Furthermore, the target queue is returned to the standby queue resource pool after being reset, so that the data processing efficiency is further improved.
Third embodiment,
In order to facilitate better implementation of the data processing method provided by the embodiment of the application, the embodiment of the application also provides a device based on the data processing method. Where the meaning of a noun is the same as in the data processing method described above, specific implementation details may be referred to in the description of the method embodiments.
Referring to fig. 5a, fig. 5a is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application, where the data processing apparatus may include a generating unit 301, an obtaining unit 302, a determining unit 303, an associating unit 304, and so on.
A generating unit 301, configured to generate a standby queue resource pool, where the standby queue resource pool includes a standby queue.
When the cloud host initializes the running queues, the generating unit 301 generates a standby queue resource pool and applies for more queues than the system requires. The standby queue resource pool may be a dedicated memory space, and the queues stored in the standby queue resource pool are called standby queues, that is, the standby queue resource pool includes standby queues. The number of standby queues is determined by the size of the storage space of the standby queue resource pool: the larger the storage space, the more standby queues; the smaller the storage space, the fewer standby queues.
In some embodiments, as shown in fig. 5b, the generating unit 301 includes:
a receiving subunit 3011, configured to receive a resource amount of the resource pool of the standby queue;
and the application subunit 3012 is configured to generate a resource pool of the standby queues according to the resource amount, and apply for a corresponding number of standby queues to perform activation processing.
In some embodiments, the receiving subunit 3011 is configured to obtain a total amount of resources of the running queue and a current data transceiving stability evaluation value; determining a corresponding proportion value according to the data receiving and transmitting stability evaluation value; and generating the resource quantity of the standby queue resource pool based on the total resource quantity and the proportion value.
An obtaining unit 302, configured to obtain a working state of the operation queue, and determine the operation queue whose working state is in an abnormal state as a target queue.
When the working state of the operation queue is normal, the operation queue performs data processing normally; when the working state is abnormal, the operation queue cannot perform normal data processing, so its messages cannot be transmitted to the processor and the packet loss rate of the messages of the operation queue keeps increasing.
In this embodiment of the present application, the acquiring unit 302 detects the working state of the running queue in the intelligent network card in real time, and determines the working state of the running queue as an abnormal state when detecting that the running queue cannot transmit a packet to a corresponding processor and the packet loss rate of the submitted packet of the running queue is continuously increased.
In some embodiments, as shown in fig. 5c, the obtaining unit 302 includes:
the first detecting unit 3021 is configured to detect whether statistics of the application layer received messages are continuously zero within a first preset time.
A second detecting unit 3022, configured to detect, when it is detected that the statistics of the application layer received messages continue to be zero within the first preset time, whether the packet loss statistics of the running queue submitted messages continue to increase within the second preset time.
A zero clearing unit 3023, configured to zero-clear the abnormal accumulated value when it is detected that the statistics of the application layer received packet does not last for zero in the first preset time; or when the packet loss statistics of the operation queue submitted message is detected not to continuously increase in the second preset time, resetting the abnormal accumulated value.
And the determining subunit 3024 is configured to determine, when it is detected that the packet loss statistics of the running queue commit packet continue to increase within the second preset time, the working state of the running queue as an abnormal state, and determine the running queue as the target queue.
In some embodiments, the determining subunit 3024 is configured to: performing exception accumulation on the operation queue; when the abnormal accumulated value reaches a preset threshold value, the working state of the running queue is judged to be an abnormal state, and the running queue is determined to be a target queue.
A determining unit 303, configured to determine transmission configuration information associated with the target queue.
The transmission configuration information may be a communication protocol between the target queue and the corresponding processor and a rule of message allocation, through which the target queue may be allocated to the corresponding message, and the allocated message is directionally transmitted to the target processor according to the target processor specified by the transmission configuration information.
The determination unit 303 determines transmission configuration information associated with the target queue in which the abnormal state occurs so that the subsequent queue replacement can be performed by the transmission configuration information.
In one embodiment, since the intelligent network card has multiple technologies for distributing the message to different queues and further to corresponding processors, such as receiver expansion and flow director technologies, the transmission configuration information also includes multiple types.
In some embodiments, the determining unit 303 is configured to: and acquiring hash configuration information associated with the target queue according to the receiver extension, wherein the hash configuration information comprises the association relation between the target hash value and the target queue.
In some embodiments, the determining unit 303 is further configured to: and obtaining a queue mapping table of the target queue, wherein the queue mapping table comprises the mapping relation between the target keyword information and the target queue.
And the associating unit 304 is configured to associate the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing.
Because the working state of the target queue is abnormal, that is, the target queue cannot process data through the transmission configuration information, the association unit 304 needs to select a corresponding standby queue from the standby queue resource pool at this time. In an embodiment, the association unit 304 may select the standby queue randomly, or a specific standby queue may be selected in advance from the standby queue resource pool and bound into a group with the target queue, so that when the corresponding standby queue is needed, the standby queue grouped with the target queue can be selected.
Furthermore, after the corresponding standby queue is selected, the association unit 304 may associate the transmission configuration information associated with the standby queue and the target queue, so that the data flow is changed from the target queue to the corresponding standby queue, and the whole operation process has no resource reload and can be controlled at millisecond level, thereby greatly improving the efficiency of data processing.
In some embodiments, as shown in fig. 5d, the association unit 304 includes:
a switching subunit 3041, configured to switch the target queue to an inactive state.
A substitution subunit 3042, configured to associate the standby queue with the transmission configuration information instead of the target queue, so that the standby queue performs data processing instead of the target queue.
In some embodiments, the replacing subunit 3042 is configured to delete the association relationship between the target hash value and the target queue; and establishing an association relation between the target hash value and the standby queue so that the standby queue replaces the target queue to work.
In some embodiments, the replacing subunit 3042 is configured to modify the queue mapping table, and delete the mapping relationship between the target key information and the target queue in the queue mapping table; and establishing a mapping relation between the target keyword information and the standby queue, so that the standby queue replaces the target queue to work.
In some embodiments, as shown in fig. 5d, the data processing apparatus may further comprise:
a reset unit 305 for controlling the target queue to be reset when detecting that the system is in an idle state; and moving the reset target queue to a standby queue resource pool, and changing the target queue into a standby queue.
The specific implementation of each unit can be referred to the previous embodiments, and will not be repeated here.
As can be seen from the foregoing, in the embodiment of the present application, the generating unit 301 generates a standby queue resource pool, where the standby queue resource pool includes a standby queue; the acquiring unit 302 acquires the working state of the operation queue, and determines the operation queue with the working state in an abnormal state as a target queue; the determining unit 303 determines transmission configuration information associated with the target queue; the association unit 304 associates the reserve queue with the transmission configuration information so that the reserve queue replaces the target queue for data processing. Therefore, the standby queue is generated in advance, when the running queue is in an abnormal state, the running queue is determined to be the target queue, and the standby queue is associated with the transmission configuration information of the target queue, so that the standby queue can be quickly replaced with the target queue in the abnormal state to perform data processing, the target queue in the abnormal state does not need to be waited to be reset, long-time communication interruption is avoided, and the data processing efficiency is greatly improved.
Fourth embodiment,
The embodiment of the application also provides a server, as shown in fig. 6, which shows a schematic structural diagram of the server according to the embodiment of the application, specifically:
the server may be a cloud host and may include one or more processors 401 of a processing core, one or more memories 402 of a computer readable storage medium, a power supply 403, an input unit 404, and the like. Those skilled in the art will appreciate that the server architecture shown in fig. 6 is not limiting of the server and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. Wherein:
the processor 401 is a control center of the server, connects respective portions of the entire server using various interfaces and lines, and performs various functions of the server and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall inspection of the server. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, an application program, etc., and the modem processor mainly processes wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by executing the software programs and modules stored in the memory 402. The memory 402 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data created according to the use of the server, etc. In addition, memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The server also includes a power supply 403 for powering the various components, and preferably, the power supply 403 may be logically connected to the processor 401 by a power management system so as to implement functions such as charge, discharge, and power consumption management by the power management system. The power supply 403 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The server may further include an input unit 404, which may be used to receive entered numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the server may further include a display unit and the like, which are not described here. Specifically, in this embodiment, the processor 401 in the server loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing the following functions:
generating a standby queue resource pool, where the standby queue resource pool includes a standby queue; acquiring the working state of a run queue, and determining a run queue whose working state is an abnormal state as a target queue; determining transmission configuration information associated with the target queue; and associating the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For the parts of an embodiment that are not described in detail, reference may be made to the detailed description of the data processing method above, which is not repeated here.
As can be seen from the foregoing, the server in this embodiment of the present application may generate a standby queue resource pool, where the standby queue resource pool includes a standby queue; acquire the working state of a run queue, and determine a run queue whose working state is an abnormal state as a target queue; determine transmission configuration information associated with the target queue; and associate the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing. Because the standby queue is generated in advance, when a run queue enters an abnormal state it is determined as the target queue, and the standby queue is associated with the transmission configuration information of the target queue. The standby queue can therefore quickly replace the abnormal target queue for data processing without waiting for the abnormal target queue to be reset, which avoids a long communication interruption and greatly improves data processing efficiency.
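As a concrete illustration of the flow above, the following Python sketch (not part of the patent; the names `Queue`, `StandbyPool`, and `fail_over` are hypothetical) shows a pre-generated pool whose spare queue takes over a failed run queue's transmission configuration without waiting for the failed queue to reset:

```python
# Illustrative sketch only: a standby-queue pool that swaps a spare queue
# into a failed run queue's transmission configuration.

class Queue:
    def __init__(self, qid):
        self.qid = qid
        self.state = "active"      # "active" | "inactive"
        self.config = None         # transmission configuration, if bound

class StandbyPool:
    """Pre-generated pool of activated spare queues."""
    def __init__(self, size):
        self.spares = [Queue(f"standby-{i}") for i in range(size)]

    def take(self):
        return self.spares.pop() if self.spares else None

def fail_over(target, pool):
    """Replace an abnormal run queue with a spare from the pool."""
    target.state = "inactive"      # step 1: deactivate the target queue
    spare = pool.take()
    if spare is None:
        raise RuntimeError("standby pool exhausted")
    # step 2: rebind the target's transmission configuration to the spare
    spare.config, target.config = target.config, None
    return spare
```

Because the spares are created and activated ahead of time, the only work on the failure path is the configuration rebind, which is what keeps the interruption short.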
V. Fifth Embodiment
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods in the foregoing embodiments may be performed by instructions, or performed by instructions controlling related hardware, and the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer readable storage medium having stored therein a plurality of instructions capable of being loaded by a processor to perform steps in any of the data processing methods provided by the embodiments of the present application. For example, the instructions may perform the steps of:
generating a standby queue resource pool, where the standby queue resource pool includes a standby queue; acquiring the working state of a run queue, and determining a run queue whose working state is an abnormal state as a target queue; determining transmission configuration information associated with the target queue; and associating the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
Because the instructions stored in the computer-readable storage medium can perform the steps of any data processing method provided in the embodiments of the present application, they can achieve the beneficial effects that can be achieved by any data processing method provided in the embodiments of the present application. For details, refer to the foregoing embodiments, which are not repeated here.
The data processing method, apparatus, and computer-readable storage medium provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the foregoing embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may make changes to the specific implementations and application scope in light of the ideas of the present application. In conclusion, the content of this specification should not be construed as limiting the present application.

Claims (16)

1. A method of data processing, comprising:
generating a standby queue resource pool, wherein the standby queue resource pool comprises a standby queue;
if it is detected that a run queue cannot transfer messages to an application layer and that the packet loss rate of messages submitted by the run queue keeps increasing, determining the working state of the run queue to be an abnormal state, and determining the run queue whose working state is in the abnormal state as a target queue;
determining transmission configuration information associated with the target queue, wherein the transmission configuration information comprises a communication protocol and a message distribution rule;
and associating the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing.
2. The data processing method according to claim 1, wherein the step of associating the standby queue with the transmission configuration information so that the standby queue replaces the target queue for data processing comprises:
switching the target queue to an inactive state;
and associating the standby queue, instead of the target queue, with the transmission configuration information, so that the standby queue replaces the target queue for data processing.
3. The data processing method of claim 2, wherein the step of determining the transmission configuration information associated with the target queue comprises:
obtaining hash configuration information associated with the target queue according to receive-side scaling, wherein the hash configuration information comprises an association relationship between a target hash value and the target queue;
the step of associating the standby queue, instead of the target queue, with the transmission configuration information so that the standby queue replaces the target queue for data processing comprises:
deleting the association relationship between the target hash value and the target queue;
and establishing an association relationship between the target hash value and the standby queue, so that the standby queue works in place of the target queue.
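Assuming the hash configuration is a table from hash values to queues, the reassociation described in claim 3 can be sketched as follows (illustrative Python; `reassociate_hash` and the table layout are assumptions, not the patent's interface):

```python
# Hypothetical sketch: the dispatch table maps each hash value to a queue
# identifier. Failover deletes the target queue's associations and points
# the same hash values at the standby queue, so traffic that hashed to the
# failed queue now lands on the spare.

def reassociate_hash(dispatch_table, target_queue, standby_queue):
    """Rebind every hash value owned by target_queue to standby_queue."""
    for hash_value, queue in dispatch_table.items():
        if queue == target_queue:
            # delete the old association and establish the new one in place
            dispatch_table[hash_value] = standby_queue
    return dispatch_table
```

Only the entries owned by the target queue change, so the other run queues keep receiving their traffic undisturbed during the swap.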
4. The data processing method of claim 2, wherein the step of determining the transmission configuration information associated with the target queue comprises:
obtaining a queue mapping table of the target queue, wherein the queue mapping table comprises a mapping relation between target keyword information and the target queue;
the step of associating the standby queue, instead of the target queue, with the transmission configuration information so that the standby queue replaces the target queue for data processing comprises:
modifying the queue mapping table to delete the mapping relationship between the target keyword information and the target queue in the queue mapping table;
and establishing a mapping relationship between the target keyword information and the standby queue, so that the standby queue works in place of the target queue.
5. The data processing method according to any one of claims 2 to 4, wherein after the step of associating the standby queue, instead of the target queue, with the transmission configuration information so that the standby queue replaces the target queue for data processing, the method further comprises:
when it is detected that the system is in an idle state, controlling the target queue to reset;
and moving the reset target queue into the standby queue resource pool, so that the target queue becomes a standby queue.
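A minimal sketch of this reclaim step (illustrative Python; the dict-based queue representation and the `system_idle` flag are assumptions): the replaced queue is reset only during an idle window and then returned to the pool as a fresh standby queue.

```python
def reclaim(target, pool, system_idle):
    """Reset a replaced target queue and move it into the standby pool.

    target: dict with "state" and "config" keys (hypothetical layout).
    pool:   list acting as the standby queue resource pool.
    """
    if not system_idle:
        return False               # defer the reset until the system is idle
    target["state"] = "standby"    # reset: the queue is clean and unbound
    target["config"] = None
    pool.append(target)            # the target queue becomes a standby queue
    return True
```

Deferring the reset to an idle window keeps the potentially slow reset off the failover path, which is what makes the quick swap in the earlier claims possible.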
6. The method according to any one of claims 1 to 4, wherein the step of, if it is detected that the run queue cannot transfer messages to the application layer and that the packet loss rate of messages submitted by the run queue keeps increasing, determining the working state of the run queue to be an abnormal state and determining the run queue whose working state is in the abnormal state as a target queue comprises:
detecting whether the statistics of messages received by the application layer remain zero throughout a first preset time;
when it is detected that the statistics of messages received by the application layer remain zero throughout the first preset time, detecting whether the packet loss statistics of messages submitted by the run queue keep increasing throughout a second preset time;
and when it is detected that the packet loss statistics of messages submitted by the run queue keep increasing throughout the second preset time, determining the working state of the run queue to be an abnormal state, and determining the run queue as the target queue.
7. The data processing method according to claim 6, wherein the step of determining the working state of the run queue to be an abnormal state comprises:
performing anomaly accumulation for the run queue;
and when the anomaly accumulation value reaches a preset threshold, determining the working state of the run queue to be an abnormal state.
8. The method according to claim 7, wherein before the step of determining the working state of the run queue to be an abnormal state, the method further comprises:
when it is detected that the statistics of messages received by the application layer do not remain zero throughout the first preset time, resetting the anomaly accumulation value; or
when it is detected that the packet loss statistics of messages submitted by the run queue do not keep increasing throughout the second preset time, resetting the anomaly accumulation value.
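The two-stage detection with anomaly accumulation in claims 6 to 8 can be sketched as follows (illustrative Python; the window sampling, the threshold value, and the `observe` interface are assumptions, not specified by the patent):

```python
# Hypothetical sketch: per detection round, the application-layer receive
# statistics must stay at zero over window T1 AND the queue's packet-loss
# counter must keep growing over window T2. Each confirmed round bumps an
# anomaly accumulator; the queue is declared abnormal only at a threshold,
# and either check failing resets the accumulator (claims 7 and 8).

class AnomalyDetector:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.count = 0             # anomaly accumulation value

    def observe(self, rx_stats_window, drop_stats_window):
        """rx_stats_window / drop_stats_window: samples over T1 / T2."""
        rx_all_zero = all(v == 0 for v in rx_stats_window)
        drops_rising = all(b > a for a, b in zip(drop_stats_window,
                                                 drop_stats_window[1:]))
        if rx_all_zero and drops_rising:
            self.count += 1        # accumulate one confirmed anomaly round
        else:
            self.count = 0         # either check failed: reset accumulator
        return self.count >= self.threshold   # True => abnormal state
```

Requiring both conditions over consecutive rounds guards against declaring a queue abnormal on a transient lull in traffic.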
9. The method of any one of claims 1 to 4, wherein the step of generating a pool of reserve queue resources comprises:
receiving the resource quantity of a standby queue resource pool;
and generating the standby queue resource pool according to the resource quantity, and applying for and activating a corresponding quantity of standby queues.
10. The data processing method of claim 9, wherein the step of receiving the amount of resources of the reserve queue resource pool comprises:
acquiring the total resource quantity of the run queues and a current data transceiving stability evaluation value;
determining a corresponding ratio value according to the data transceiving stability evaluation value;
and generating the resource quantity of the standby queue resource pool based on the total resource quantity and the ratio value.
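Claim 10 derives the pool size from the total run-queue resources and a transceiving stability score. A sketch under an assumed score-to-ratio mapping (the patent only says a ratio is derived from the evaluation value; the concrete ratios and the [0, 1] score range here are illustrative):

```python
def standby_pool_size(total_queues, stability_score):
    """Map a stability score in [0, 1] to a standby-queue count.

    A less stable link reserves a larger share of the total queue
    resources as standby (ratio values are assumptions).
    """
    if stability_score >= 0.9:
        ratio = 0.05               # very stable: small reserve
    elif stability_score >= 0.6:
        ratio = 0.10
    else:
        ratio = 0.20               # unstable link: larger reserve
    return max(1, round(total_queues * ratio))
```

Tying the reserve to observed stability avoids wasting queue resources on a healthy link while keeping enough spares when failures are frequent.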
11. A data processing apparatus, comprising:
a generating unit, configured to generate a standby queue resource pool, wherein the standby queue resource pool comprises a standby queue;
an acquisition unit, configured to: if it is detected that a run queue cannot transfer messages to an application layer and that the packet loss rate of messages submitted by the run queue keeps increasing, determine the working state of the run queue to be an abnormal state, and determine the run queue whose working state is in the abnormal state as a target queue;
a determining unit, configured to determine transmission configuration information associated with the target queue, wherein the transmission configuration information comprises a communication protocol and a message distribution rule;
and an association unit, configured to associate the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing.
12. The processing apparatus according to claim 11, wherein the association unit includes:
a switching subunit, configured to switch the target queue to an inactive state;
and a replacing subunit, configured to associate the standby queue, instead of the target queue, with the transmission configuration information, so that the standby queue replaces the target queue for data processing.
13. The processing apparatus according to claim 12, wherein the determining unit is configured to:
obtain hash configuration information associated with the target queue according to receive-side scaling, wherein the hash configuration information comprises an association relationship between a target hash value and the target queue;
the replacing subunit is configured to:
delete the association relationship between the target hash value and the target queue;
and establish an association relationship between the target hash value and the standby queue, so that the standby queue works in place of the target queue.
14. The processing apparatus according to claim 12, wherein the determining unit is configured to:
obtain a queue mapping table of the target queue, wherein the queue mapping table comprises a mapping relationship between target keyword information and the target queue;
the replacing subunit is configured to:
modify the queue mapping table to delete the mapping relationship between the target keyword information and the target queue in the queue mapping table;
and establish a mapping relationship between the target keyword information and the standby queue, so that the standby queue works in place of the target queue.
15. A computer readable storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor for performing the steps in the data processing method according to any of claims 1 to 10.
16. A server comprising a processor and a memory, the memory storing a plurality of instructions; the processor loads instructions from the memory to perform the steps in the data processing method according to any of claims 1 to 10.
CN201911071773.8A 2019-11-05 2019-11-05 Data processing method, device and computer readable storage medium Active CN111190745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911071773.8A CN111190745B (en) 2019-11-05 2019-11-05 Data processing method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911071773.8A CN111190745B (en) 2019-11-05 2019-11-05 Data processing method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111190745A CN111190745A (en) 2020-05-22
CN111190745B true CN111190745B (en) 2024-01-30

Family

ID=70709087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911071773.8A Active CN111190745B (en) 2019-11-05 2019-11-05 Data processing method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111190745B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920717B (en) * 2020-07-08 2023-08-22 腾讯科技(深圳)有限公司 Information processing method, device, electronic equipment and storage medium
CN112764896A (en) * 2020-12-31 2021-05-07 广州技象科技有限公司 Task scheduling method, device and system based on standby queue and storage medium
CN113300979A (en) * 2021-02-05 2021-08-24 阿里巴巴集团控股有限公司 Network card queue creating method and device under RDMA (remote direct memory Access) network
CN113301103B (en) * 2021-02-05 2024-03-12 阿里巴巴集团控股有限公司 Data processing system, method and device
CN113810228A (en) * 2021-09-13 2021-12-17 中国人民银行清算总中心 Message queue channel resetting method and device
CN114257492B (en) * 2021-12-09 2023-11-28 北京天融信网络安全技术有限公司 Fault processing method and device for intelligent network card, computer equipment and medium
CN114640574B (en) * 2022-02-28 2023-11-28 天翼安全科技有限公司 Main and standby equipment switching method and device
CN115086203B (en) * 2022-06-15 2024-03-08 中国工商银行股份有限公司 Data transmission method, device, electronic equipment and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789152A (en) * 2016-11-17 2017-05-31 东软集团股份有限公司 Processor extended method and device based on many queue network interface cards
CN109495540A (en) * 2018-10-15 2019-03-19 深圳市金证科技股份有限公司 A kind of method, apparatus of data processing, terminal device and storage medium
WO2019061647A1 (en) * 2017-09-26 2019-04-04 平安科技(深圳)有限公司 Queue message processing method and device, terminal device and medium
CN109976919A (en) * 2017-12-28 2019-07-05 北京京东尚科信息技术有限公司 A kind of transmission method and device of message request
CN110266551A (en) * 2019-07-29 2019-09-20 腾讯科技(深圳)有限公司 A kind of bandwidth prediction method, apparatus, equipment and storage medium
CN110290217A (en) * 2019-07-01 2019-09-27 腾讯科技(深圳)有限公司 Processing method and processing device, storage medium and the electronic device of request of data


Also Published As

Publication number Publication date
CN111190745A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN111190745B (en) Data processing method, device and computer readable storage medium
CN112437018B (en) Flow control method, device, equipment and storage medium of distributed cluster
US10817497B2 (en) Migration flow control
JP5865820B2 (en) Information processing apparatus, program, and job control method
US20160170469A1 (en) Power balancing to increase workload density and improve energy efficiency
US9037703B1 (en) System and methods for managing system resources on distributed servers
WO2021258753A1 (en) Service processing method and apparatus, and electronic device and storage medium
US20120297216A1 (en) Dynamically selecting active polling or timed waits
US10884667B2 (en) Storage controller and IO request processing method
CN111913670B (en) Processing method and device for load balancing, electronic equipment and storage medium
CN104102543A (en) Load regulation method and load regulation device in cloud computing environment
CN106790565A (en) A kind of network attached storage group system
US8914582B1 (en) Systems and methods for pinning content in cache
KR102469927B1 (en) Apparatus for managing disaggregated memory and method for the same
US11438271B2 (en) Method, electronic device and computer program product of load balancing
CN111857992B (en) Method and device for allocating linear resources in Radosgw module
CN112600761A (en) Resource allocation method, device and storage medium
KR20150007698A (en) Load distribution system for virtual desktop service
CN109739634A (en) A kind of atomic task execution method and device
US20070266083A1 (en) Resource brokering method, resource brokering apparatus, and computer product
CN113268329A (en) Request scheduling method, device and storage medium
JP2007328413A (en) Method for distributing load
CN112615795A (en) Flow control method and device, electronic equipment, storage medium and product
CN112118314A (en) Load balancing method and device
CN114785739A (en) Method, device, equipment and medium for controlling service quality of logical volume

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant