CN116755891B - Event queue processing method and system based on multithreading - Google Patents


Info

Publication number
CN116755891B
Authority
CN
China
Prior art keywords
event data
thread
event
processing
queue
Prior art date
Legal status
Active
Application number
CN202311048068.2A
Other languages
Chinese (zh)
Other versions
CN116755891A (en)
Inventor
樊骥
韩洋
钟采奕
李牧
朱谨颋
Current Assignee
Chengdu Zhongke Hexun Technology Co ltd
Original Assignee
Chengdu Zhongke Hexun Technology Co ltd
Application filed by Chengdu Zhongke Hexun Technology Co ltd filed Critical Chengdu Zhongke Hexun Technology Co ltd
Priority to CN202311048068.2A priority Critical patent/CN116755891B/en
Publication of CN116755891A publication Critical patent/CN116755891A/en
Application granted granted Critical
Publication of CN116755891B publication Critical patent/CN116755891B/en


Classifications

    • G06F 9/5038 — Allocation of resources (e.g. of the CPU) to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F 9/5061 — Partitioning or combining of resources
    • G06F 9/542 — Event management; Broadcasting; Multicasting; Notifications
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the field of computer data processing and provides a multithreading-based event queue processing method and system. Event data are extracted from a user side and screened into a plurality of event data sets according to their data attributes, so that the event data are classified and can later be assigned to threads accurately. The working state of each thread of the server side is also determined, providing a reliable basis for subsequently assigning event data to designated threads. Each event data set is directionally transmitted to its matching thread, and the received event data sets are arranged into event data queues, improving the efficiency and accuracy of identifying and screening each piece of event data. The event data belonging to a queue are then distributed to the thread for processing to obtain event data processing results; the results are verified, identified, and returned to the user side, so that the memory occupied by the thread is released in time and the memory use efficiency of the server side is improved.

Description

Event queue processing method and system based on multithreading
Technical Field
The present invention relates to the field of computer data processing technology, and in particular, to a method and a system for processing an event queue based on multithreading.
Background
A thread is the smallest unit of execution that an operating system of a terminal such as a computer can schedule. When such a terminal receives event data to be processed, it calls a matching thread to process the data; when scheduling the thread, only whether the thread is capable of processing the event data is usually considered, not how long the thread will take or how efficiently it will process. Moreover, terminals such as computers call threads in the order in which event data are received, i.e., event data received earlier are processed first. Although this guarantees that threads are assigned to event data fairly, it cannot guarantee processing efficiency: when event data with a larger volume or a more complex structure are processed first, the thread takes longer to finish, subsequent event data cannot obtain a thread in time, the backlog of event data keeps growing, and reliable, timely thread processing cannot be provided.
Disclosure of Invention
To address the defects of the prior art, the invention provides a multithreading-based event queue processing method and system. Event data are extracted from a user side and screened into a plurality of event data sets according to their data attributes, so that the event data are classified and can later be assigned to threads accurately; the working state of each thread of the server side is determined, providing a reliable basis for subsequently assigning event data to designated threads; each event data set is directionally transmitted to its matching thread, and the received event data sets are arranged into event data queues, improving the efficiency and accuracy of identifying and screening each piece of event data; and the event data belonging to a queue are distributed to the thread for processing to obtain event data processing results, which are verified, identified, and returned to the user side. The user side thus obtains accurate and reliable results and can conveniently arrange and combine them, while the memory allocation state of the server side to the threads is adjusted so that memory occupied by threads is released in time and the memory use efficiency of the server side is improved.
The invention provides a multithreading-based event queue processing method, which comprises the following steps:
Step S1, extracting corresponding event data from a task directory of a user side according to an event processing request from the user side; and screening and distinguishing all the event data according to their data attributes to obtain a plurality of event data sets;
Step S2, determining the working state information of each thread according to a thread work log of a server side; comparing each event data set with all the threads, determining the thread matching each event data set, and directionally transmitting each event data set to its matching thread; and arranging the event data sets received by each thread to obtain event data queues;
Step S3, distributing the event data belonging to an event data queue to the thread for processing according to the working state information of the thread, to obtain corresponding event data processing results; verifying the event data processing results, and judging, according to the verification results, whether the corresponding event data need secondary processing;
Step S4, performing event identification on the final event data processing results of the event data belonging to the event data queue, and returning them to the user side; and adjusting the memory allocation state of the server side to the thread according to the running state of the thread at the server side.
In one embodiment of the present disclosure, in the step S1, extracting corresponding event data from a task directory of a user side according to an event processing request from the user side, and screening and distinguishing all the event data according to their data attributes to obtain a plurality of event data sets, comprises:
performing traceability analysis on the event processing request from the user side, and determining program port information of the application program of the user side that initiated the event processing request; and, according to the program port information, searching the background running task directory of the user side to obtain the event data corresponding to the application program;
comparing the data type information of the event data with the historical event data processing records of the user side, and judging whether the data type information exists in the historical event data processing records; if so, dividing the event data into a first event data set; if not, dividing the event data into a second event data set; wherein the first event data set and the second event data set are isolated from each other inside the user side.
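As an illustrative sketch only (not part of the patent text), the screening in step S1 can be expressed as follows; the event attribute names and record structure are assumptions:

```python
from dataclasses import dataclass

@dataclass
class EventData:
    name: str
    data_type: str  # assumed attribute used for screening
    size: int       # data volume, used later for queue ordering

def screen_events(events, history_types):
    """Divide event data into two isolated sets: data types present in
    the user side's historical processing records (first set) versus
    previously unseen data types (second set)."""
    first_set, second_set = [], []
    for ev in events:
        (first_set if ev.data_type in history_types else second_set).append(ev)
    return first_set, second_set

events = [EventData("e1", "text", 120), EventData("e2", "video", 900)]
first, second = screen_events(events, history_types={"text", "image"})
# first holds e1 (known type); second holds e2 (unseen type)
```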
In one embodiment of the present disclosure, in the step S2, determining the working state information of each thread according to a thread work log of a server side; comparing each event data set with all the threads, determining the thread matching each event data set, and directionally transmitting each event data set to its matching thread; and arranging the event data sets received by each thread to obtain event data queues, comprises:
analyzing the thread work log of the server side to obtain the work busy state information and available memory space size information of every thread of the server side, which together serve as the working state information of the threads;
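For illustration, deriving per-thread working state from a work log might look like the sketch below; the log line format is an assumption, since the patent does not specify one:

```python
import re

# Assumed log line format: "<thread-id> <busy|idle> <free-memory-bytes>"
LOG_LINE = re.compile(r"^(\w+)\s+(busy|idle)\s+(\d+)$")

def parse_work_log(log_text):
    """Return {thread_id: (is_busy, available_memory)} parsed from a
    server-side thread work log."""
    state = {}
    for line in log_text.splitlines():
        m = LOG_LINE.match(line.strip())
        if m:
            tid, status, mem = m.groups()
            state[tid] = (status == "busy", int(mem))
    return state

state = parse_work_log("t1 busy 1024\nt2 idle 4096\n")
# state maps t1 to a busy thread with 1024 bytes free, t2 to an idle one
```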
comparing the data type information of the event data belonging to the first event data set with the historical processing data types of the threads of the server side, to determine the thread matching the first event data set;
comparing the data type information of the event data belonging to the second event data set with the data types that the threads of the server side are capable of processing, to determine the thread matching the second event data set;
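The two matching rules above can be sketched as one function; the per-thread attribute names are assumptions:

```python
def match_thread(data_type, threads, use_history):
    """Select a server-side thread for an event data set: the first set
    is matched against each thread's historically processed data types,
    the second set against the data types a thread can process."""
    for tid, info in threads.items():
        pool = info["history_types"] if use_history else info["capable_types"]
        if data_type in pool:
            return tid
    return None

threads = {
    "t1": {"history_types": {"text"}, "capable_types": {"text", "image"}},
    "t2": {"history_types": set(), "capable_types": {"video"}},
}
m1 = match_thread("text", threads, use_history=True)    # matches "t1"
m2 = match_thread("video", threads, use_history=False)  # matches "t2"
```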
According to the position information, at the server side, of the threads matching the first event data set and the second event data set, directionally transmitting the first event data set and the second event data set to their matching threads; and arranging all the event data belonging to the first event data set and the second event data set in order of increasing data volume, to form the first event data queue and the second event data queue respectively.
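The queue-forming rule above (smaller event data first, so short tasks are not stuck behind long ones) reduces to a sort; event data are represented here minimally as (name, data_volume) pairs:

```python
def build_queue(event_set):
    """Arrange a received event data set into an event data queue,
    smallest data volume first, per step S2."""
    return sorted(event_set, key=lambda ev: ev[1])

queue = build_queue([("e3", 500), ("e1", 40), ("e2", 120)])
# queue: [("e1", 40), ("e2", 120), ("e3", 500)]
```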
In one embodiment of the present disclosure, in the step S3, distributing the event data belonging to an event data queue to the thread for processing according to the working state information of the thread, to obtain corresponding event data processing results; and verifying the event data processing results and judging, according to the verification results, whether the corresponding event data need secondary processing, comprises:
according to the work busy state information of the thread: when the thread is in a busy state, no event data belonging to the first event data queue or the second event data queue is distributed to it for processing; when the thread is in an idle state, corresponding event data are selected from the first event data queue or the second event data queue according to the available memory space size of the thread and distributed to the thread for processing, to obtain corresponding event data processing results;
performing garbled-data verification on an event data processing result to obtain its garbled-data ratio; if the garbled-data ratio is greater than or equal to a preset ratio threshold, performing secondary processing on the corresponding event data; if the garbled-data ratio is smaller than the preset ratio threshold, the corresponding event data do not need secondary processing.
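A sketch of the verification rule in step S3; treating non-printable bytes as garbled data is an assumption, since the patent does not define how the garbled-data ratio is computed:

```python
def garbled_ratio(result_bytes):
    """Fraction of bytes that are neither printable ASCII nor common
    whitespace -- a stand-in for the 'garbled-data ratio'."""
    if not result_bytes:
        return 0.0
    bad = sum(1 for b in result_bytes
              if not (32 <= b < 127 or b in (9, 10, 13)))
    return bad / len(result_bytes)

def needs_secondary_processing(result_bytes, threshold=0.1):
    """Secondary processing is required when the garbled-data ratio
    reaches the preset ratio threshold."""
    return garbled_ratio(result_bytes) >= threshold

ok = needs_secondary_processing(b"all good text")         # False
bad = needs_secondary_processing(b"\x00\x01\x02garbled")  # True (3/10 bad)
```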
In one embodiment of the present disclosure, in the step S4, performing event identification on the final event data processing results of the event data belonging to the event data queue and returning them to the user side; and adjusting the memory allocation state of the server side to the thread according to the running state of the thread at the server side, comprises:
identifying the event names of the final event data processing results of the event data belonging to the first event data queue and the second event data queue, then packaging the final event data processing results and returning them to the user side;
judging whether the thread is in a background active state at the server side; if so, keeping the server side's current memory allocation to the thread unchanged; if not, reclaiming the memory the server side allocated to the thread.
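The memory-adjustment rule in step S4 is a simple conditional over an allocation table; the table structure is an assumption:

```python
def adjust_allocations(allocations, active_threads):
    """Reclaim memory from threads that are no longer active in the
    background; allocations for active threads stay unchanged."""
    reclaimed = 0
    for tid in list(allocations):
        if tid not in active_threads:
            reclaimed += allocations.pop(tid)
    return reclaimed

allocs = {"t1": 1024, "t2": 2048, "t3": 512}
freed = adjust_allocations(allocs, active_threads={"t2"})
# freed == 1536; only t2's allocation remains
```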
The invention also provides a multithreading-based event queue processing system, which comprises:
the event data extraction and distinguishing module is used for extracting corresponding event data from a task directory of the user side according to an event processing request from the user side; and screening and distinguishing all the event data according to their data attributes to obtain a plurality of event data sets;
the thread working state determining module is used for determining the working state information of each thread according to a thread work log of the server side;
the event data queue generating module is used for comparing each event data set with all the threads, determining the thread matching each event data set, and directionally transmitting each event data set to its matching thread; and arranging the event data sets received by each thread to obtain event data queues;
the event data processing execution module is used for distributing the event data belonging to an event data queue to the thread for processing according to the working state information of the thread, to obtain corresponding event data processing results;
the event data processing result verification module is used for verifying the event data processing results and judging, according to the verification results, whether the corresponding event data need secondary processing;
the event data processing result identification and return module is used for performing event identification on the final event data processing results of the event data belonging to the event data queue and then returning them to the user side;
and the thread memory allocation adjustment module is used for adjusting the memory allocation state of the server side to the thread according to the running state of the thread at the server side.
In one embodiment of the present disclosure, the event data extraction and distinguishing module is configured to extract corresponding event data from a task directory of the user side according to an event processing request from the user side, and to screen and distinguish all the event data according to their data attributes to obtain a plurality of event data sets, which comprises:
performing traceability analysis on the event processing request from the user side, and determining program port information of the application program of the user side that initiated the event processing request; and, according to the program port information, searching the background running task directory of the user side to obtain the event data corresponding to the application program;
comparing the data type information of the event data with the historical event data processing records of the user side, and judging whether the data type information exists in the historical event data processing records; if so, dividing the event data into a first event data set; if not, dividing the event data into a second event data set; wherein the first event data set and the second event data set are isolated from each other inside the user side.
In one embodiment of the present disclosure, the thread working state determining module is configured to determine the working state information of each thread according to a thread work log of the server side, which comprises:
analyzing the thread work log of the server side to obtain the work busy state information and available memory space size information of every thread of the server side, which together serve as the working state information of the threads;
The event data queue generating module is configured to compare each event data set with all the threads, determine the thread matching each event data set, directionally transmit each event data set to its matching thread, and arrange the event data sets received by each thread to obtain event data queues, which comprises:
comparing the data type information of the event data belonging to the first event data set with the historical processing data types of the threads of the server side, to determine the thread matching the first event data set;
comparing the data type information of the event data belonging to the second event data set with the data types that the threads of the server side are capable of processing, to determine the thread matching the second event data set;
according to the position information, at the server side, of the threads matching the first event data set and the second event data set, directionally transmitting the first event data set and the second event data set to their matching threads; and arranging all the event data belonging to the first event data set and the second event data set in order of increasing data volume, to form the first event data queue and the second event data queue respectively.
In one embodiment of the present disclosure, the event data processing execution module is configured to distribute the event data belonging to an event data queue to the thread for processing according to the working state information of the thread, to obtain corresponding event data processing results, which comprises:
according to the work busy state information of the thread: when the thread is in a busy state, no event data belonging to the first event data queue or the second event data queue is distributed to it for processing; when the thread is in an idle state, corresponding event data are selected from the first event data queue or the second event data queue according to the available memory space size of the thread and distributed to the thread for processing, to obtain corresponding event data processing results;
The event data processing result verification module is configured to verify the event data processing results and judge, according to the verification results, whether the corresponding event data need secondary processing, which comprises:
performing garbled-data verification on an event data processing result to obtain its garbled-data ratio; if the garbled-data ratio is greater than or equal to a preset ratio threshold, performing secondary processing on the corresponding event data; if the garbled-data ratio is smaller than the preset ratio threshold, the corresponding event data do not need secondary processing.
In one embodiment of the present disclosure, the event data processing result identification and return module is configured to perform event identification on the final event data processing results of the event data belonging to the event data queue and then return them to the user side, which comprises:
identifying the event names of the final event data processing results of the event data belonging to the first event data queue and the second event data queue, then packaging the final event data processing results and returning them to the user side;
The thread memory allocation adjustment module is configured to adjust the memory allocation state of the server side to the thread according to the running state of the thread at the server side, which comprises:
judging whether the thread is in a background active state at the server side; if so, keeping the server side's current memory allocation to the thread unchanged; if not, reclaiming the memory the server side allocated to the thread.
Compared with the prior art, the multithreading-based event queue processing method and system extract event data from the user side and screen them into a plurality of event data sets according to their data attributes, so that the event data are classified and can later be assigned to threads accurately; determine the working state of each thread of the server side, providing a reliable basis for subsequently assigning event data to designated threads; directionally transmit each event data set to its matching thread and arrange the received event data sets into event data queues, improving the efficiency and accuracy of identifying and screening each piece of event data; and distribute the event data belonging to a queue to the thread for processing to obtain event data processing results, which are verified, identified, and returned to the user side. The user side thus obtains accurate and reliable results and can conveniently arrange and combine them, while the memory allocation state of the server side to the threads is adjusted so that memory occupied by threads is released in time and the memory use efficiency of the server side is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for processing a multithreading-based event queue according to the present invention;
FIG. 2 is a schematic diagram of a multithreading-based event queue processing system according to the present invention.
Description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1, a flow chart of a method for processing an event queue based on multithreading according to an embodiment of the invention is shown. The event queue processing method based on the multithreading comprises the following steps:
Step S1, extracting corresponding event data from a task directory of a user side according to an event processing request from the user side; and screening and distinguishing all the event data according to their data attributes to obtain a plurality of event data sets;
Step S2, determining the working state information of each thread according to a thread work log of a server side; comparing each event data set with all the threads, determining the thread matching each event data set, and directionally transmitting each event data set to its matching thread; and arranging the event data sets received by each thread to obtain event data queues;
Step S3, distributing the event data belonging to an event data queue to the thread for processing according to the working state information of the thread, to obtain corresponding event data processing results; verifying the event data processing results, and judging, according to the verification results, whether the corresponding event data need secondary processing;
Step S4, performing event identification on the final event data processing results of the event data belonging to the event data queue, and returning them to the user side; and adjusting the memory allocation state of the server side to the thread according to the running state of the thread at the server side.
From the above, it can be seen that the multithreading-based event queue processing method extracts event data from the user side and screens them into a plurality of event data sets according to their data attributes, so that the event data are classified and can later be assigned to threads accurately; determines the working state of each thread of the server side, providing a reliable basis for subsequently assigning event data to designated threads; directionally transmits each event data set to its matching thread and arranges the received event data sets into event data queues, improving the efficiency and accuracy of identifying and screening each piece of event data; and distributes the event data belonging to a queue to the thread for processing to obtain event data processing results, which are verified, identified, and returned to the user side. The user side thus obtains accurate and reliable results and can conveniently arrange and combine them, while the memory allocation state of the server side to the threads is adjusted so that memory occupied by threads is released in time and the memory use efficiency of the server side is improved.
Preferably, in the step S1, extracting corresponding event data from a task directory of a user side according to an event processing request from the user side, and screening and distinguishing all the event data according to their data attributes to obtain a plurality of event data sets, comprises:
performing traceability analysis on the event processing request from the user side, and determining program port information of the application program of the user side that initiated the event processing request; and, according to the program port information, searching the background running task directory of the user side to obtain the event data corresponding to the application program;
comparing the data type information of the event data with the historical event data processing records of the user side, and judging whether the data type information exists in the historical event data processing records; if so, dividing the event data into a first event data set; if not, dividing the event data into a second event data set; wherein the first event data set and the second event data set are isolated from each other inside the user side.
In this technical scheme, a client such as a smartphone or laptop has different applications installed, each with the authority to initiate event processing requests. When an application needs an event task processed during its operation, it initiates an event processing request for that task, and the client embeds the program port information of the application in the request, which makes it convenient to use the request as a reference for extracting the corresponding event data from the client. In practice, the program port information is obtained from the event processing request and then used as an index to search the client's background running task directory for the event data corresponding to the application's request, ensuring both the accuracy and the efficiency of the search. The event data obtained from the client may contain different data types, and each type may or may not have been processed by the server during historical event processing. Comparing the data type information of the event data against the client's historical event data processing record accurately distinguishes which event data types the server has already processed from which it has not, so that the corresponding first and second event data sets can be generated and each set can be handled in a targeted manner on the server side.
Because the first event data set and the second event data set are isolated from each other inside the client, their event data cannot interfere with one another, avoiding any confusion between event data.
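The classification step described above can be sketched as follows (a minimal illustration; the function and field names are assumptions for this sketch, not part of the patent):

```python
def classify_event_data(events, history_types):
    """Split event data into two mutually isolated sets.

    `events` is assumed to be an iterable of (event_id, data_type) pairs,
    and `history_types` a set of data types found in the client's
    historical event data processing record.
    """
    first_set, second_set = [], []  # kept in separate lists, i.e. isolated
    for event_id, data_type in events:
        if data_type in history_types:
            # type already processed by the server before
            first_set.append((event_id, data_type))
        else:
            # type never processed by the server
            second_set.append((event_id, data_type))
    return first_set, second_set
```

A type that appears in the historical record lands in the first set; an unseen type lands in the second, mirroring the branch described in the text.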
Preferably, in step S2, determining the working state information of each thread according to the thread work log of the server; comparing each event data set against all threads, determining the thread matched to each event data set, and transmitting each event data set directionally to its matched thread; and arranging the event data sets received by the threads into event data queues, comprises:
analyzing the thread work log of the server to obtain the busy/idle state information and available memory space size of every thread of the server, and taking this information as the working state information of the threads;
comparing the data type information of the event data belonging to the first event data set against the historical processing data types of the server's threads to determine the thread matched to the first event data set;
comparing the data type information of the event data belonging to the second event data set against the data types the server's threads are able to process to determine the thread matched to the second event data set;
according to the location information of the matched threads at the server, transmitting the first event data set and the second event data set directionally to their respective matched threads; and arranging all event data belonging to the first event data set and to the second event data set, each in ascending order of data volume, to form the event data queues.
In the above technical scheme, the server may be, but is not limited to, a terminal such as a computer with stronger computing power than the client. The server runs a plurality of threads, each of which works independently and can process event data of corresponding data types. During operation, the server's threads produce a thread work log recording each thread's busy/idle state and available memory space size (i.e., the amount of memory the server can allocate to the thread). Because the data types of the event data in the first event data set have already been processed by the server, comparing their data type information against the historical processing data types of the server's threads identifies the thread that has already processed those data types; comparing the data type information of the event data in the second event data set against the data types the server's threads can process identifies a thread compatible with those data types. Together this ensures that a matching thread is found on the server for every piece of event data.
Transmitting the first and second event data sets directionally to their matched threads, according to the threads' location information at the server, ensures that each thread processes only the event data sets it is suited to; and arranging the event data of each set in ascending order of data volume to form the event data queues standardizes the arrangement of the event data of the different sets, making it easy to locate required event data accurately and quickly later.
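The queue-forming step, ordering a matched thread's received event data by ascending data volume, can be sketched as follows (an illustration only; the `size` key is an assumed stand-in for the data volume of each piece of event data):

```python
def build_event_queue(event_set):
    """Arrange the event data received by a matched thread into a queue
    ordered by ascending data volume, as described above.

    Each element is assumed to be a dict with a 'size' key giving its
    data volume; smaller events come first in the resulting queue.
    """
    return sorted(event_set, key=lambda e: e["size"])
```

The same function would be applied separately to the first and second event data sets, yielding one queue per set.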
Preferably, in step S3, distributing the event data belonging to the event data queue to the threads for processing according to the working state information of the threads to obtain corresponding event data processing results, verifying the event data processing results, and judging from the verification results whether the corresponding event data requires secondary processing, comprises:
according to the busy/idle state information of a thread: when the thread is busy, not distributing event data from the first event data queue or the second event data queue to it; when the thread is idle, selecting event data from the first or second event data queue according to the thread's available memory space size and distributing it to the thread for processing, to obtain a corresponding event data processing result;
performing garbled-data verification on the event data processing result to obtain its garbled-data ratio; if the garbled-data ratio is greater than or equal to a preset ratio threshold, performing secondary processing on the corresponding event data; if the garbled-data ratio is below the preset ratio threshold, the corresponding event data needs no secondary processing.
In this technical scheme, a thread in the busy state cannot receive further event data for processing; forcing it to would cause event data to pile up and block, reducing processing efficiency. A thread in the idle state can receive event data, but its processing capacity depends on its available memory space: the larger the thread's available memory, the larger the volume of event data it can process. Selecting event data of a suitable size from the first or second event data queue according to the thread's available memory and distributing it to the thread therefore ensures timely and efficient processing. The thread's available memory is not fixed; it is adjusted according to the server's own free memory, and the more free memory the server has when idle, the more memory space is allocated to the thread.
In addition, to guarantee the correctness of the event data processing result, garbled-data verification is performed on it. If the garbled-data ratio of the result is greater than or equal to the preset ratio threshold, the corresponding event data is reprocessed until the ratio falls below the threshold, at which point the result is taken as the final event data processing result; when the garbled-data ratio is already below the threshold, the result is taken directly as the final event data processing result.
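The dispatch-and-verify loop above can be sketched as follows (a simplified single-thread illustration; the `busy` and `free_memory` attributes, and the `process` and `garbled_ratio` callables, are all assumptions of this sketch, not APIs defined by the patent):

```python
def dispatch_and_verify(thread, queue, process, garbled_ratio, threshold=0.05):
    """Dispatch one piece of event data to an idle thread and verify it.

    A busy thread receives no new work; an idle thread receives the
    largest queued event that fits in its available memory. The result is
    reprocessed while its garbled-data ratio is at or above `threshold`.
    """
    if thread.busy or not queue:
        return None  # busy thread: do not distribute event data to it
    # pick the largest event that still fits the thread's free memory
    fitting = [e for e in queue if e["size"] <= thread.free_memory]
    if not fitting:
        return None
    event = max(fitting, key=lambda e: e["size"])
    queue.remove(event)
    result = process(event)
    while garbled_ratio(result) >= threshold:
        result = process(event)  # secondary processing until below threshold
    return result  # final event data processing result
```

Note the loop terminates only if reprocessing eventually yields a clean result; a production implementation would cap the retry count.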
Preferably, in step S4, returning the final event data processing results of the event data belonging to the event data queue to the client after performing event identification on them, and adjusting the server's memory allocation to the threads according to the threads' running state at the server, comprises:
labeling the final event data processing results of the event data belonging to the first and second event data queues with their event names, then packaging them and returning them to the client;
judging whether a thread is active in the server background; if so, keeping the server's current memory allocation to the thread unchanged; if not, reclaiming the memory the server allocated to the thread.
In the above technical scheme, labeling the final event data processing results of the first and second event data queues with their event names before packaging and returning them lets the client quickly distinguish and identify the different results. Judging whether a thread is active in the server background, i.e., whether it is processing other event data at the server's background layer, determines what happens next: if it is, the server's current memory allocation to the thread is kept unchanged; if not, the memory allocated to the thread is reclaimed, ensuring that idle memory is released in time and improving the server's memory utilization.
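The memory-reclamation rule can be sketched as follows (an illustration only; the dictionary shapes and the megabyte unit are assumptions of this sketch):

```python
def adjust_thread_memory(thread_active, allocations):
    """Reclaim memory from threads that are no longer active in the
    server background, leaving active threads' allocations unchanged.

    `thread_active` maps a thread id to a background-active flag;
    `allocations` maps a thread id to its allocated memory in MB and is
    modified in place. Returns the total memory reclaimed.
    """
    reclaimed = 0
    for tid, active in thread_active.items():
        if not active and tid in allocations:
            reclaimed += allocations.pop(tid)  # release the idle thread's memory
    return reclaimed
```

Active threads keep their current allocation; only threads with no background work have their memory returned to the server.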
Referring to fig. 2, a schematic framework diagram of a multithreading-based event queue processing system according to an embodiment of the invention is provided. The multithreading-based event queue processing system comprises:
an event data extraction and distinguishing module, for extracting the corresponding event data from a task directory of the client according to an event processing request from the client, and screening and distinguishing all event data into a plurality of event data sets according to their data attributes;
a thread working state determining module, for determining the working state information of each thread according to the thread work log of the server;
an event data queue generating module, for comparing each event data set against all threads, determining the thread matched to each event data set, transmitting each event data set directionally to its matched thread, and arranging the event data sets received by the threads into event data queues;
an event data processing execution module, for distributing the event data belonging to the event data queue to the threads for processing according to the threads' working state information, to obtain corresponding event data processing results;
an event data processing result verification module, for verifying the event data processing results and judging from the verification results whether the corresponding event data requires secondary processing;
an event data processing result identification and return module, for performing event identification on the final event data processing results of the event data belonging to the event data queue and returning them to the client;
and a thread memory allocation adjustment module, for adjusting the server's memory allocation to the threads according to the threads' running state at the server.
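The seven modules above can be wired into a single request pipeline, sketched below; every constructor argument and method name is an assumed stand-in for the corresponding module, since the patent defines the modules, not this API:

```python
class MultithreadedEventQueueSystem:
    """Minimal sketch of the system: one callable per module."""

    def __init__(self, extractor, state_reader, matcher,
                 executor, verifier, labeler, memory_manager):
        self.extractor = extractor            # event data extraction and distinguishing
        self.state_reader = state_reader      # thread working state determining
        self.matcher = matcher                # event data queue generating
        self.executor = executor              # event data processing execution
        self.verifier = verifier              # processing result verification
        self.labeler = labeler                # result identification and return
        self.memory_manager = memory_manager  # thread memory allocation adjustment

    def handle(self, request):
        events = self.extractor(request)
        states = self.state_reader()
        queues = self.matcher(events, states)
        results = [self.executor(q, states) for q in queues]
        verified = [self.verifier(r) for r in results]
        reply = self.labeler(verified)
        self.memory_manager(states)  # reclaim memory from idle threads
        return reply
```

Each stage consumes the output of the previous one, matching the order of the module list above.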
From the above, it can be seen that the multithreading-based event queue processing system extracts event data from a client and screens and distinguishes the event data into a plurality of event data sets according to their data attributes, classifying the event data so that threads can subsequently be assigned to them accurately. It also determines the working state of each thread of the server, providing a reliable basis for subsequently designating threads to process the event data. Each event data set is transmitted directionally to its matched thread, and the received event data are arranged into an event data queue, improving the efficiency and accuracy with which each piece of event data is identified and screened. The event data belonging to the event data queue are distributed to the threads for processing to obtain event data processing results, which are verified, labeled, and returned to the client, so that the client obtains accurate and reliable results and can conveniently arrange and combine them. Finally, the server's memory allocation to the threads is adjusted so that memory occupied by idle threads is released in time, improving the server's memory utilization.
Preferably, the event data extraction and distinguishing module, which extracts the corresponding event data from a task directory of the client according to an event processing request from the client and screens and distinguishes all event data into a plurality of event data sets according to their data attributes, is configured for:
performing traceability analysis on the event processing request from the client to determine the program port information of the client application that initiated the request; and, according to the program port information, searching the client's background running task directory to obtain the event data corresponding to that application;
comparing the data type information of the event data against the client's historical event data processing record and judging whether that data type appears in the record; if so, assigning the event data to a first event data set; if not, assigning it to a second event data set; wherein the first event data set and the second event data set are isolated from each other inside the client.
In this technical scheme, a client such as a smartphone or laptop has different applications installed, each with the authority to initiate event processing requests. When an application needs an event task processed during its operation, it initiates an event processing request for that task, and the client embeds the program port information of the application in the request, which makes it convenient to use the request as a reference for extracting the corresponding event data from the client. In practice, the program port information is obtained from the event processing request and then used as an index to search the client's background running task directory for the event data corresponding to the application's request, ensuring both the accuracy and the efficiency of the search. The event data obtained from the client may contain different data types, and each type may or may not have been processed by the server during historical event processing. Comparing the data type information of the event data against the client's historical event data processing record accurately distinguishes which event data types the server has already processed from which it has not, so that the corresponding first and second event data sets can be generated and each set can be handled in a targeted manner on the server side.
Because the first event data set and the second event data set are isolated from each other inside the client, their event data cannot interfere with one another, avoiding any confusion between event data.
Preferably, the thread working state determining module, which determines the working state information of each thread according to the thread work log of the server, is configured for:
analyzing the thread work log of the server to obtain the busy/idle state information and available memory space size of every thread of the server, and taking this information as the working state information of the threads;
the event data queue generating module, which compares each event data set against all threads, determines the thread matched to each event data set, transmits each event data set directionally to its matched thread, and arranges the event data sets received by the threads into event data queues, is configured for:
comparing the data type information of the event data belonging to the first event data set against the historical processing data types of the server's threads to determine the thread matched to the first event data set;
comparing the data type information of the event data belonging to the second event data set against the data types the server's threads are able to process to determine the thread matched to the second event data set;
according to the location information of the matched threads at the server, transmitting the first event data set and the second event data set directionally to their respective matched threads; and arranging all event data belonging to the first event data set and to the second event data set, each in ascending order of data volume, to form the event data queues.
In the above technical scheme, the server may be, but is not limited to, a terminal such as a computer with stronger computing power than the client. The server runs a plurality of threads, each of which works independently and can process event data of corresponding data types. During operation, the server's threads produce a thread work log recording each thread's busy/idle state and available memory space size (i.e., the amount of memory the server can allocate to the thread). Because the data types of the event data in the first event data set have already been processed by the server, comparing their data type information against the historical processing data types of the server's threads identifies the thread that has already processed those data types; comparing the data type information of the event data in the second event data set against the data types the server's threads can process identifies a thread compatible with those data types. Together this ensures that a matching thread is found on the server for every piece of event data.
Transmitting the first and second event data sets directionally to their matched threads, according to the threads' location information at the server, ensures that each thread processes only the event data sets it is suited to; and arranging the event data of each set in ascending order of data volume to form the event data queues standardizes the arrangement of the event data of the different sets, making it easy to locate required event data accurately and quickly later.
Preferably, the event data processing execution module, which distributes the event data belonging to the event data queue to the threads for processing according to the threads' working state information to obtain corresponding event data processing results, is configured for:
according to the busy/idle state information of a thread: when the thread is busy, not distributing event data from the first event data queue or the second event data queue to it; when the thread is idle, selecting event data from the first or second event data queue according to the thread's available memory space size and distributing it to the thread for processing, to obtain a corresponding event data processing result;
the event data processing result verification module, which verifies the event data processing results and judges from the verification results whether the corresponding event data requires secondary processing, is configured for:
performing garbled-data verification on the event data processing result to obtain its garbled-data ratio; if the garbled-data ratio is greater than or equal to a preset ratio threshold, performing secondary processing on the corresponding event data; if the garbled-data ratio is below the preset ratio threshold, the corresponding event data needs no secondary processing.
In this technical scheme, a thread in the busy state cannot receive further event data for processing; forcing it to would cause event data to pile up and block, reducing processing efficiency. A thread in the idle state can receive event data, but its processing capacity depends on its available memory space: the larger the thread's available memory, the larger the volume of event data it can process. Selecting event data of a suitable size from the first or second event data queue according to the thread's available memory and distributing it to the thread therefore ensures timely and efficient processing. The thread's available memory is not fixed; it is adjusted according to the server's own free memory, and the more free memory the server has when idle, the more memory space is allocated to the thread.
In addition, to guarantee the correctness of the event data processing result, garbled-data verification is performed on it. If the garbled-data ratio of the result is greater than or equal to the preset ratio threshold, the corresponding event data is reprocessed until the ratio falls below the threshold, at which point the result is taken as the final event data processing result; when the garbled-data ratio is already below the threshold, the result is taken directly as the final event data processing result.
Preferably, the event data processing result identification and return module, which returns the final event data processing results of the event data belonging to the event data queue to the client after performing event identification on them, is configured for:
labeling the final event data processing results of the event data belonging to the first and second event data queues with their event names, then packaging them and returning them to the client;
the thread memory allocation adjustment module, which adjusts the server's memory allocation to the threads according to the threads' running state at the server, is configured for:
judging whether a thread is active in the server background; if so, keeping the server's current memory allocation to the thread unchanged; if not, reclaiming the memory the server allocated to the thread.
In the above technical scheme, labeling the final event data processing results of the first and second event data queues with their event names before packaging and returning them lets the client quickly distinguish and identify the different results. Judging whether a thread is active in the server background, i.e., whether it is processing other event data at the server's background layer, determines what happens next: if it is, the server's current memory allocation to the thread is kept unchanged; if not, the memory allocated to the thread is reclaimed, ensuring that idle memory is released in time and improving the server's memory utilization.
As can be seen from the above embodiments, the multithreading-based event queue processing method and system extract event data from a client and screen and distinguish the event data into a plurality of event data sets according to their data attributes, classifying the event data so that threads can subsequently be assigned to them accurately. They also determine the working state of each thread of the server, providing a reliable basis for subsequently designating threads to process the event data. Each event data set is transmitted directionally to its matched thread, and the received event data are arranged into an event data queue, improving the efficiency and accuracy with which each piece of event data is identified and screened. The event data belonging to the event data queue are distributed to the threads for processing to obtain event data processing results, which are verified, labeled, and returned to the client, so that the client obtains accurate and reliable results and can conveniently arrange and combine them. Finally, the server's memory allocation to the threads is adjusted so that memory occupied by idle threads is released in time, improving the server's memory utilization.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. The event queue processing method based on the multithreading is characterized by comprising the following steps of:
step S1, extracting corresponding event data from a task catalog of a user side according to an event processing request from the user side; screening and distinguishing all event data according to the data attribute of the event data to obtain a plurality of event data sets;
comparing the data type information of the event data with the historical event data processing record of the user side, and judging whether the data type information exists in the historical event data processing record or not; if so, dividing the event data into a first event data set; if not, dividing the event data into a second event data set; wherein the first event data set and the second event data set are isolated from each other inside the client;
step S2, according to the thread work log of the server side, determining the work state information of each thread; comparing each event data set with all threads, determining the threads corresponding to the matching of each event data set, and directionally transmitting each event data set to the corresponding matching threads; the event data sets received by the threads are arranged to obtain event data queues;
Comparing data type information corresponding to event data subordinate to the first event data set with historical processing data types of threads of the server side, and determining threads corresponding to and matched with the first event data set; comparing the data type information corresponding to the event data subordinate to the second event data set with the data types which can be processed by the threads of the server side, and determining the threads corresponding to and matched with the second event data set;
step S3, distributing the event data subordinate to the event data queue to the thread for processing according to the working state information of the thread, and obtaining a corresponding event data processing result; verifying the event data processing result, and judging whether secondary processing is required to be performed on corresponding event data according to the verification result;
step S4, after event identification is carried out on the final event data processing result of the event data subordinate to the event data queue, the final event data processing result is returned to the user side; and adjusting the memory allocation state of the service end to the thread according to the running state of the thread at the service end.
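The screening in step S1 of the claim above can be sketched minimally as follows. The patent gives no implementation, so the dict-based event records and function name are hypothetical.

```python
# Minimal sketch of the step-S1 screening in claim 1: an event whose data
# type already appears in the user side's historical processing record goes
# to the first set, otherwise to the second. Names are hypothetical.
def classify_events(events, history):
    """Split events into (first_set, second_set) by data-type history."""
    first_set, second_set = [], []
    for event in events:
        if event["type"] in history:
            first_set.append(event)   # type was processed before
        else:
            second_set.append(event)  # previously unseen type
    return first_set, second_set
```

Keeping the two sets in separate lists mirrors the claim's requirement that they stay isolated from each other inside the client.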
2. The multithreading-based event queue processing method of claim 1, wherein:
in the step S1, according to an event processing request from a user side, extracting corresponding event data from a task directory of the user side, including:
performing traceability analysis on an event processing request from a user side, and determining program port information of an application program of the user side initiating the event processing request; and according to the program port information, carrying out data searching processing on the background operation task catalog of the user side to obtain event data corresponding to the application program.
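The traceability step in the claim above amounts to two lookups: request port to application, then application to its background task-directory entries. A hypothetical sketch, with all names and data shapes assumed:

```python
# Hypothetical sketch of the traceability step in claim 2: resolve the
# request's program port to an application, then look that application up
# in the background task directory. All names are illustrative.
def extract_event_data(request, port_to_app, task_directory):
    """Trace a request to its source app and return that app's event data."""
    app = port_to_app.get(request["port"])   # traceability analysis result
    if app is None:
        return []                            # unknown origin: nothing found
    return task_directory.get(app, [])       # data search in the task catalog
```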
3. The multithreading-based event queue processing method of claim 2, wherein:
in the step S2, determining the working state information of each thread according to the thread work log of the server side, including:
analyzing a thread work log of a server to obtain respective work busy state information and available memory space size information of all threads of the server, wherein the work busy state information and available memory space size information are used as the work state information of the threads;
in the step S2, the arrangement of the event data set received by the thread to obtain an event data queue includes:
According to the position information of the threads corresponding to the matching of the first event data set and the second event data set at the server, respectively and directionally transmitting the first event data set and the second event data set to the threads corresponding to the matching; and according to the sequence that the data quantity of all the event data subordinate to the first event data set or the second event data set received by the thread is from small to large, respectively arranging all the event data subordinate to the first event data set and the second event data set to form an event data queue.
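The queue formation described above orders the events of a set by data quantity from small to large; a one-line sketch, with the dict layout assumed for illustration:

```python
# Sketch of the queue formation in claim 3: within a received event data
# set, events are arranged by data quantity from small to large. The dict
# layout is assumed for illustration.
def build_event_queue(event_set):
    """Arrange an event data set into a queue, smallest data first."""
    return sorted(event_set, key=lambda event: len(event["data"]))
```

Python's `sorted` is stable, so events of equal size keep their arrival order.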
4. The multithreading-based event queue processing method of claim 3, wherein:
in the step S3, according to the working state information of the thread, distributing the event data subordinate to the event data queue to the thread for processing, so as to obtain a corresponding event data processing result; verifying the event data processing result, and judging whether secondary processing is required for corresponding event data according to the verification result, wherein the method comprises the following steps:
according to the work busy state information of the thread, when the thread is in a work busy state, event data subordinate to the first event data queue and the second event data queue are not distributed to the thread for processing; when the thread is in a working idle state, according to the size information of the available memory space of the thread, corresponding event data is selected from the first event data queue or the second event data queue to be distributed to the thread for processing, and a corresponding event data processing result is obtained;
performing garbled-data verification on the event data processing result to obtain a garbled-data ratio of the event data processing result; if the garbled-data ratio is greater than or equal to a preset ratio threshold, performing secondary processing on the corresponding event data; if the garbled-data ratio is smaller than the preset ratio threshold, the corresponding event data does not need to be processed a second time.
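The claim does not define how garbled data is detected; one plausible sketch treats U+FFFD replacement characters (produced when bytes fail to decode) as the garbled portion. That detection rule, and the default threshold, are assumptions.

```python
# Sketch of the verification step in claim 4, assuming garbled characters
# show up as U+FFFD replacement characters; the patent does not define how
# garbled data is detected, so this detection rule is an assumption.
REPLACEMENT_CHAR = "\ufffd"

def needs_secondary_processing(result_text, threshold=0.1):
    """True when the garbled-data ratio reaches the preset threshold."""
    if not result_text:
        return False
    garbled = sum(1 for ch in result_text if ch == REPLACEMENT_CHAR)
    return garbled / len(result_text) >= threshold
```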
5. The multithreading-based event queue processing method of claim 4, wherein:
in the step S4, after event identification is performed on a final event data processing result of the event data subordinate to the event data queue, the final event data processing result is returned to the user terminal; and according to the running state of the thread at the server, adjusting the memory allocation state of the server to the thread, including:
after the event names of the final event data processing results of the event data subordinate to the first event data queue and the second event data queue are identified, packaging the final event data processing results and returning the final event data processing results to the user side;
judging whether the thread is in a background active state at the server side, if so, keeping the current memory allocation state of the server side to the thread unchanged; if not, the memory allocated to the thread by the server is recovered.
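The memory adjustment in the claim above can be sketched over hypothetical per-thread records (not real OS threads); the `background_active` and `memory_mb` fields are illustrative assumptions.

```python
# Sketch of the memory adjustment in claim 5: threads still active in the
# background keep their allocation; idle ones have it reclaimed. The thread
# records are hypothetical dicts, not real OS threads.
def adjust_memory(thread_records):
    """Reclaim memory from threads no longer active in the background."""
    reclaimed = 0
    for rec in thread_records:
        if not rec["background_active"]:
            reclaimed += rec["memory_mb"]
            rec["memory_mb"] = 0   # server takes the allocation back
    return reclaimed
```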
6. A multithreading-based event queue processing system, comprising:
the event data extraction and distinguishing module is used for extracting corresponding event data from a task catalog of the user side according to an event processing request from the user side; screening and distinguishing all event data according to the data attribute of the event data to obtain a plurality of event data sets; comparing the data type information of the event data with the historical event data processing record of the user side, and judging whether the data type information exists in the historical event data processing record or not; if so, dividing the event data into a first event data set; if not, dividing the event data into a second event data set; wherein the first event data set and the second event data set are isolated from each other inside the client;
the thread working state determining module is used for determining the working state information of each thread according to the thread working log of the server side;
the event data queue generating module is used for comparing each event data set with all threads, determining the corresponding matched thread of each event data set and directionally transmitting each event data set to the corresponding matched thread; the event data sets received by the threads are arranged to obtain event data queues; comparing data type information corresponding to event data subordinate to the first event data set with historical processing data types of threads of the server side, and determining threads corresponding to and matched with the first event data set; comparing the data type information corresponding to the event data subordinate to the second event data set with the data types which can be processed by the threads of the server side, and determining the threads corresponding to and matched with the second event data set;
The event data processing execution module is used for distributing the event data subordinate to the event data queue to the thread for processing according to the working state information of the thread to obtain a corresponding event data processing result;
the event data processing result verification module is used for verifying the event data processing result and judging whether the corresponding event data is required to be processed for the second time according to the verification result;
the event data processing result identification and return module is used for returning the final event data processing result to the user side after carrying out event identification on the final event data processing result of the event data subordinate to the event data queue;
and the thread memory allocation adjustment module is used for adjusting the memory allocation state of the service end to the thread according to the running state of the thread at the service end.
7. The multithreading-based event queue processing system of claim 6, wherein:
the event data extraction and distinguishing module is used for extracting corresponding event data from a task catalog of a user side according to an event processing request from the user side, and comprises the following steps:
Performing traceability analysis on an event processing request from a user side, and determining program port information of an application program of the user side initiating the event processing request; and according to the program port information, carrying out data searching processing on the background operation task catalog of the user side to obtain event data corresponding to the application program.
8. The multithreading-based event queue processing system of claim 7, wherein:
the thread working state determining module is used for determining the working state information of each thread according to the thread working log of the server side, and comprises the following steps:
analyzing a thread work log of a server to obtain respective work busy state information and available memory space size information of all threads of the server, wherein the work busy state information and available memory space size information are used as the work state information of the threads;
the event data queue generating module is configured to rank the event data set received by the thread to obtain an event data queue, and includes:
according to the position information of the threads corresponding to the matching of the first event data set and the second event data set at the server, respectively and directionally transmitting the first event data set and the second event data set to the threads corresponding to the matching; and according to the sequence that the data quantity of all the event data subordinate to the first event data set or the second event data set received by the thread is from small to large, respectively arranging all the event data subordinate to the first event data set and the second event data set to form an event data queue.
9. The multithreading-based event queue processing system of claim 8, wherein:
the event data processing execution module is configured to distribute, according to the working state information of the thread, event data subordinate to the event data queue to the thread for processing, to obtain a corresponding event data processing result, and includes:
according to the work busy state information of the thread, when the thread is in a work busy state, event data subordinate to the first event data queue and the second event data queue are not distributed to the thread for processing; when the thread is in a working idle state, according to the size information of the available memory space of the thread, corresponding event data is selected from the first event data queue or the second event data queue to be distributed to the thread for processing, and a corresponding event data processing result is obtained;
the event data processing result verification module is configured to verify the event data processing result, and determine whether secondary processing is required for the corresponding event data according to the verification result, where the event data processing result verification module includes:
performing garbled-data verification on the event data processing result to obtain a garbled-data ratio of the event data processing result; if the garbled-data ratio is greater than or equal to a preset ratio threshold, performing secondary processing on the corresponding event data; if the garbled-data ratio is smaller than the preset ratio threshold, the corresponding event data does not need to be processed a second time.
10. The multithreading-based event queue processing system of claim 9, wherein:
the event data processing result identification and return module is configured to return a final event data processing result of event data subordinate to the event data queue to the user side after performing event identification on the final event data processing result, and includes:
after the event names of the final event data processing results of the event data subordinate to the first event data queue and the second event data queue are identified, packaging the final event data processing results and returning the final event data processing results to the user side;
the thread memory allocation adjustment module is configured to adjust a memory allocation state of the server to the thread according to an operation state of the thread at the server, and includes:
judging whether the thread is in a background active state at the server side, if so, keeping the current memory allocation state of the server side to the thread unchanged; if not, the memory allocated to the thread by the server is recovered.
CN202311048068.2A 2023-08-21 2023-08-21 Event queue processing method and system based on multithreading Active CN116755891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311048068.2A CN116755891B (en) 2023-08-21 2023-08-21 Event queue processing method and system based on multithreading

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311048068.2A CN116755891B (en) 2023-08-21 2023-08-21 Event queue processing method and system based on multithreading

Publications (2)

Publication Number Publication Date
CN116755891A CN116755891A (en) 2023-09-15
CN116755891B true CN116755891B (en) 2023-10-20

Family

ID=87953741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311048068.2A Active CN116755891B (en) 2023-08-21 2023-08-21 Event queue processing method and system based on multithreading

Country Status (1)

Country Link
CN (1) CN116755891B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117472593B (en) * 2023-12-27 2024-03-22 中诚华隆计算机技术有限公司 Method and system for distributing resources among multiple threads
CN117873733B (en) * 2024-03-11 2024-05-17 成都中科合迅科技有限公司 Multi-scene-oriented micro-service switching operation control method and system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102667718A (en) * 2009-10-30 2012-09-12 国际商业机器公司 Method and system for processing network events
CN103491190A (en) * 2013-09-30 2014-01-01 国家电网公司 Processing method for large-scale real-time concurrent charger monitoring data
US10025645B1 (en) * 2017-02-28 2018-07-17 Mediasift Limited Event Processing System
EP3968183A1 (en) * 2020-09-14 2022-03-16 Seventy Nine Three Luxembourg S.A. Multi-threaded asset data processing architecture
CN114928579A (en) * 2021-02-01 2022-08-19 腾讯科技(深圳)有限公司 Data processing method and device, computer equipment and storage medium
CN116401058A (en) * 2016-08-29 2023-07-07 慧与发展有限责任合伙企业 Associating working sets and threads

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8910171B2 (en) * 2009-04-27 2014-12-09 Lsi Corporation Thread synchronization in a multi-thread network communications processor architecture

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN102667718A (en) * 2009-10-30 2012-09-12 国际商业机器公司 Method and system for processing network events
CN103491190A (en) * 2013-09-30 2014-01-01 国家电网公司 Processing method for large-scale real-time concurrent charger monitoring data
CN116401058A (en) * 2016-08-29 2023-07-07 慧与发展有限责任合伙企业 Associating working sets and threads
US10025645B1 (en) * 2017-02-28 2018-07-17 Mediasift Limited Event Processing System
EP3968183A1 (en) * 2020-09-14 2022-03-16 Seventy Nine Three Luxembourg S.A. Multi-threaded asset data processing architecture
CN114928579A (en) * 2021-02-01 2022-08-19 腾讯科技(深圳)有限公司 Data processing method and device, computer equipment and storage medium

Non-Patent Citations (4)

Title
A Comparison of Application-Level Fault Tolerance Schemes for Task Pools; Jonas Posner et al.; Future Generation Computer Systems; vol. 105; pp. 119-134 *
Intelligent queue management of open vSwitch in multi-tenant data center; Huihui Ma et al.; Future Generation Computer Systems; vol. 144; pp. 50-62 *
Research and Implementation of a Hadoop-based Distributed Monitoring Platform; Zhou Rujun; China Master's Theses Full-text Database, Information Science and Technology Series (No. 12); I140-403 *
Performance Analysis and Research of the Esper Data Stream Management System on Multi-core Platforms; Zhang Huaqing; China Master's Theses Full-text Database, Information Science and Technology Series (No. 5); I138-46 *

Also Published As

Publication number Publication date
CN116755891A (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN116755891B (en) Event queue processing method and system based on multithreading
WO2021139438A1 (en) Big data resource processing method and apparatus, and terminal and storage medium
CN111506498A (en) Automatic generation method and device of test case, computer equipment and storage medium
CN110232010A (en) A kind of alarm method, alarm server and monitoring server
CN112667376A (en) Task scheduling processing method and device, computer equipment and storage medium
CN105224434B (en) Use the machine learning identification software stage
CN111813573B (en) Communication method of management platform and robot software and related equipment thereof
CN111563014A (en) Interface service performance test method, device, equipment and storage medium
WO2017113774A1 (en) Method and device for judging user priority in wireless communication system
CN110147470B (en) Cross-machine-room data comparison system and method
CN111625342B (en) Data tracing method, device and server
CN111651595A (en) Abnormal log processing method and device
CN112631731A (en) Data query method and device, electronic equipment and storage medium
CN111124791A (en) System testing method and device
CN115439928A (en) Operation behavior identification method and device
CN109697155B (en) IT system performance evaluation method, device, equipment and readable storage medium
CN112579552A (en) Log storage and calling method, device and system
CN113297249A (en) Slow query statement identification and analysis method and device and query statement statistical method and device
CN107168788A (en) The dispatching method and device of resource in distributed system
CN112506791A (en) Application program testing method and device, computer equipment and storage medium
CN112416558A (en) Service data processing method and device based on block chain and storage medium
CN114116811B (en) Log processing method, device, equipment and storage medium
CN109767546B (en) Quality checking and scheduling device and quality checking and scheduling method for valuable bills
CN115525392A (en) Container monitoring method and device, electronic equipment and storage medium
CN115016890A (en) Virtual machine resource allocation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant