CN108509257B - Message processing method and device based on multithreading - Google Patents

Message processing method and device based on multithreading Download PDF

Info

Publication number
CN108509257B
CN108509257B (application CN201710112550.6A)
Authority
CN
China
Prior art keywords
thread
message
processing
queue
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710112550.6A
Other languages
Chinese (zh)
Other versions
CN108509257A (en)
Inventor
沈林杰
闫猛
范浩雍
李安平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SuningCom Co ltd
Original Assignee
SuningCom Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SuningCom Co ltd filed Critical SuningCom Co ltd
Priority to CN201710112550.6A
Publication of CN108509257A
Application granted
Publication of CN108509257B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087Inventory or stock management, e.g. order filling, procurement or balancing against orders

Abstract

An embodiment of the invention discloses a multithreading-based message processing method and device, relates to the technical field of digital logistics, and can alleviate the problems caused by inter-thread communication in the prior art. The invention comprises the following steps: receiving a message sent by an embedded system, and generating a message according to the analysis result of the received message, wherein the generated message corresponds to a to-be-processed transaction, and the to-be-processed transaction comprises at least one step; adding the generated message into a thread safety queue, wherein the thread safety queue comprises a data public access area of a specified size through which each thread accesses data in the thread safety queue; extracting messages from the thread safety queue and assigning each to a currently idle thread for processing, wherein one thread is used for processing one step included in the to-be-processed transaction; and sending the obtained processing result to the embedded system. The invention is suitable for high-concurrency message processing for embedded systems.

Description

Message processing method and device based on multithreading
Technical Field
The invention relates to the technical field of digital logistics, and in particular to a multithreading-based message processing method and device.
Background
At present, in the digital management of logistics warehouses, digital management systems such as automatic task distribution systems and message management systems are generally adopted to assign operations such as shelving, unshelving, picking, packing, and sorting to warehouse workers. The purpose is to bring the operating details of each worker under digital management and thereby improve the management efficiency of the logistics warehouse.
Moreover, as large logistics warehouses are gradually established and the degree of automation rises, the number of tasks that the digital management system must process and distribute grows explosively, which requires improving the system's ability to process many tasks in parallel. In currently adopted schemes for parallel task processing, messages are generally transmitted in a synchronous or asynchronous operation mode. Because the timeliness of each task must be guaranteed first, multiple threads are allocated to each task to improve its processing efficiency. As the number of tasks grows, increasing the parallel processing capability of the digital management system means adding more CPUs and memory to the system's hardware, expanding computing resources to create more processing threads and raise task-processing concurrency.
Consequently, the hardware configuration and computing capacity of the system must be upgraded; that is, the operator of the logistics warehouse must purchase more hardware, and the construction and operating costs of the warehouse rise sharply. Moreover, as the system's hardware scale grows, the need to manage communication among massive numbers of threads causes frequent abnormalities such as task delay, expiry, and interruption, making the timeliness of task processing difficult to guarantee. The digital management capability of the logistics warehouse is therefore limited, and the warehouse's scale is difficult to expand further.
Disclosure of Invention
Embodiments of the present invention provide a multithreading-based message processing method and apparatus, which can alleviate the problems caused by inter-thread communication in the prior art.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
in a first aspect, an embodiment of the present invention provides a method, including:
receiving a message sent by an embedded system, and generating a message according to an analysis result of the message, wherein the generated message corresponds to a transaction to be processed, and the transaction to be processed comprises at least one step;
adding the generated message into a thread safety queue, wherein the thread safety queue comprises a data public access area with a specified size, and the data public access area is used for accessing data in the thread safety queue by each thread;
extracting messages from the thread safety queue and assigning each to a currently idle thread for processing, wherein one thread is used for processing one step included in the to-be-processed transaction;
and sending the obtained processing result to the embedded system.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the number of threads is an integer multiple of the number of steps of the to-be-processed transaction, and one step is processed by at least one thread.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, the method further includes:
receiving at least 2 messages sent by the embedded system concurrently;
and generating a message aiming at each received message, and adding the steps corresponding to the generated message into the processing queue of each thread respectively.
With reference to the first possible implementation manner of the first aspect, in a third possible implementation manner, the method further includes:
and generating a message aiming at each received message, and storing a processing result of one step into the thread safety queue after the step is processed by the thread.
With reference to the first aspect or the third possible implementation manner of the first aspect, in a fourth possible implementation manner, the method further includes:
generating a message aiming at each received message, and if a thread needs to read the processing result of another step when processing one step, inquiring whether the processing result of another step exists in the thread safety queue;
and if so, reading the processing result of the other step from the thread safety queue.
With reference to the first aspect or the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner, the method further includes:
and if the processing result of the other step cannot be read in the thread safety queue, suspending the thread for processing the one step until the processing result of the other step is successfully read from the thread safety queue.
In a second aspect, an embodiment of the present invention provides an apparatus, including:
the receiving module is used for receiving a message sent by the embedded system and generating a message according to the analysis result of the message, wherein the generated message corresponds to a transaction to be processed, and the transaction to be processed comprises at least one step;
the queue module is used for adding the generated message into a thread safety queue, wherein the thread safety queue comprises a data public access area of a specified size through which each thread accesses data in the thread safety queue;
a processing module, configured to extract a packet from the thread safety queue and assign it to a currently idle thread for processing, where one thread is used to process one step included in the to-be-processed transaction;
and the sending module is used for sending the obtained processing result to the embedded system.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the receiving module is specifically configured to receive at least 2 messages concurrently sent by the embedded system;
the processing module is specifically configured to generate a message for each received message, and add the step corresponding to the generated message to the processing queue of each thread;
the number of the threads is integral multiple of the number of the steps of the transaction to be processed, and one step is processed by at least one thread.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the processing module is further configured to generate a packet for each received message, and store a processing result of one step into the thread safety queue after the step is processed by the thread.
With reference to the first possible implementation manner of the second aspect, in a third possible implementation manner, the processing module is further configured to generate a packet for each received message, and if a thread needs to read the processing result of another step when processing one step, query through the queue module whether the processing result of the other step exists in the thread safety queue;
if yes, reading the processing result of the other step from the thread safety queue; and if the processing result of the other step cannot be read in the thread safety queue, suspending the thread for processing the one step until the processing result of the other step is successfully read from the thread safety queue.
For tasks that may share the same steps but differ in the dependency relationships among those steps, the multithreading-based message processing method and device provided by the embodiments of the invention allow the thread handling a later step in a dependency relationship to query the thread safety queue for the execution result of the thread that handled the earlier step, while steps without dependencies can be executed directly. The embodiment can be applied to PLC communication, with receiving/sending threads separated from worker threads. By opening up a public memory space (the thread safety queue), cross-thread communication is reduced, the efficiency of message processing is improved, and the problems caused by inter-thread communication in the prior art are alleviated.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic diagram of a possible system architecture according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method according to an embodiment of the present invention;
fig. 3 and 4 are schematic diagrams of data transmission processes in possible embodiments provided by the embodiment of the present invention;
fig. 5 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the present invention is further described in detail with reference to the accompanying drawings and the detailed description below. Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are exemplary only for explaining the present invention and are not construed as limiting the present invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The method flow in this embodiment may be executed in a system as shown in fig. 1, which includes: a server, and an embedded system connected to the server, such as a PLC (Programmable Logic Controller) system.
At the hardware level, the server may be implemented as a single server device, or as a server cluster system for data processing composed of multiple servers; the PLC system can be built using commonly available PLC equipment.
The threads described in this embodiment depend on a process running on the server. In the server device, the smallest computing unit scheduled by the operating system is a thread; a process represents a single execution context and may include multiple threads, and the number of threads in a process can be scaled according to the actual application scenario. For example, the threads may run inside the WCS program process on the server device.
An embodiment of the present invention provides a message processing method based on multiple threads, as shown in fig. 2, including:
and S1, receiving the message sent by the embedded system, and generating a message according to the analysis result of the message.
In this embodiment, the embedded system may specifically be a PLC system as shown in fig. 1, and the message sent by the embedded system may specifically be a socket message. The process of this embodiment may be executed by a WCS (Warehouse Control System) running on the server shown in fig. 1: the WCS establishes a socket connection with the PLC, obtains the message sent by the PLC system, and then analyzes it to generate a message. The generated message corresponds to a to-be-processed transaction, and the to-be-processed transaction comprises at least one step.
And S2, adding the generated message into a thread safety queue.
The thread safety queue includes a data public access area of a specified size through which each thread accesses data in the queue. This access area may be part of the memory space of the server device shown in fig. 1, so that threads running on the server can read data in the thread safety queue directly from memory.
And S3, extracting the message from the thread safety queue, and adding the current idle thread for processing.
One thread is used to process one step included in the to-be-processed transaction. The transactions described in this embodiment may include the operation-instruction-based transactions carried in the PLC system in a warehouse management scenario, such as warehouse entry, warehouse exit, and stock movement; the server can confirm the transaction according to the received message, and the to-be-processed transaction may specifically be any one of these transactions.
The message described in this embodiment may specifically be a trigger message used to trigger a certain transaction to start executing; alternatively, the message may contain data to be processed, which is imported into the corresponding transaction for processing in order to obtain a processing result for the data in the message.
The steps described in this embodiment can be understood as the individual links of a transaction's processing flow. For example, preset transactions can be stored in the server as program code written according to a specific business process: each step of a transaction is represented as a code segment, and recorded in that code segment is the identification code of the thread that must execute it, so that the code segment of a step is dispatched to the thread identified by that code. Specifically, a receiving thread and a sending thread may be configured, where the receiving thread analyzes a received message and determines which transaction it corresponds to. If the code segment of one step of a transaction includes code that triggers the sending thread, then after that step is executed its result is passed to the sending thread and output through it.
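The dispatch of step code segments by thread identification code can be sketched as follows. This is a hypothetical Python illustration (the patent specifies no language); the transaction table, step names, and thread ids are all invented for the example:

```python
import queue

# Hypothetical sketch: a preset transaction is a list of (step_name, thread_id)
# pairs, where thread_id is the identification code recorded in the step's
# code segment. All names here are illustrative, not from the patent.
TRANSACTIONS = {
    "putaway": [("unload", 1), ("shelve", 2), ("confirm", 3)],
}

# One processing queue per worker thread, keyed by identification code.
thread_queues = {1: queue.Queue(), 2: queue.Queue(), 3: queue.Queue()}

def receive(message):
    """Acts as the receiving thread: determine which transaction the parsed
    message corresponds to, then enqueue each step into the processing queue
    of the thread named by its identification code."""
    steps = TRANSACTIONS[message["transaction"]]
    for step_name, thread_id in steps:
        thread_queues[thread_id].put((message["id"], step_name))
    return len(steps)
```

A worker thread with identification code 2, for instance, would then drain `thread_queues[2]` and run only the "shelve" kind of step.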
In this embodiment, the data in the thread safety queue includes the intermediate results produced when threads process some of the steps divided out of the to-be-processed transaction corresponding to the generated message. For example: transaction A requires worker threads 1, 2, and 3, where threads 1, 2, and 3 process steps 1, 2, and 3 of transaction A respectively. If thread 2 requires the result of thread 1 (the result of step 1 can be understood as an "intermediate result"), thread 2 is suspended during execution until thread 1 finishes processing the corresponding data; thread 2 then reads thread 1's intermediate result from the thread safety queue and continues executing on the basis of it.
The intermediate data to be passed between steps is stored in the thread safety queue, so no direct inter-thread communication is needed; steps without such dependencies may not use the thread safety queue at all during execution. For example, if step 1 triggers a logging process and step 2 triggers a message-forwarding process, step 2 may be executed first and its message output through the sending thread while step 1 is still waiting for a thread to be allocated.
In this embodiment, as shown in fig. 3, the WCS system adds a message into the thread safety queue and stores the intermediate result obtained after each step in a static memory area of the server, so that one thread passes a step's processing result to another thread through the thread safety queue.
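A minimal sketch of the thread safety queue as a common data access area is given below. This is an assumed implementation in Python, not the patent's: the class name, the keying of intermediate results by (transaction, step), and the capacity parameter standing in for the "specified size" are all illustrative:

```python
import threading

class ThreadSafeResultStore:
    """Hypothetical sketch of the thread safety queue: a bounded common
    access area where worker threads publish intermediate step results and
    dependent threads read them, with no direct inter-thread messaging."""

    def __init__(self, capacity=1024):
        self._cond = threading.Condition()
        self._results = {}            # (transaction_id, step) -> result
        self._capacity = capacity     # the "specified size" of the area

    def put(self, transaction_id, step, result):
        with self._cond:
            if len(self._results) >= self._capacity:
                raise RuntimeError("common access area full")
            self._results[(transaction_id, step)] = result
            self._cond.notify_all()   # wake threads waiting on this result

    def get(self, transaction_id, step, timeout=None):
        with self._cond:
            # Suspend until the producing thread has stored the result.
            ok = self._cond.wait_for(
                lambda: (transaction_id, step) in self._results, timeout)
            if not ok:
                raise TimeoutError("result of dependent step not available")
            return self._results[(transaction_id, step)]
```

Here thread 1 would call `put("A", 1, ...)` after finishing step 1, and thread 2's `get("A", 1)` waits on the condition variable until that intermediate result appears.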
And S4, sending the obtained processing result to the embedded system.
In this embodiment, for tasks that may share the same steps but differ in the dependencies among those steps, the thread handling a later step in a dependency relationship may query the thread safety queue for the execution result of the thread that handled the earlier step, while steps without dependencies may be executed directly. The embodiment can be applied to PLC communication, with receiving/sending threads separated from worker threads. By opening up a public memory space (the thread safety queue), cross-thread communication is reduced, the efficiency of message processing is improved, and the problems caused by inter-thread communication in the prior art are alleviated.
In this embodiment, the number of threads is an integer multiple of the number of steps of the to-be-processed transaction, and one step is processed by at least one thread. For parallel processing of multiple messages, the method specifically includes:
and receiving at least 2 messages which are concurrently sent by the embedded system.
And generating a message aiming at each received message, and adding the steps corresponding to the generated message into the processing queue of each thread respectively.
Wherein, a packet is generated for each received message; after a step is processed by a thread, the processing result of that step is stored in the thread safety queue. For example, as shown in fig. 4, the main process running in the WCS system receives data requests (which may be referred to as transactions) to be processed by the threads it fetches; each transaction is composed of multiple steps, and transactions are not all identical. The mode of operation resembles a pipeline: one worker thread processes one kind of step in a transaction. The total number of worker threads is the total number of distinct steps across all the different transactions, or an integer multiple of the number of steps of the to-be-processed transaction. During each transaction's processing, its steps are queued for processing in the corresponding worker threads. A thread then takes a message from the safety queue, processes it, and adds the result to the feedback queue; the sending/feedback thread takes messages from the send queue and sends them to embedded systems such as the PLC. One task is thus split across multiple threads, each thread running only one kind of task step, so that multiple tasks and multiple threads are handled simultaneously. Especially when there are many repeated tasks, every thread is guaranteed to have work to do, and the prior-art pattern of executing one task to completion before starting the next is no longer needed.
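The pipeline described above, with one worker thread per kind of step and a feedback queue for the sending thread, can be sketched like this. The two step names (pick, pack), their handlers, and the shutdown convention are assumptions made for the example, not details from the patent:

```python
import queue
import threading

def run_pipeline(orders):
    """Minimal pipeline sketch: each worker thread drains its own processing
    queue, runs its one kind of step, and forwards the result either to the
    next step's queue or to the feedback queue read by the sending thread."""
    step_queues = {"pick": queue.Queue(), "pack": queue.Queue()}
    feedback = queue.Queue()   # stands in for the queue to be fed back

    def worker(step, handler, next_step):
        while True:
            order = step_queues[step].get()
            if order is None:              # illustrative shutdown signal
                break
            result = handler(order)
            if next_step:
                step_queues[next_step].put(result)
            else:
                feedback.put(result)

    threads = [
        threading.Thread(target=worker,
                         args=("pick", lambda o: o + ":picked", "pack")),
        threading.Thread(target=worker,
                         args=("pack", lambda o: o + ":packed", None)),
    ]
    for t in threads:
        t.start()
    for order in orders:
        step_queues["pick"].put(order)     # enqueue step 1 of each transaction
    results = [feedback.get() for _ in orders]
    for q in step_queues.values():         # shut the workers down
        q.put(None)
    for t in threads:
        t.join()
    return results
```

With many orders in flight, the pick worker is already picking the next order while the pack worker packs the previous one, which is the pipelining effect the paragraph describes.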
Specifically, still include: and generating a message aiming at each received message, and if a thread needs to read the processing result of another step when processing one step, inquiring whether the processing result of the another step exists in the thread safety queue.
And if so, reading the processing result of the other step from the thread safety queue.
And if the processing result of the other step cannot be read in the thread safety queue at present, suspending the thread for processing the one step until the processing result of the other step is successfully read from the thread safety queue.
Thread suspension here means that the thread's context is passively saved, part of its memory is swapped out and left unaccessed, and the thread is passively restarted, similar to an interrupt; meanwhile, the suspended worker thread can continue to process the next message. When the processing result of the other step is successfully read from the thread safety queue, the thread resumes processing the suspended step. In this way, during each transaction's processing the different steps are queued in the respective worker threads, raising worker-thread utilization: no worker thread sits idle waiting, and the transaction can be completed quickly. For example, in this embodiment there is no actual data interaction or internal communication between worker threads; all data interaction is implemented through the data in the thread safety queue (the queue for short), which can be understood as a public data access area. If transaction A requires worker threads 1, 2, and 3, then after receiving transaction A the main process sends instructions to threads 1, 2, and 3 simultaneously and puts the data at the tail of the queue; threads 1, 2, and 3 then begin executing. If thread 2 requires the result of thread 1, thread 2 is suspended during execution until the corresponding data of thread 1 has been processed. Because the thread is suspended rather than blocked, thread 2 can continue executing other data processing in the meantime, improving execution efficiency and CPU utilization.
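The "suspended but not blocked" behavior, where a worker sets aside a step whose dependency result is missing and moves on to its next message, can be sketched single-threadedly as follows. The dict field names ("step", "needs") and the requeue-and-retry policy are assumptions for the example; a real implementation would resume on a notification from the queue rather than by polling:

```python
import collections

def drain(messages, results):
    """Sketch of a worker draining its queue: a step whose dependency result
    is not yet in the shared results store (standing in for the thread safety
    queue) is put back, i.e. 'suspended', and the worker handles the next
    message instead of blocking. Returns the completion order of the steps."""
    q = collections.deque(messages)
    completed = []
    while q:
        msg = q.popleft()
        dep = msg.get("needs")
        if dep is not None and dep not in results:
            q.append(msg)               # suspend this step, try it again later
            continue
        results[msg["step"]] = "done"   # publish the result for dependents
        completed.append(msg["step"])
    return completed
```

So a message for step 2 that needs step 1's result is skipped on the first pass, step 1 runs and publishes its result, and step 2 completes on the retry; the worker was never blocked in between.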
A specific example: the front-end system issues an order to the warehouse back-end system; the goods in the order must be picked from a certain bin in area A of the warehouse, then sent to the packing area for packing, and finally sorted to the end of the line for boxing and transport. The order is treated as one transaction. After the information enters the WCS system, the system enqueues one operation into each of the picking, packing, and sorting threads, so that a packing station is being allocated for the picking workstation while picking is still underway; the packing thread therefore does not need to work out, only after the goods have been picked and arrive at the packing area, which station will pack them. The same applies to deciding which sorting area a packing operation goes to, and sometimes even which vehicle is used. An order that would otherwise run as a serial line of operations can thus be decomposed and carried out synchronously, greatly improving efficiency.
Existing solutions address high concurrency and timely response by processing messages asynchronously with multiple threads or a thread pool. Owing to the operating system's limitations on thread safety and thread handling, cross-thread data access is affected by thread waiting, which causes abnormal access between threads, inconsistent message delivery between threads, inability to track message-processing progress, and, in severe cases, the danger of process exceptions or even crashes. Because the prior art relies on inter-thread communication, threads keep occupying memory blocks that cannot be released; if many tasks are waiting, the high-concurrency demand drives the operating system's instantaneous CPU and memory usage very high, which poses a serious safety hazard for a long-running system. For example, in prior-art systems, data processing between WCS systems is limited by each system's own processing mechanism (for instance, the response time of the Dammar WCS is currently around 500 ms, and instantaneous requests time out once network jitter occurs, disrupting the operation of downstream systems).
In this embodiment, for tasks that may share the same steps but differ in the dependencies among those steps, the thread handling a later step in a dependency relationship may query the thread safety queue for the execution result of the thread that handled the earlier step, while steps without dependencies may be executed directly. The embodiment can be applied to PLC communication, with receiving/sending threads separated from worker threads. By opening up a public memory space (the thread safety queue), cross-thread communication is reduced, the efficiency of message processing is improved, and the problems caused by inter-thread communication in the prior art are alleviated. For example, actual operation shows that processing latency can be brought down to the millisecond level, so that the automation equipment is no longer limited by inter-process communication, whereas the latency of data processing between systems previously stayed at the second level.
An embodiment of the present invention provides a message processing apparatus based on multithreading, which may be run on a server as shown in fig. 1, for example, on a WCS system. As shown in fig. 5, the apparatus includes:
a receiving module, configured to receive a message sent by the embedded system and generate a packet from the result of parsing the message, where the generated packet corresponds to a to-be-processed transaction, and the to-be-processed transaction includes at least one step;
a queue module, configured to add the generated packet to a thread safety queue, where the thread safety queue includes a public data access area of a specified size through which each thread accesses the data in the thread safety queue;
a processing module, configured to extract a packet from the thread safety queue and hand it to a currently idle thread for processing, where each thread processes one step included in the to-be-processed transaction;
and a sending module, configured to send the obtained processing result to the embedded system.
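The receiving and processing modules above can be sketched as a minimal pipeline (Java, with illustrative names only — this is an assumption, not the patent's implementation). A bounded thread-safe queue plays the role of the fixed-size public data access area: the receiving side parses a message into a packet and enqueues it, and an idle worker thread extracts the next packet for processing.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch: a bounded thread-safe queue as the shared
// "public data access area of a specified size".
public class PacketPipeline {
    private final BlockingQueue<String> packets;

    public PacketPipeline(int capacity) {
        this.packets = new ArrayBlockingQueue<>(capacity);  // specified, fixed size
    }

    // Receiving side: parse a raw message into a packet and enqueue it.
    // Returns false when the public access area is full.
    public boolean receive(String rawMessage) {
        return packets.offer("packet:" + rawMessage);
    }

    // Processing side: an idle worker thread extracts the next packet,
    // blocking until one is available.
    public String takePacket() throws InterruptedException {
        return packets.take();
    }
}
```

`ArrayBlockingQueue` already serializes all access internally, so no extra locking is needed between the receiving thread and the worker threads.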
The receiving module is specifically configured to receive at least two messages sent concurrently by the embedded system;
the processing module is specifically configured to generate a packet for each received message and add the steps corresponding to the generated packets to the processing queues of the respective threads;
the number of threads is an integral multiple of the number of steps of the to-be-processed transaction, and each step is processed by at least one thread.
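The sizing rule above — thread count as an integral multiple of the step count, so every step has at least one dedicated thread — can be sketched like this (Java; the helper names are hypothetical, and round-robin assignment is one plausible reading of the rule, not something the patent prescribes):

```java
// Sketch: size the worker pool as an integral multiple of the step count,
// so each step is served by at least one thread.
public class WorkerPoolSizer {
    public static int poolSize(int stepCount, int multiple) {
        if (stepCount <= 0 || multiple <= 0) {
            throw new IllegalArgumentException("stepCount and multiple must be positive");
        }
        return stepCount * multiple;
    }

    // One plausible assignment: thread i serves step i % stepCount, so
    // steps 0..stepCount-1 are each covered by `multiple` threads.
    public static int stepForThread(int threadIndex, int stepCount) {
        return threadIndex % stepCount;
    }
}
```

For a three-step transaction (picking, packing, sorting) and a multiple of 2, this yields a pool of six threads with two threads per step.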
Specifically, the processing module is further configured to generate a packet for each received message and, after a step has been processed by a thread, store the processing result of that step in the thread safety queue.
Further, the processing module is also configured so that, if a thread needs the processing result of another step while processing one step, it queries, through the queue module, whether the processing result of that other step exists in the thread safety queue;
if so, the processing result of the other step is read from the thread safety queue; and if the processing result of the other step cannot yet be read from the thread safety queue, the thread processing the current step is suspended until the processing result of the other step is successfully read from the thread safety queue.
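The suspend-until-readable behavior can be sketched with a monitor (Java `wait`/`notifyAll`; class and key names are hypothetical, and this is one possible realization, not the patent's own code): a worker needing another step's result blocks on the shared store, and publishing that result wakes it.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "suspend until the result can be read" rule: a worker
// that needs another step's result blocks until that result is published.
public class BlockingResultStore {
    private final Map<String, String> results = new HashMap<>();

    // Called by the thread that finished the earlier step.
    public synchronized void publish(String key, String value) {
        results.put(key, value);
        notifyAll();  // wake any thread suspended waiting for this result
    }

    // Called by the thread processing the later step; suspends the
    // caller until the result for `key` exists.
    public synchronized String awaitResult(String key) throws InterruptedException {
        while (!results.containsKey(key)) {
            wait();  // suspend the current step's thread
        }
        return results.get(key);
    }
}
```

The `while` loop (rather than a single `if`) guards against spurious wakeups and against being woken by a result for a different key.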
In this embodiment, for tasks that share the same steps but differ in the dependencies between those steps, a thread handling a later step in a dependency chain can query the thread safety queue to determine whether the execution result of the thread handling the earlier step is available, while steps without dependencies can be executed directly. The embodiment can be applied to PLC communication, with the receiving/sending threads separated from the worker threads. By opening up a public memory space (the thread safety queue), cross-thread communication is reduced, message-processing efficiency is improved, and the problems caused by inter-thread communication in the prior art are avoided.
The embodiments in this specification are described in a progressive manner; identical or similar parts among the embodiments may be cross-referenced, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiments are substantially similar to the method embodiments and are therefore described relatively briefly; for relevant details, reference may be made to the description of the method embodiments. The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any change or substitution readily conceivable by those skilled in the art within the technical scope of the present invention falls within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (2)

1. A method for message processing based on multiple threads, comprising:
receiving a message sent by an embedded system, and generating a packet from the result of parsing the message, wherein the generated packet corresponds to a to-be-processed transaction, and the to-be-processed transaction comprises at least one step;
adding the generated packet to a thread safety queue, wherein the thread safety queue comprises a public data access area of a specified size through which each thread accesses the data in the thread safety queue;
extracting a packet from the thread safety queue and handing it to a currently idle thread for processing, wherein each thread processes one step comprised in the to-be-processed transaction;
sending the obtained processing result to the embedded system;
wherein the number of threads is an integral multiple of the number of steps of the to-be-processed transaction, and each step is processed by at least one thread;
further comprising: generating a packet for each received message, and, if a thread needs the processing result of another step while processing one step, querying whether the processing result of the other step exists in the thread safety queue; if so, reading the processing result of the other step from the thread safety queue;
if the processing result of the other step cannot currently be read from the thread safety queue, suspending the thread processing the one step until the processing result of the other step is successfully read from the thread safety queue; wherein the worker threads perform no data interaction or internal communication with one another, and all data interaction is realized through the data in the thread safety queue;
wherein an order serves as a transaction, and after its message enters the WCS system, the system adds a thread job to each of the picking, packing, and sorting threads;
further comprising:
receiving at least two messages sent concurrently by the embedded system;
generating a packet for each received message, and adding the steps corresponding to the generated packets to the processing queues of the respective threads;
further comprising:
generating a packet for each received message, and storing the processing result of a step in the thread safety queue after that step has been processed by a thread.
2. A multithreading-based message processing apparatus, comprising:
a receiving module, configured to receive a message sent by an embedded system and generate a packet from the result of parsing the message, wherein the generated packet corresponds to a to-be-processed transaction, and the to-be-processed transaction comprises at least one step;
a queue module, configured to add the generated packet to a thread safety queue, wherein the thread safety queue comprises a public data access area of a specified size through which each thread accesses the data in the thread safety queue;
a processing module, configured to extract a packet from the thread safety queue and hand it to a currently idle thread for processing, wherein each thread processes one step comprised in the to-be-processed transaction;
a sending module, configured to send the obtained processing result to the embedded system;
wherein the number of threads is an integral multiple of the number of steps of the to-be-processed transaction, and each step is processed by at least one thread;
the processing module is further configured to generate a packet for each received message, and, if the processing result of another step is needed while a thread processes one step, to query, through the queue module, whether the processing result of the other step exists in the thread safety queue; if so, to read the processing result of the other step from the thread safety queue; and if the processing result of the other step cannot currently be read from the thread safety queue, to suspend the thread processing the one step until the processing result of the other step is successfully read from the thread safety queue;
wherein the worker threads perform no data interaction or internal communication with one another, and all data interaction is realized through the data in the thread safety queue;
wherein an order serves as a transaction, and after its message enters the WCS system, the system adds a thread job to each of the picking, packing, and sorting threads;
the receiving module is specifically configured to receive at least two messages sent concurrently by the embedded system;
the processing module is specifically configured to generate a packet for each received message and add the steps corresponding to the generated packets to the processing queues of the respective threads;
the number of threads is an integral multiple of the number of steps of the to-be-processed transaction, and each step is processed by at least one thread;
the processing module is further configured to generate a packet for each received message, and to store the processing result of a step in the thread safety queue after that step has been processed by a thread.
CN201710112550.6A 2017-02-28 2017-02-28 Message processing method and device based on multithreading Active CN108509257B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710112550.6A CN108509257B (en) 2017-02-28 2017-02-28 Message processing method and device based on multithreading


Publications (2)

Publication Number Publication Date
CN108509257A CN108509257A (en) 2018-09-07
CN108509257B true CN108509257B (en) 2022-07-22

Family

ID=63374198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710112550.6A Active CN108509257B (en) 2017-02-28 2017-02-28 Message processing method and device based on multithreading

Country Status (1)

Country Link
CN (1) CN108509257B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109634532B (en) * 2018-12-19 2022-06-14 湖南源科创新科技有限公司 Method for sharing access storage medium by multiple VxWorks hosts
CN112596478B (en) * 2020-12-08 2022-09-27 苏州高科中维软件科技有限公司 WCS warehouse control system architecture and implementation method
CN115695432B (en) * 2023-01-04 2023-04-07 河北华通科技股份有限公司 Load balancing method and device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100343812C (en) * 2005-03-25 2007-10-17 上海高智软件系统有限公司 Method for raising processing speed of interface system of attaching position register
US8954986B2 (en) * 2010-12-17 2015-02-10 Intel Corporation Systems and methods for data-parallel processing
CN102981904B (en) * 2011-09-02 2016-08-03 阿里巴巴集团控股有限公司 A kind of method for scheduling task and system
CN102508716B (en) * 2011-09-29 2015-04-15 用友软件股份有限公司 Task control device and task control method
CN104133724B (en) * 2014-04-03 2015-08-19 腾讯科技(深圳)有限公司 Concurrent tasks dispatching method and device
CN104793996A (en) * 2015-04-29 2015-07-22 中芯睿智(北京)微电子科技有限公司 Task scheduling method and device of parallel computing equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 210000, 1-5 story, Jinshan building, 8 Shanxi Road, Nanjing, Jiangsu.

Applicant after: SUNING.COM Co.,Ltd.

Address before: 210042 Suning Headquarters, No. 1 Suning Avenue, Xuanwu District, Nanjing City, Jiangsu Province

Applicant before: SUNING COMMERCE GROUP Co.,Ltd.

GR01 Patent grant