CN115840654B - Message processing method, system, computing device and readable storage medium - Google Patents

Message processing method, system, computing device and readable storage medium

Info

Publication number
CN115840654B
Authority
CN
China
Prior art keywords
message
thread
memory area
inter-process communication
Prior art date
Legal status
Active
Application number
CN202310103617.5A
Other languages
Chinese (zh)
Other versions
CN115840654A (en)
Inventor
王森莽
李铁平
Current Assignee
Beijing Superred Technology Co Ltd
Original Assignee
Beijing Superred Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Superred Technology Co Ltd
Priority to CN202310103617.5A
Publication of CN115840654A
Application granted
Publication of CN115840654B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to the technical field of inter-process communication, and discloses a message processing method, system, computing device and readable storage medium. The method is executed in a consumption end of the computing device and comprises the following steps: creating a memory area shared by the consumption end and a production end, and initializing a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end; creating and starting the plurality of message processing threads; and processing, by the plurality of message processing threads according to incoming parameters, the messages written into the memory area by the production end, wherein the incoming parameters comprise the semaphore for inter-process communication and the semaphore for inter-thread communication. The technical scheme of the invention improves the inter-process communication speed between the production end and the consumption end.

Description

Message processing method, system, computing device and readable storage medium
Technical Field
The present invention relates to the field of interprocess communication technologies, and in particular, to a method, a system, a computing device, and a readable storage medium for processing a message.
Background
Inter-process communication is the propagation or exchange of information between different processes; typically the volume of data exchanged in this way is small and the communication rate is low. One existing solution processes messages unit by unit during inter-process communication, but it requires a complex boundary-handling algorithm. Another solution relies on a closed-source third-party library, but this approach has low flexibility, poor stability, and is inconvenient to maintain.
For this reason, the present invention provides a message processing scheme to solve the problems in the prior art.
Disclosure of Invention
To this end, the present invention provides a method, system, computing device and readable storage medium for processing messages to solve or at least alleviate the above-identified problems.
According to a first aspect of the present invention, there is provided a message processing method for execution in a consumption end of a computing device, the method comprising: creating a memory area shared by the consumption end and a production end, and initializing a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end; creating and starting the plurality of message processing threads; and processing, by the plurality of message processing threads according to incoming parameters, the messages written into the memory area by the production end, wherein the incoming parameters comprise the semaphore for inter-process communication and the semaphore for inter-thread communication.
Optionally, in the message processing method according to the present invention, the plurality of message processing threads include a message acquisition thread and a message parsing thread, and processing the messages written into the memory area by the production end includes: acquiring, by the message acquisition thread, the length of the message stored in the memory area; acquiring the message stored in the memory area according to that length; storing the message in a first buffer area shared by the message acquisition thread and the message parsing thread; updating the semaphore for inter-thread communication so that the message parsing thread splits the message stored in the first buffer area according to the updated semaphore for inter-thread communication; when the length of the message stored in the memory area is zero, monitoring the memory area; and, in response to a change in the semaphore for inter-process communication, continuing to execute the steps beginning with acquiring the length of the message stored in the memory area.
Optionally, in the message processing method according to the present invention, the plurality of message processing threads further include a message consuming thread, and processing the messages written into the memory area by the production end further includes: determining, by the message parsing thread, whether the updated semaphore for inter-thread communication indicates that messages stored in the memory area have been counted; if so, splitting the message stored in the first buffer area, storing the split messages in a second buffer area shared by the message parsing thread and the message consuming thread, counting the split messages stored in the second buffer area, and updating the semaphore for inter-thread communication again, so that the message consuming thread performs predetermined processing on the split messages stored in the second buffer area according to the re-updated semaphore for inter-thread communication; otherwise, suspending the message parsing thread and releasing the system resources occupied by the message parsing thread.
Optionally, in the message processing method according to the present invention, processing the messages written into the memory area by the production end further includes: determining, by the message consuming thread, whether the re-updated semaphore for inter-thread communication indicates that messages stored in the memory area have been counted; if so, performing the predetermined processing on the split messages stored in the second buffer area; otherwise, suspending the message consuming thread and releasing the system resources occupied by the message consuming thread.
Optionally, in the message processing method according to the present invention, the plurality of message processing threads further include a message cache length monitoring thread, and processing the messages written into the memory area by the production end further includes: storing, by the message acquisition thread, the message stored in the memory area in a third buffer area shared by the message acquisition thread and the message cache length monitoring thread; printing, by the message cache length monitoring thread, the length of the messages stored in the third buffer area; putting the message cache length monitoring thread to sleep for a predetermined time length; and, after the message cache length monitoring thread finishes sleeping, monitoring the length of the messages stored in the third buffer area.
Optionally, in the message processing method according to the present invention, the method further includes: obtaining the size of the largest data structure unit of the computing device, and adjusting the size of the memory area to an integer multiple of that size.
Optionally, in the message processing method according to the present invention, the method further includes: creating, in the memory area, a circular queue structure variable shared by the consumption end and the production end, and mapping a storage area of a target file, so that the read-write process of the messages is handled through the circular queue structure variable and the messages are recorded through the target file.
Optionally, in the message processing method according to the present invention, the method further includes: associating the plurality of message processing threads with a main thread of the consumption end.
Optionally, in the message processing method according to the present invention, the method further includes: in response to the end of the plurality of message processing threads, canceling the mapping of the storage area of the target file and deleting the memory area.
Optionally, in the message processing method according to the present invention, the predetermined processing includes printing.
According to a second aspect of the present invention, there is provided a message processing method for execution in a production end of a computing device, the method comprising: initializing a memory area shared by the production end and a consumption end and a semaphore for inter-process communication between the consumption end and the production end; acquiring a memory unit of the memory area; and writing messages into the memory unit by multithreading and updating the semaphore for inter-process communication, so that the consumption end processes the messages according to any of the message processing methods above.
According to a third aspect of the present invention, there is provided a message processing system comprising a consumption end and a production end, wherein the consumption end is adapted to: create a memory area shared by the consumption end and the production end, and initialize a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end; create and start the plurality of message processing threads; and process, by the plurality of message processing threads according to incoming parameters, the messages written into the memory area by the production end, wherein the incoming parameters comprise the semaphore for inter-process communication and the semaphore for inter-thread communication. The production end is adapted to: initialize the memory area shared by the production end and the consumption end and the semaphore for inter-process communication between the consumption end and the production end; acquire a memory unit of the memory area; and write messages into the memory unit by multithreading and update the semaphore for inter-process communication.
According to a fourth aspect of the present invention there is provided a computing device comprising: at least one processor; a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method as described above.
According to a fifth aspect of the present invention there is provided a readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method as described above.
According to the technical scheme, the memory area shared by the production end and the consumption end is created, so that the production end and the consumption end can access the memory area. The production end writes the message into the memory area, and the consumption end processes the message in the memory area, so that the production end and the consumption end do not need to directly transfer the message, but transfer the message by accessing the shared memory area, thereby improving the inter-process communication speed between the production end and the consumption end. The invention also processes the message through a plurality of message processing threads, thereby improving the processing speed of the message and further improving the inter-process communication speed between the production end and the consumption end.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be more clearly understood and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention may be more readily apparent, specific embodiments of the invention are set forth below.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which set forth the various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to fall within the scope of the claimed subject matter. The above, as well as additional objects, features, and advantages of the present disclosure will become more apparent from the following detailed description when read in conjunction with the accompanying drawings. Like reference numerals generally refer to like parts or elements throughout the present disclosure.
FIG. 1 illustrates a block diagram of the physical components (i.e., hardware) of a computing device 100;
FIG. 2 illustrates a flow diagram of a method 200 of processing a message according to one embodiment of the invention;
FIG. 3 shows a flow chart of a method 300 of processing a message according to another embodiment of the invention;
FIG. 4 illustrates a flow diagram of a message acquisition thread processing a message according to one embodiment of the invention;
FIG. 5 illustrates a flow diagram of a message parsing thread processing a message according to one embodiment of the invention;
FIG. 6 illustrates a flow diagram of a message consuming thread processing a message according to one embodiment of the invention;
FIG. 7 illustrates a flow diagram of a message cache length monitoring thread processing a message, according to one embodiment of the invention;
fig. 8 shows a schematic diagram of a message processing system 800 according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 illustrates a block diagram of the physical components (i.e., hardware) of a computing device 100. In a basic configuration, computing device 100 includes at least one processing unit 102 and system memory 104. According to one aspect, the processing unit 102 may be implemented as a processor, depending on the configuration and type of computing device. The system memory 104 includes, but is not limited to, volatile storage (e.g., random access memory), non-volatile storage (e.g., read only memory), flash memory, or any combination of such memories. According to one aspect, the system memory 104 includes an operating system 105 and program modules 106, the program modules 106 include a message processing system 800, and the message processing system 800 is configured to perform the message processing methods 200 and 300 of the present invention.
According to one aspect, operating system 105 is suitable, for example, for controlling the operation of computing device 100. Further, examples are practiced in connection with a graphics library, other operating systems, or any other application program and are not limited to any particular application or system. This basic configuration is illustrated in fig. 1 by those components within dashed line 108. According to one aspect, computing device 100 has additional features or functionality. For example, according to one aspect, computing device 100 includes additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in fig. 1 by removable storage device 109 and non-removable storage device 110.
As set forth hereinabove, according to one aspect, program modules are stored in the system memory 104. According to one aspect, the program modules may include one or more applications, and the invention does not limit the type of application; for example, the applications may include: email and contacts applications, word processing applications, spreadsheet applications, database applications, slide show applications, drawing or computer-aided design applications, web browser applications, etc.
According to one aspect, the examples may be practiced in a circuit comprising discrete electronic components, a packaged or integrated electronic chip containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic components or a microprocessor. For example, examples may be practiced via a system on a chip (SOC) in which each or many of the components shown in fig. 1 may be integrated on a single integrated circuit. According to one aspect, such SOC devices may include one or more processing units, graphics units, communication units, system virtualization units, and various application functions, all of which are integrated (or "burned") onto a chip substrate as a single integrated circuit. When operating via an SOC, the functionality described herein may be operated via dedicated logic integrated with other components of computing device 100 on a single integrated circuit (chip). Embodiments of the invention may also be practiced using other techniques capable of performing logical operations (e.g., AND, OR, and NOT), including but not limited to mechanical, optical, fluidic, and quantum techniques. In addition, embodiments of the invention may be practiced within a general purpose computer or in any other circuit or system.
According to one aspect, the computing device 100 may also have one or more input devices 112, such as a keyboard, mouse, pen, voice input device, touch input device, and the like. Output device(s) 114 such as a display, speakers, printer, etc. may also be included. The foregoing devices are examples and other devices may also be used. Computing device 100 may include one or more communication connections 116 that allow communication with other computing devices 118. Examples of suitable communication connections 116 include, but are not limited to: RF transmitter, receiver and/or transceiver circuitry; universal Serial Bus (USB), parallel and/or serial ports.
The term computer readable media as used herein includes computer storage media. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information (e.g., computer readable instructions, data structures, or program modules). System memory 104, removable storage 109, and non-removable storage 110 are all examples of computer storage media (i.e., memory storage). Computer storage media may include Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other article of manufacture that can be used to store information and that can be accessed by computing device 100. According to one aspect, any such computer storage media may be part of computing device 100. Computer storage media does not include a carrier wave or other propagated data signal.
According to one aspect, communication media is embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal (e.g., carrier wave or other transport mechanism) and includes any information delivery media. According to one aspect, the term "modulated data signal" describes a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio Frequency (RF), infrared, and other wireless media.
In some embodiments of the invention, computing device 100 includes one or more processors and one or more readable storage media storing program instructions. The program instructions, when configured to be executed by one or more processors, cause a computing device to perform the method of processing messages in embodiments of the invention.
The message processing method of the present invention can be executed in the production side and the consumption side, and the message processing method executed in the production side will be described first.
Fig. 2 shows a flow chart of a method 200 of processing a message according to one embodiment of the invention. The method 200 may be performed in an operating system of a computing device (e.g., the aforementioned computing device 100). The operating system may be any operating system, such as: windows, linux, unix, but is not limited thereto. The computing device executing the method 200 includes a production end and a consumption end, where the production end may also refer to a producer process and the consumption end may also refer to a consumer process. The method 200 is suitable for execution in a production end of a computing device, such as the computing device 100 described previously. As shown in fig. 2, the method 200 begins at step 210.
In step 210, the memory area shared by the production side and the consumption side, and the semaphore for the inter-process communication between the consumption side and the production side are initialized.
According to the embodiment of the invention, after the consumption end creates the memory area shared by the consumption end and the production end, the production end and the consumption end initialize the shared memory area together. By reading data from or writing data into the shared memory area, the consumption end and the production end can communicate with each other. With this memory area, data (such as messages) does not need to be copied; the two processes of the consumption end and the production end only need to be mapped to the same physical memory (namely the shared memory area) in the computing device, so that both processes can see the memory area, and while one process reads from it, the other process can write to it.
According to the embodiment of the invention, the semaphore for the inter-process communication between the consumer and the producer is initialized. Here, the semaphore for the inter-process communication may be used to represent the number of resources that are allowed to be accessed, and the semaphore for the inter-process communication may be set to 0 when the semaphore for the inter-process communication is initialized, indicating that no resources are available to be accessed.
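By way of illustration and not limitation, the following is a minimal sketch of this initialization step on the production end, assuming a POSIX environment (shm_open/mmap/sem_open); the object names "/msg_shm" and "/msg_sem_ipc", the region size, and the error handling are illustrative assumptions rather than part of the claimed method.

```c
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/msg_shm"      /* hypothetical name of the shared memory area        */
#define SEM_IPC  "/msg_sem_ipc"  /* hypothetical name of the inter-process semaphore   */
#define SHM_SIZE (64 * 4096)     /* hypothetical size, an integer multiple of the page size */

/* Production end: attach to the memory area shared with the consumption end and
 * open the semaphore used for inter-process communication (initial value 0,
 * meaning no message is available to be accessed yet). */
static char *producer_init(sem_t **sem_ipc)
{
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("shm_open"); exit(1); }
    if (ftruncate(fd, SHM_SIZE) < 0) { perror("ftruncate"); exit(1); }

    char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); exit(1); }
    close(fd);

    *sem_ipc = sem_open(SEM_IPC, O_CREAT, 0666, 0);  /* 0: no resources accessible yet */
    if (*sem_ipc == SEM_FAILED) { perror("sem_open"); exit(1); }
    return region;
}
```

The consumption end would open the same named objects, so that both processes are mapped to the same physical memory.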
Subsequently, in step 220, memory cells of the memory region are acquired.
In some embodiments, the starting position for writing information in a memory unit of the memory area shared by the production end and the consumption end is obtained, so that messages are written into the memory area from that starting position. Optionally, in the memory area, the production end maps the storage area of a target file, so that the production end can directly access the target file corresponding to the address of that storage area and acquire the messages recorded in the target file. Optionally, both the production end and the consumption end map the storage area of the target file, so that both can directly access the target file: the production end reads a message from the address of the storage area of the target file and writes it into the memory unit, and the consumption end then obtains the written message from the memory unit and processes it accordingly.
Subsequently, in step 230, the message is written to the memory location by multithreading and the semaphore for the interprocess communication is updated for processing by the consumer.
In some embodiments, the multiple threads repeatedly obtain the current write address of the memory unit, starting from the starting position for writing information, through an atomic operation, and write the messages into the memory unit. Taking the writing of several messages as an example, the first message is written at the starting position, and each subsequent message is written at the latest write address obtained at the time it is written. Specifically, after reading a message from the target file corresponding to the address of the storage area of the target file, the production end writes the message into the memory unit.
In some embodiments, the semaphore for inter-process communication may be set to 0 at initialization; afterwards, whenever a message is written into the memory unit, the semaphore for inter-process communication is updated by increasing it by a corresponding amount, for example by one. When the semaphore for inter-process communication is positive, it indicates that the memory area contains resources that are allowed to be accessed; the consumption end can therefore determine from this semaphore that such resources exist, acquire the messages from the memory area, and process them. Specifically, the consumption end may process the messages according to the message processing method 300 described below.
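A hedged sketch of this write step follows, assuming C11 atomics and POSIX semaphores. The linear layout with a single atomically advanced write offset is a simplification (the circular-queue variant appears in the consumption-end description below), and the length-prefixed message format, the absence of bounds handling, and all names are assumptions made for illustration.

```c
#include <semaphore.h>
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical layout of the shared memory area: an atomically updated write
 * offset followed by the message bytes. Each writer reserves space with an
 * atomic fetch-add so that several producer threads never claim the same address. */
struct shm_header {
    _Atomic uint64_t write_off;   /* current write address inside the data area */
};

void produce_message(char *region, sem_t *sem_ipc, const void *msg, uint32_t len)
{
    struct shm_header *hdr = (struct shm_header *)region;
    char *data = region + sizeof(*hdr);

    /* Reserve room for the length field plus the payload (no wrap/bounds check here). */
    uint64_t off = atomic_fetch_add(&hdr->write_off, sizeof(uint32_t) + len);

    memcpy(data + off, &len, sizeof(uint32_t));        /* store the message length */
    memcpy(data + off + sizeof(uint32_t), msg, len);   /* store the message body   */

    sem_post(sem_ipc);  /* update the inter-process semaphore: one more message available */
}
```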
Next, a method of processing a message executed in the consumer will be described.
Fig. 3 shows a flow chart of a method 300 of processing a message according to another embodiment of the invention. The method 300 may be performed in an operating system of a computing device (e.g., the computing device 100 described previously). The operating system may be any operating system, such as: windows, linux, unix, but is not limited thereto. The computing device executing the method 300 includes a production end and a consumption end, where the production end may also refer to a producer process and the consumption end may also refer to a consumer process. The method 300 is suitable for execution in a consumer end of a computing device, such as the aforementioned computing device 100. As shown in fig. 3, method 300 begins at step 310.
In step 310, a memory region shared by the consumer and the producer is created, and a semaphore for inter-process communication between the consumer and the producer and a semaphore for inter-thread communication between multiple message processing threads of the consumer are initialized.
According to the embodiment of the invention, after the consumption end creates the memory area shared by the consumption end and the production end, the consumption end and the production end initialize the semaphore for inter-process communication together. The semaphore for inter-process communication may be used to indicate the number of resources that are allowed to be accessed, and may be set to 0 when it is initialized, indicating that no resources are available to be accessed. In addition, the semaphore for inter-thread communication among the plurality of message processing threads at the consumption end needs to be initialized; this semaphore may be used to control access to resources, i.e., to control access to the messages in the memory area, and may represent the amount of shared resources not yet used by the threads. When it is initialized, the semaphore for inter-thread communication may be set to the amount of resources currently available to the threads, for example, the number of messages currently counted by a thread.
In some embodiments, the size of the largest data structure unit of the computing device is obtained and the size of the memory region is adjusted to an integer multiple of the size of the largest data structure unit.
Specifically, the communication protocol used for inter-process communication between the production end and the consumption end includes structure units, and the size of the largest data structure unit can be obtained by comparing all structure units involved in the communication protocol. Optionally, the size of the largest data structure unit may be the size of a memory page.
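As a small illustration of the size adjustment, the snippet below rounds a requested size up to an integer multiple of the page size reported by the operating system, treating the page size as the largest data structure unit as in the optional case above; the function name is an assumption.

```c
#include <stddef.h>
#include <unistd.h>

/* Round the requested memory-area size up to an integer multiple of the
 * largest data structure unit (here assumed to be the page size). */
size_t adjust_region_size(size_t requested)
{
    size_t unit = (size_t)sysconf(_SC_PAGESIZE);
    return ((requested + unit - 1) / unit) * unit;
}
```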
In some embodiments, in the memory area, a circular queue structure variable shared by the consumption end and the production end is created, so that the read-write process of the messages is handled through the created circular queue structure variable; for example, the production end is responsible for writing messages into the circular queue structure variable, and the consumption end is responsible for reading messages from it. Optionally, in the memory area, the consumption end maps the storage area of the target file, so that the consumption end can directly access the target file corresponding to the address of that storage area and record the messages through the target file.
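One possible shape for the shared circular-queue structure variable is sketched below; the field names and the byte-oriented data area are illustrative assumptions, the essential point being that the production end advances the tail while the consumption end advances the head.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative circular-queue structure placed at the start of the shared
 * memory area. The producer writes at 'tail', the consumer reads at 'head';
 * both indices wrap around 'capacity'. */
struct ring_queue {
    _Atomic uint64_t head;     /* next position the consumption end reads from  */
    _Atomic uint64_t tail;     /* next position the production end writes to    */
    uint64_t         capacity; /* number of usable bytes in 'data'              */
    char             data[];   /* message bytes follow the header in memory     */
};
```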
In step 320, a plurality of message processing threads are created and started by the consumer.
According to an embodiment of the present invention, the plurality of message processing threads may include a message acquisition thread, a message parsing thread and/or a message cache length monitoring thread, and may further include a message consuming thread. The message acquisition thread may be used to acquire the messages generated by the production end. The message parsing thread may be used to parse the messages acquired by the message acquisition thread, for example, to split them. The message consuming thread may be used to perform additional processing on the parsed messages, for example, to print them out. The message cache length monitoring thread may be used to monitor whether messages are present in the cache.
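For illustration, the consumption end might create and start these threads with POSIX threads as sketched below, passing the two semaphores as the incoming parameters; the parameter structure and the thread entry-point names are assumptions made for this sketch (the thread bodies are sketched with the corresponding figures below).

```c
#include <pthread.h>
#include <semaphore.h>

/* Incoming parameters shared by all message processing threads. */
struct thread_args {
    sem_t *sem_ipc;     /* semaphore for inter-process communication   */
    sem_t *sem_threads; /* semaphore for inter-thread communication    */
    char  *region;      /* memory area shared with the production end  */
};

/* Thread entry points (bodies sketched in the sections that follow). */
void *msg_acquire_main(void *arg);    /* message acquisition thread             */
void *msg_parse_main(void *arg);      /* message parsing thread                 */
void *msg_consume_main(void *arg);    /* message consuming thread               */
void *msg_cache_len_main(void *arg);  /* message cache length monitoring thread */

/* Create and start the message processing threads at the consumption end. */
int start_message_threads(pthread_t tid[4], struct thread_args *args)
{
    if (pthread_create(&tid[0], NULL, msg_acquire_main,   args)) return -1;
    if (pthread_create(&tid[1], NULL, msg_parse_main,     args)) return -1;
    if (pthread_create(&tid[2], NULL, msg_consume_main,   args)) return -1;
    if (pthread_create(&tid[3], NULL, msg_cache_len_main, args)) return -1;
    return 0;
}
```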
In step 330, the messages written into the memory area by the production end are processed by the plurality of message processing threads according to the incoming parameters, wherein the incoming parameters include the semaphore for inter-process communication and the semaphore for inter-thread communication.
The flow of processing messages by each of the plurality of message processing threads is described below.
FIG. 4 illustrates a flow diagram of a message acquisition thread processing a message according to one embodiment of the invention. As shown in fig. 4, in step 410, the length of a message stored in a memory area is acquired by a message acquisition thread.
Specifically, when a message is sent, the counted number of messages and the total message length are increased accordingly, and when a message is received, the counted number of messages and the total message length are decreased accordingly. Optionally, the length of the message stored in the memory unit of the memory area is obtained.
Subsequently, in step 420, the message stored in the memory area is retrieved according to the length of the message.
Specifically, according to the obtained length of the message stored in the memory area, the message of that length is acquired from the memory area, that is, everything stored in the memory area is acquired as one whole message.
Subsequently, in step 430, the message stored in the memory area is stored in a first buffer area shared by the message acquisition thread and the message parsing thread.
In some embodiments, the message acquisition thread also stores the message stored in the memory region in a third buffer shared by the message acquisition thread and the message cache length monitoring thread, so that the message cache length monitoring thread processes the message stored in the shared third buffer.
Here, the first buffer area and the third buffer area reside in the computing device and are independent of the memory area shared by the production end and the consumption end.
Subsequently, in step 440, the semaphore for the inter-thread communication is updated such that the message parsing thread splits the message deposited in the first cache region according to the updated semaphore for the inter-thread communication.
According to an embodiment of the invention, the semaphore for inter-thread communication is the number of messages currently counted by a thread. Updating the semaphore for inter-thread communication means bringing it to its latest value, here the number of messages currently counted by the message acquisition thread.
Then, in step 450, when the length of the message stored in the memory area is zero, the memory area is monitored.
According to the embodiment of the invention, when the length of the messages stored in the memory area is zero, this indicates that no message is stored in the memory area; the memory area is then monitored so that, once a message appears in it, processing of the message can continue.
Subsequently, in step 460, the steps beginning with retrieving the length of the message stored in the memory region continue to be performed in response to a change in the semaphore for the interprocess communication.
According to an embodiment of the present invention, when the semaphore for inter-process communication changes, it indicates that the resources in the memory area accessible to the consumption end have changed and that a message can be accessed in the memory area, so steps 410-460 continue to be executed.
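By way of illustration only, the acquisition loop of FIG. 4 might look like the following sketch. The file-scope variables stand in for the incoming parameters of the earlier sketch, and the helpers around the memory area and the first buffer area are hypothetical.

```c
#include <semaphore.h>
#include <stdlib.h>

/* Objects initialized elsewhere (see the earlier sketches); names are illustrative. */
extern sem_t *sem_ipc;      /* semaphore for inter-process communication  */
extern sem_t *sem_threads;  /* semaphore for inter-thread communication   */
extern char  *region;       /* memory area shared with the production end */

/* Hypothetical helpers around the shared memory area and the first buffer area. */
extern size_t shm_message_length(const char *region);
extern void   shm_read_message(const char *region, char *dst, size_t len);
extern void   first_buffer_store(const char *msg, size_t len);

void *msg_acquire_main(void *arg)
{
    (void)arg;
    for (;;) {
        size_t len = shm_message_length(region);
        if (len == 0) {
            /* No message stored: monitor the memory area by blocking until the
             * semaphore for inter-process communication changes, then retry. */
            sem_wait(sem_ipc);
            continue;
        }
        char *buf = malloc(len);
        shm_read_message(region, buf, len);  /* fetch the whole stored message     */
        first_buffer_store(buf, len);        /* share it with the parsing thread   */
        free(buf);
        sem_post(sem_threads);               /* update the inter-thread semaphore  */
    }
    return NULL;
}
```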
FIG. 5 illustrates a flow diagram of a message parsing thread processing a message according to one embodiment of the invention. As shown in fig. 5, in step 510, the message parsing thread determines whether the updated semaphore for inter-thread communication indicates that messages have been counted.
Then, in step 520, if the updated semaphore for inter-thread communication indicates counted messages, the message stored in the first buffer area is split and stored in a second buffer area shared by the message parsing thread and the message consuming thread, the split messages stored in the second buffer area are counted, and the semaphore for inter-thread communication is updated again, so that the message consuming thread performs predetermined processing on the split messages stored in the second buffer area according to the re-updated semaphore for inter-thread communication.
The second buffer area resides in the computing device and is independent of the memory area shared by the production end and the consumption end.
In some embodiments, the messages stored in the first buffer may be split according to a specific length, for example, splitting the messages stored in the first buffer into a plurality of messages of a specific length to split a large message into a plurality of small messages. Of course, the message may be split according to other manners, which the present invention is not limited to.
According to an embodiment of the invention, the semaphore for inter-thread communication is the number of messages currently counted by a thread. Updating the semaphore for inter-thread communication again means bringing it to its latest value, here the number of split messages currently counted by the message parsing thread.
If the updated semaphore for inter-thread communication indicates no counted messages, step 530 is performed to suspend the message parsing thread and release the system resources occupied by the message parsing thread.
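A possible sketch of the parsing flow of FIG. 5 is given below. The fixed split length, the second inter-thread semaphore used to signal the consuming thread, and the buffer helpers are illustrative assumptions (the method itself only requires re-updating the semaphore for inter-thread communication), and ending the loop stands in here for suspending the thread.

```c
#include <semaphore.h>
#include <stddef.h>

extern sem_t *sem_threads;  /* inter-thread semaphore updated by the acquisition thread        */
extern sem_t *sem_consume;  /* illustrative second inter-thread semaphore, re-updated here     */

/* Hypothetical helpers around the first and second buffer areas. */
extern size_t first_buffer_take(char **msg);                  /* returns 0 when nothing was counted */
extern void   second_buffer_put(const char *chunk, size_t n); /* store and count one split message  */

#define SPLIT_LEN 256   /* illustrative fixed split length */

void *msg_parse_main(void *arg)
{
    (void)arg;
    char *msg;
    size_t len;

    /* sem_trywait succeeds only if the updated semaphore has counted messages;
     * otherwise the loop ends and the thread's system resources are released. */
    while (sem_trywait(sem_threads) == 0 && (len = first_buffer_take(&msg)) > 0) {
        for (size_t off = 0; off < len; off += SPLIT_LEN) {
            size_t n = (len - off < SPLIT_LEN) ? (len - off) : SPLIT_LEN;
            second_buffer_put(msg + off, n);  /* count and store the split message    */
            sem_post(sem_consume);            /* re-update the inter-thread semaphore */
        }
    }
    return NULL;
}
```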
FIG. 6 illustrates a flow diagram of a message consuming thread processing a message according to one embodiment of the invention. As shown in fig. 6, in step 610, the message consuming thread determines whether the re-updated semaphore for inter-thread communication indicates that split messages have been counted.
If the re-updated semaphore for inter-thread communication indicates counted messages, step 620 is executed to perform the predetermined processing on the split messages stored in the second buffer area.
In some embodiments, the predetermined processing of the split message stored in the second buffer may be printing the split message stored in the second buffer one by one.
If the re-updated semaphore for inter-thread communication indicates no counted messages, step 630 is performed to suspend the message consuming thread and release the system resources occupied by the message consuming thread.
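The consuming flow of FIG. 6 might be sketched as follows, with printing as the predetermined processing; the buffer accessor and semaphore names are the hypothetical ones used in the sketches above.

```c
#include <semaphore.h>
#include <stdio.h>

extern sem_t *sem_consume;  /* inter-thread semaphore re-updated by the parsing thread */

/* Hypothetical accessor for the second buffer area shared with the parsing thread. */
extern size_t second_buffer_take(char *dst, size_t cap);

void *msg_consume_main(void *arg)
{
    (void)arg;
    char chunk[256];
    size_t n;

    /* Proceed only while the re-updated semaphore has counted split messages;
     * otherwise end the thread so its system resources are released. */
    while (sem_trywait(sem_consume) == 0 &&
           (n = second_buffer_take(chunk, sizeof(chunk))) > 0) {
        /* Predetermined processing: here, print each split message. */
        printf("consumed %zu bytes: %.*s\n", n, (int)n, chunk);
    }
    return NULL;
}
```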
FIG. 7 illustrates a flow diagram of a message cache length monitoring thread processing a message, according to one embodiment of the invention. As shown in fig. 7, in step 710, the length of the messages stored in the third buffer area is printed by the message cache length monitoring thread.
Subsequently, in step 720, the message cache length monitoring thread is put to sleep for a predetermined time length.
In some embodiments, the predetermined time length may be set as needed, for example to 1 second, although other durations may be used. Here, putting the message cache length monitoring thread to sleep for a predetermined time length makes it easier to detect packet loss that may occur among the multithreaded messages and alleviates bandwidth pressure.
Then, in step 730, the message buffer length monitoring thread monitors the length of the message stored in the third buffer after the sleep is completed.
In some embodiments, the message cache length monitoring thread continuously monitors the messages stored in the third buffer area, and when the length of the messages stored in the third buffer area changes, steps 710-730 continue to be executed, which makes it easy to keep statistics on the length of the messages currently stored in the third buffer area.
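For illustration, the monitoring loop of FIG. 7 could be sketched as below, assuming a 1-second sleep and a hypothetical accessor for the third buffer area.

```c
#include <stddef.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical accessor for the third buffer area shared with the acquisition thread. */
extern size_t third_buffer_length(void);

void *msg_cache_len_main(void *arg)
{
    (void)arg;
    size_t last = 0;
    for (;;) {
        size_t len = third_buffer_length();
        if (len != last) {                   /* length changed: print it again */
            printf("cached message length: %zu\n", len);
            last = len;
        }
        sleep(1);  /* predetermined sleep (1 second assumed) before monitoring again */
    }
    return NULL;
}
```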
In some embodiments, after step 330 of method 300, the plurality of message processing threads may also be associated with the main thread of the consumption end. Optionally, the plurality of message processing threads are associated with the main thread of the consumption end in a blocking manner, i.e., the main thread waits for them to finish.
Then, in response to the end of the plurality of message processing threads, the mapping of the storage area of the target file can be canceled, and the memory area shared by the production end and the consumption end can be deleted.
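A minimal teardown sketch follows, assuming the POSIX objects and names from the earlier sketches: the main thread joins (blocks on) each message processing thread, then cancels the mapping and deletes the shared objects.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>
#include <sys/mman.h>

extern char  *region;       /* mapped memory area shared with the production end */
extern size_t region_size;  /* its size (illustrative)                            */

void consumer_teardown(pthread_t tid[4])
{
    for (int i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);  /* main thread blocks until each thread ends */

    munmap(region, region_size);     /* cancel the mapping of the memory area     */
    shm_unlink("/msg_shm");          /* delete the shared memory area             */
    sem_unlink("/msg_sem_ipc");      /* delete the inter-process semaphore        */
}
```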
In some embodiments, the number of message acquisition threads, message parsing threads, message consuming threads and message cache length monitoring threads can each be allocated according to the system resource situation of the operating system, which increases message processing efficiency and further improves the inter-process communication speed between the production end and the consumption end.
The invention also provides a message processing system. Fig. 8 shows a schematic diagram of a message processing system 800 according to an embodiment of the invention. As shown in fig. 8, the processing system 800 of the message includes a consuming end 810 and a producing end 820.
Wherein the consumer 810 is adapted to: create a memory area shared by the consumption end and the production end, and initialize a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end; create and start the plurality of message processing threads; and process, by the plurality of message processing threads according to incoming parameters, the messages written into the memory area by the production end, wherein the incoming parameters comprise the semaphore for inter-process communication and the semaphore for inter-thread communication.
Wherein the production end 820 is adapted to: initializing a memory area shared by a production end and a consumption end and a semaphore for interprocess communication between the consumption end and the production end; acquiring a memory unit of a memory area; messages are written to memory locations by multithreading and the semaphores for inter-process communication are updated.
It should be noted that, the details of the message processing system 800 provided in this embodiment are disclosed in detail in the descriptions based on fig. 1 to 7, and are not described herein again.
According to the technical scheme, the memory area shared by the production end and the consumption end is created, so that the production end and the consumption end can access the memory area. The production end writes the message into the memory area, and the consumption end processes the message in the memory area, so that the production end and the consumption end do not need to directly transfer the message, but transfer the message by accessing the shared memory area, thereby improving the inter-process communication speed between the production end and the consumption end. The invention also processes the message through a plurality of message processing threads, thereby improving the processing speed of the message and further improving the inter-process communication speed between the production end and the consumption end.
Further, the present invention acquires messages from the memory area through the message acquisition thread, splits the messages acquired by the message acquisition thread through the message parsing thread, performs predetermined processing on the messages split by the message parsing thread through the message consuming thread, and monitors the length of the messages acquired by the message acquisition thread through the message cache length monitoring thread; the processing of messages is completed through the cooperation of these threads, thereby improving the inter-process communication speed between the production end and the consumption end.
In addition, the technical scheme of the invention can be applied to any operating system and has high flexibility. The number of each thread in the plurality of message processing threads can be allocated, so that the processing efficiency of the message can be improved, and the inter-process communication speed between the production end and the consumption end can be further improved. The invention can realize high-speed communication between processes without complex boundary processing algorithm.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB flash drives, floppy diskettes, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store program code; the processor is configured to execute the message processing method of the invention according to the instructions in the program code stored in the memory.
By way of example, and not limitation, readable media comprise readable storage media and communication media. The readable storage medium stores information such as computer readable instructions, data structures, program modules, or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with examples of the invention. The structure required to construct such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into a plurality of sub-modules.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from those of the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of the features disclosed in this specification, and of all processes or units of any method or apparatus so disclosed, may be employed, except where at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments.
Furthermore, some of the embodiments are described herein as methods or combinations of method elements that may be implemented by a processor of a computer system or by other means of performing the functions. Thus, a processor with the necessary instructions for implementing the described method or method element forms a means for implementing the method or method element. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal terms "first," "second," "third," etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given order, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments are contemplated within the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.

Claims (9)

1. A method of processing a message, performed in a consumer end of a computing device, the method comprising:
creating a memory area shared by the consumption end and a production end, and initializing a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end;
creating and starting the plurality of message processing threads;
processing, by the plurality of message processing threads according to incoming parameters, messages written into the memory area by the production end, wherein the incoming parameters comprise the semaphore for inter-process communication and the semaphore for inter-thread communication;
wherein the plurality of message processing threads comprise a message acquisition thread and a message parsing thread, and processing the messages written into the memory area by the production end comprises: acquiring, by the message acquisition thread, the length of the message stored in the memory area; acquiring the message stored in the memory area according to the length of the message stored in the memory area; storing the message in a first buffer area shared by the message acquisition thread and the message parsing thread; updating the semaphore for inter-thread communication so that the message parsing thread splits the message stored in the first buffer area according to the updated semaphore for inter-thread communication; when the length of the message stored in the memory area is zero, monitoring the memory area; and, in response to a change in the semaphore for inter-process communication, continuing to execute the steps beginning with acquiring the length of the message stored in the memory area.
2. The method of claim 1, wherein the plurality of message processing threads further comprise a message consuming thread, and processing the messages written into the memory area by the production end further comprises:
determining, by the message parsing thread, whether the updated semaphore for inter-thread communication indicates that messages stored in the memory area have been counted;
if so, splitting the message stored in the first buffer area, storing the split messages in a second buffer area shared by the message parsing thread and the message consuming thread, counting the split messages stored in the second buffer area, and updating the semaphore for inter-thread communication again, so that the message consuming thread performs predetermined processing on the split messages stored in the second buffer area according to the re-updated semaphore for inter-thread communication;
otherwise, suspending the message parsing thread and releasing the system resources occupied by the message parsing thread.
3. The method of claim 2, wherein processing the messages written into the memory area by the production end further comprises:
determining, by the message consuming thread, whether the re-updated semaphore for inter-thread communication indicates that messages stored in the memory area have been counted;
if so, performing the predetermined processing on the split messages stored in the second buffer area;
otherwise, suspending the message consuming thread and releasing the system resources occupied by the message consuming thread.
4. A method according to any one of claims 1 to 3, wherein the plurality of message processing threads further comprise a message cache length monitoring thread, and processing the messages written into the memory area by the production end further comprises:
storing, by the message acquisition thread, the message stored in the memory area in a third buffer area shared by the message acquisition thread and the message cache length monitoring thread;
printing, by the message cache length monitoring thread, the length of the messages stored in the third buffer area;
putting the message cache length monitoring thread to sleep for a predetermined time length; and
after the message cache length monitoring thread finishes sleeping, monitoring the length of the messages stored in the third buffer area.
5. A method according to any one of claims 1 to 3, further comprising:
acquiring the size of the largest data structure unit of the computing device, and adjusting the size of the memory area to an integer multiple of the size of the largest data structure unit.
6. A method of processing a message, performed in a production end of a computing device, the method comprising:
initializing a memory area shared by the production end and the consumption end and a semaphore for interprocess communication between the consumption end and the production end;
acquiring a memory unit of the memory area;
writing messages into the memory unit by multithreading and updating the semaphore for inter-process communication, so that the consumption end processes the messages according to the method of any one of claims 1 to 5.
7. A message processing system comprising a consumer side and a producer side, wherein:
the consumer is adapted to: create a memory area shared by the consumption end and the production end, and initialize a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end; create and start the plurality of message processing threads; and process, by the plurality of message processing threads according to incoming parameters, messages written into the memory area by the production end, wherein the incoming parameters comprise the semaphore for inter-process communication and the semaphore for inter-thread communication;
the production end is adapted to: initialize the memory area shared by the production end and the consumption end and the semaphore for inter-process communication between the consumption end and the production end; acquire a memory unit of the memory area; and write messages into the memory unit by multithreading and update the semaphore for inter-process communication.
8. A computing device, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are adapted to be executed by the at least one processor, the program instructions comprising instructions for performing the method of any one of claims 1 to 6.
9. A readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method of any one of claims 1 to 6.
CN202310103617.5A 2023-01-30 2023-01-30 Message processing method, system, computing device and readable storage medium Active CN115840654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310103617.5A CN115840654B (en) 2023-01-30 2023-01-30 Message processing method, system, computing device and readable storage medium


Publications (2)

Publication Number Publication Date
CN115840654A CN115840654A (en) 2023-03-24
CN115840654B true CN115840654B (en) 2023-05-12

Family

ID=85579629






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant