CN112437125A - Information concurrent processing method and device, electronic equipment and storage medium - Google Patents

Information concurrent processing method and device, electronic equipment and storage medium

Info

Publication number
CN112437125A
CN112437125A
Authority
CN
China
Prior art keywords
session information
concurrent processing
sub
arrays
segmented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011245331.3A
Other languages
Chinese (zh)
Other versions
CN112437125B (en)
Inventor
张建军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011245331.3A priority Critical patent/CN112437125B/en
Publication of CN112437125A publication Critical patent/CN112437125A/en
Application granted granted Critical
Publication of CN112437125B publication Critical patent/CN112437125B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/14 Session management

Abstract

The application discloses an information concurrent processing method and device, electronic equipment and a storage medium, relating to the fields of concurrent processing and the Internet of Vehicles. The specific implementation scheme is as follows: the acquired session information is divided into a plurality of pieces of sub-session information; the pieces of sub-session information are correspondingly stored in a plurality of configured segment arrays; and a plurality of coroutines are started, and the segment arrays are concurrently processed by the coroutines to obtain concurrent processing results for the sub-session information. With the method and device, the efficiency of concurrent processing can be improved.

Description

Information concurrent processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of concurrent processing, and in particular to the field of the Internet of Vehicles.
Background
In information interaction or information push services, a long connection needs to be established between a terminal and a background server by sending session information. If a large amount of session information exists, session resources are contended for in a high-concurrency environment; overly frequent resource contention reduces concurrent processing efficiency, and the system managing the long connections inevitably hits a performance bottleneck, which degrades system performance. No effective solution to this problem exists in the related art.
Disclosure of Invention
The application provides an information concurrent processing method and device, electronic equipment and a storage medium.
According to an aspect of the present application, there is provided an information concurrent processing method, including:
dividing the acquired session information into a plurality of sub-session information;
correspondingly storing the plurality of sub-session information into a plurality of configured segment arrays respectively;
and starting a plurality of coroutines, and respectively carrying out concurrent processing on the plurality of segmented arrays according to the plurality of coroutines to obtain concurrent processing results aiming at the plurality of sub-session information.
According to another aspect of the present application, there is provided an information concurrency processing apparatus including:
the segmentation module is used for segmenting the acquired session information into a plurality of sub-session information;
the group storage module is used for correspondingly storing the sub-session information into a plurality of configured segment arrays respectively;
and the concurrent processing module is used for starting a plurality of coroutines and respectively carrying out concurrent processing on the plurality of segmented arrays according to the plurality of coroutines to obtain a concurrent processing result aiming at the plurality of sub-session information.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method as provided by any one of the embodiments of the present application.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method provided by any one of the embodiments of the present application.
With the method and device, the acquired session information can be divided into a plurality of pieces of sub-session information, and the pieces of sub-session information can be correspondingly stored in a plurality of configured segment arrays. A plurality of coroutines are started, and the segment arrays are concurrently processed by the coroutines to obtain concurrent processing results for the sub-session information. Because the acquired large amount of session information is divided and stored in a plurality of mutually independent segment arrays, and concurrent processing is performed on these independent arrays after the coroutines are started, the efficiency of concurrent processing is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flow chart of an information concurrent processing method according to an embodiment of the present application;
FIG. 2 is a flow chart diagram of an information concurrent processing method according to an embodiment of the present application;
FIG. 3 is a flow diagram illustrating concurrent processing of an application example according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a composition of an information concurrent processing apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device for implementing the information concurrent processing method according to the embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The term "at least one" herein means any combination of at least two of any one or more of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C. The terms "first" and "second" used herein refer to and distinguish one from another in the similar art, without necessarily implying a sequence or order, or implying only two, such as first and second, to indicate that there are two types/two, first and second, and first and second may also be one or more.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present application.
In an information interaction or information push service (such as an IM or push service system), after a long connection is established, the background server can push information according to the unique identifier of a terminal or a terminal user. To meet this pushing requirement, a terminal identification (ID) mapping table may be used to store each terminal ID and the session information corresponding to that ID, so that long-connection management between the background server and the terminal can be implemented based on the mapping table (e.g., accurate information pushing, management of connection creation and destruction, and a heartbeat detection mechanism that, for a fixed terminal ID, judges the connection state and cleans up expired connections).
In a high-concurrency environment with a large number of terminals or terminal users accessing the system, the terminals issue highly concurrent connection requests and contention for session resources becomes very frequent; a few time-consuming operations block a large number of other operations, so that many request threads are blocked. All of this reduces concurrent processing efficiency, which in turn reduces the system's processing throughput and overall performance, and in severe cases can cause the system to crash.
To improve concurrent processing efficiency, a read-write lock or a segmented lock scheme can be adopted. With a read-write lock, a hashmap can be used to store the terminal ID mapping table and protected by the read-write lock, so as to improve concurrency in a high-concurrency environment. However, a read-write lock suffers from writes blocking reads, and its performance is poor on multi-CPU or multi-core machines. A segmented lock adds a segmentation mechanism to the read-write-lock scheme; although effective segmentation can reduce the system blocking caused by resource contention, the segmented lock divides the data into several segments, concurrently processes the entries within one segment, and only continues to the following segments after the previous segment has finished. It is therefore a segmented but serial processing mode, and for segments that store hot-spot data the problem of writes blocking reads still exists.
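For contrast, the sketch below shows the read-write-lock approach described above, assuming a Go implementation: a single terminal-ID map guarded by one sync.RWMutex, so that every write excludes all readers. Names such as lockedSessionMap are illustrative and not from the patent.

```go
// Minimal sketch of the read-write-lock scheme: one shared map, one RWMutex.
package sessionlock

import "sync"

type lockedSessionMap struct {
	mu       sync.RWMutex
	sessions map[string]string // terminal ID -> session information
}

func newLockedSessionMap() *lockedSessionMap {
	return &lockedSessionMap{sessions: make(map[string]string)}
}

func (m *lockedSessionMap) Load(id string) (string, bool) {
	m.mu.RLock() // many readers may hold the lock together
	defer m.mu.RUnlock()
	s, ok := m.sessions[id]
	return s, ok
}

func (m *lockedSessionMap) Store(id, session string) {
	m.mu.Lock() // a single writer excludes every reader and every other writer
	defer m.mu.Unlock()
	m.sessions[id] = session
}
```

Under heavy write traffic (e.g., heartbeat cleanup) the single write lock serializes all access, which is the bottleneck the segmented design below tries to avoid.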
Although the above read-write-lock or segmented-lock schemes can improve concurrent processing efficiency to some extent, for example achieving O(1) read-write operations for a fixed terminal ID, they still cannot handle the enormous access pressure of a high-concurrency environment, especially when the mapping table stores session information for long-connection management at a large scale (e.g., hundreds of thousands or millions of entries). Taking heartbeat detection as an example of a concurrent processing operation: when the heartbeat detection mechanism judges connection states and cleans up expired connections, each heartbeat detection has to traverse all session information, so the time complexity is very high, i.e., a single heartbeat detection takes a long time. In a high-concurrency scenario this easily blocks a large number of request operations at some moment, and because of the low concurrent processing efficiency, the system's processing throughput and performance still drop, and in severe cases the system may crash.
According to an embodiment of the present application, an information concurrent processing method is provided. Fig. 1 is a flowchart of the information concurrent processing method according to an embodiment of the present application. The method may be applied to an information concurrent processing apparatus; for example, when the apparatus is deployed in a terminal, a server or another processing device, it may perform session information division, segmented processing and storage, concurrent processing, and the like. The terminal may be a user equipment (UE), a mobile device, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and so on. In some possible implementations, the method may also be implemented by a processor calling computer-readable instructions stored in a memory. As shown in fig. 1, the method includes:
S101, dividing the acquired session information into a plurality of sub-session information.
S102, correspondingly storing the plurality of sub-session information into a plurality of configured segment arrays respectively.
S103, starting a plurality of coroutines, and respectively carrying out concurrent processing on the plurality of segmented arrays according to the plurality of coroutines to obtain concurrent processing results aiming at the plurality of sub-session information.
In the above S101, the session information may be obtained by the terminal, by the background server, or in the process of establishing a long connection between the terminal and the background server. That is to say, the processing logic of the information concurrent processing shown in fig. 1 may be deployed in the terminal, in the background server, or in an intermediate processing device located between the terminal and the background server.
In the above S102, the segment array is only one possible form of independently operating storage unit. For example, when the application is implemented in the Go language, the storage unit may be implemented based on a segmented array structure in Go (e.g., sync.Map), that is, each segment array in the segmented array structure serves as one storage unit. The present application is not limited to the segment-array storage form; any storage form that can improve system performance in concurrent processing is within the protection scope of the present application.
In the above S103, a coroutine is a non-preemptive, cooperative concurrency scheduling mechanism for computer programs, also referred to as a lightweight thread. A coroutine is an encapsulation of thread operations and can implement concurrent processing logic through state machines, callbacks, and the like. Multi-coroutine concurrency is more efficient than multi-thread concurrency: coroutines are lightweight, so processing is faster, and in terms of memory consumption, because the coroutine runtime underneath works much like a thread pool and coroutines can be reused, coroutines consume less memory than threads. This avoids heavy memory occupation, further improves processing speed, and avoids the blocking problems that threads may cause.
The application is described taking an implementation in the Go language as an example, in which case a coroutine may be a goroutine. However, the application is not limited to the Go language; it may also be implemented in a statically typed programming language (such as Kotlin), which is suitable for modern multi-platform applications and can be compiled into Java bytecode or JavaScript and interoperate with Java code. Any programming language that can improve system performance in concurrent processing is within the protection scope of the application.
In an example based on the processing logic of S101-S103, a large amount of acquired session information (e.g., 100,000 pieces of session information) may be divided into a plurality of pieces of sub-session information (e.g., 1000 pieces, each containing 100 sub-session messages; the division may be made according to a hash or MD5 operation on the terminal ID), and each piece containing 100 sub-session messages may be stored in a corresponding storage unit (e.g., in a corresponding configured segment array). The large amount of session information is thus divided across a plurality of independently operating storage units (such as segment arrays); after a plurality of coroutines are started, the segment arrays can be concurrently processed by the coroutines, yielding concurrent processing results for the pieces of sub-session information. The number of pieces of sub-session information, the number of coroutines, and the number of segment arrays may correspond one to one: for example, if the number of coroutines is N, the number of segment arrays is also N, and correspondingly the session information is divided into N pieces of sub-session information and stored in the N segment arrays, which are then read by the N coroutines to implement the above concurrent processing. N is an integer greater than 1; for a large amount of session information, N may in practice be 500-1000 to improve concurrent processing efficiency.
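As a concrete illustration of S101-S103, the sketch below (in Go, with hypothetical names such as Session, shardCount and shardIndex) hashes each terminal ID to pick one of N independent sync.Map shards and then starts one goroutine per shard to process the shards concurrently. The shard count of 8 and the sample sessions are placeholders, not values from the patent.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const shardCount = 8 // N; the description suggests 500-1000 in practice

type Session struct {
	TerminalID string
	Payload    string
}

// shardIndex maps a terminal ID to one of the N shards.
func shardIndex(terminalID string) int {
	h := fnv.New32a()
	h.Write([]byte(terminalID))
	return int(h.Sum32() % shardCount)
}

func main() {
	// One independent storage unit (here a sync.Map) per shard.
	shards := make([]*sync.Map, shardCount)
	for i := range shards {
		shards[i] = &sync.Map{}
	}

	// S101/S102: divide the acquired session information among the shards.
	sessions := []Session{{"term-1", "a"}, {"term-2", "b"}, {"term-3", "c"}}
	for _, s := range sessions {
		shards[shardIndex(s.TerminalID)].Store(s.TerminalID, s)
	}

	// S103: start one goroutine (coroutine) per shard and process the shards concurrently.
	var wg sync.WaitGroup
	for i, shard := range shards {
		wg.Add(1)
		go func(id int, m *sync.Map) {
			defer wg.Done()
			m.Range(func(k, v interface{}) bool {
				fmt.Printf("goroutine %d processed session %v\n", id, k)
				return true
			})
		}(i, shard)
	}
	wg.Wait()
}
```

Each goroutine touches only its own shard, so no lock is shared across shards.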
With the method and device, the acquired session information can be divided into a plurality of pieces of sub-session information, which are correspondingly stored in a plurality of configured segment arrays; after the plurality of coroutines are started, the segment arrays are concurrently processed by the coroutines, yielding concurrent processing results for the large amount of sub-session information. The acquired large amount of session information is divided and stored in a plurality of mutually independent segment arrays, concurrent processing is performed on these arrays after the coroutines are started, and the processing of the sub-session information stored in one segment array is independent of the sub-session information stored in the other segment arrays. This avoids the system blocking, resource contention and other problems caused by low concurrent-processing efficiency for a large amount of session information in a high-concurrency environment, improves concurrent processing efficiency, overcomes the performance bottleneck that systems easily encounter in high-concurrency environments, and improves system performance.
According to an embodiment of the present application, an information concurrent processing method is provided, and fig. 2 is a schematic flow diagram of the information concurrent processing method according to the embodiment of the present application, as shown in fig. 2, including:
S201, dividing the session information into N pieces of sub-session information based on the N configured segment numbers, wherein each piece of sub-session information contains a plurality of sub-session information entries, and N is an integer greater than 2.
S202, correspondingly storing the N pieces of sub-session information into N segmented arrays respectively, wherein N is an integer larger than 2.
S203, starting a plurality of coroutines, and binding each coroutine with the corresponding segmented array respectively to obtain N segmented arrays bound by the N coroutines.
S204, responding to the concurrent processing operation, respectively reading the bound N segmented arrays based on the N coroutines, and executing the concurrent processing.
In the above S201, the N segment numbers may be segment numbers 0, …, N-1; in the above S202, the N segment arrays may be segment arrays 0, …, N-1; in S203, the number of coroutines bound to the corresponding segment arrays is also N, i.e., coroutines 0, …, N-1. That is, the number of pieces of sub-session information corresponds to the N segment numbers, and the N segment numbers correspond one to one to the N segment arrays and the N coroutines.
Based on the one-to-one binding relationship formed by S201-S203, the acquired session information can be divided into N pieces (each piece including a plurality of sub-session information entries) according to the N segment numbers and then stored in the N configured segment arrays. After the concurrent processing operation is responded to in S204, the bound N segment arrays are read by the N coroutines and concurrently processed at the same time, so that the plurality of sub-session information entries stored in the N segment arrays are also processed concurrently.
In an example, the session information may be obtained by the terminal, by the background server, or in the process of establishing a long connection between the terminal and the background server. That is to say, the processing logic of the information concurrent processing shown in fig. 1 may be deployed in the terminal, in the background server, or in an intermediate processing device located between the terminal and the background server.
In an example, the segment array is only one possible form of independently operating storage unit. For example, when the storage unit is implemented in the Go language, it may be implemented based on a segmented array structure in Go (e.g., sync.Map), that is, each segment array in the segmented array structure serves as one storage unit. The present application is not limited to the segment-array storage form; any storage form that can improve system performance in concurrent processing is within the protection scope of the present application.
In an example, a coroutine is a non-preemptive, cooperative concurrency scheduling mechanism for computer programs, also referred to as a lightweight thread. A coroutine is an encapsulation of thread operations and can implement concurrent processing logic through state machines, callbacks, and the like. Multi-coroutine concurrency is more efficient than multi-thread concurrency: coroutines are lightweight, so processing is faster, and in terms of memory consumption, because the coroutine runtime underneath works much like a thread pool and coroutines can be reused, coroutines consume less memory than threads. This avoids heavy memory occupation, further improves processing speed, and avoids the blocking problems that threads may cause.
In an example, the application is implemented in the Go language and the coroutine may be a goroutine. However, the application is not limited to the Go language; it may also be implemented in a statically typed programming language (such as Kotlin), which is suitable for modern multi-platform applications and can be compiled into Java bytecode or JavaScript and interoperate with Java code. Any programming language that can improve system performance in concurrent processing is within the protection scope of the application.
With the method and device, through the binding relationship in which the number of pieces of sub-session information (each piece containing a plurality of sub-session information entries) corresponds to the N segment numbers, and the N segment numbers correspond one to one to the N segment arrays and the N coroutines, the acquired session information can be divided into N pieces according to the N segment numbers and stored in the N configured segment arrays. After the N coroutines are started, the N segment arrays can be concurrently processed by the N coroutines at the same time, yielding concurrent processing results for the large amount of sub-session information. The acquired large amount of session information is divided and stored in N mutually independent segment arrays, concurrent processing is performed on these N independent arrays after the N coroutines are started, and the processing of the sub-session information stored in one segment array is independent of the sub-session information stored in the other segment arrays. This avoids the system blocking, resource contention and other problems caused by low concurrent-processing efficiency for a large amount of session information in a high-concurrency environment, improves concurrent processing efficiency, overcomes the performance bottleneck that systems easily encounter in high-concurrency environments, and improves system performance.
In one embodiment, the method further includes: extracting the terminal identifier corresponding to each piece of session information, and performing a hash operation or an MD5 operation on the terminal identifier to obtain an identification code used to verify the session information. With a hash or MD5 operation, the original terminal identifier is converted into a unique identification code that cannot be tampered with, improving the security of the long-connection management system.
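A minimal sketch of deriving such an identification code in Go, assuming MD5 is the chosen operation; the package and function names are hypothetical. A hash/fnv value could be used instead where a shorter, non-cryptographic code is acceptable.

```go
package sessionid

import (
	"crypto/md5"
	"encoding/hex"
)

// IdentificationCode returns a 32-character hex string derived from the terminal ID.
func IdentificationCode(terminalID string) string {
	sum := md5.Sum([]byte(terminalID)) // fixed-length digest of the original identifier
	return hex.EncodeToString(sum[:])
}
```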
In one embodiment, the concurrent processing operation includes a read-write operation or a heartbeat detection operation, and at least the following two embodiments are provided:
the first embodiment is as follows: the responding concurrent processing operation respectively reads the bound N segmented arrays based on N coroutines, and executes the concurrent processing, and comprises the following steps: and under the condition that the concurrent processing operation is the read-write operation, inquiring a read field and a write field in the N segmented arrays, accessing the N segmented arrays correspondingly bound based on the N coroutines in parallel, reading data based on the read field in the N segmented arrays, and writing data based on the write field in the N segmented arrays. By adopting the embodiment, the concurrent processing efficiency of the read-write operation can be improved, and the phenomenon that the read operation blocks the write or the write operation blocks the read can be avoided.
Embodiment two: responding to the concurrent processing operation, reading the bound N segment arrays respectively based on the N coroutines, and performing the concurrent processing includes the following: when the concurrent processing operation is the heartbeat detection operation, querying the plurality of sub-session information entries stored in the N segment arrays, accessing the N correspondingly bound segment arrays in parallel based on the N coroutines, and performing traversal detection on the sub-session information in the N segment arrays until the traversal is finished. With this embodiment, the concurrent processing efficiency of heartbeat detection can be improved, the large amount of time otherwise consumed by the traversal is avoided, and time cost is saved.
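A sketch of the heartbeat path, assuming each goroutine sweeps only its own shard and removes sessions whose last-seen timestamp has expired; the entry type, timeout handling and function names are illustrative, not prescribed by the patent.

```go
package sessionshard

import (
	"sync"
	"time"
)

type sessionEntry struct {
	lastSeen time.Time
}

// heartbeatSweep is run by goroutine i over shard i only, so the N sweeps run in parallel.
func heartbeatSweep(shard *sync.Map, timeout time.Duration, wg *sync.WaitGroup) {
	defer wg.Done()
	now := time.Now()
	shard.Range(func(key, value interface{}) bool {
		if e, ok := value.(sessionEntry); ok && now.Sub(e.lastSeen) > timeout {
			shard.Delete(key) // clean up the expired connection
		}
		return true // continue until the traversal of this shard is finished
	})
}
```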
In an example, the session information may be stored by configuring segment numbers (a large amount of session information may be segmented according to the terminal IDs corresponding to the session information and divided into a plurality of pieces of sub-session information, the pieces corresponding to an ID range formed by N IDs) and segment arrays (N arrays with sync.Map as the element type). Then coroutines in the Go language (or another similar language) are created, and a timer is started after the ID range of each coroutine (which may correspond to the ID range obtained by segmentation) is identified. After the timer is started, the N coroutines (each of which may be bound to one segment array in the sync.Map segmented array structure) are triggered to start executing the heartbeat detection processing logic or the read-write processing logic. When responding to the heartbeat detection processing logic, the N pieces of divided session information correspondingly stored in the N sync.Map segment arrays are traversed in parallel by the N coroutines until the traversal is finished. When responding to the read-write processing logic, a hash operation may be performed on the terminal ID to obtain a hash value, and the corresponding read field or write field ("dirty" field) is queried in the segment array (the N arrays with sync.Map as the element type) according to the hash value to read or write data. When reading data, the read field may be queried first, and if the entry is not there, the dirty field is queried; when writing data, only the dirty field is written. Reading the read field requires no lock, whereas reading or writing the dirty field requires a lock. In addition, a misses field may be set to count the number of times the read field is missed (a "miss" meaning that the dirty field has to be read); if the count exceeds a preset number, the session information in the dirty field is synchronized to the read field.
Application example:
the processing flow of the embodiment of the application comprises the following contents:
the application example is explained by taking a coroutine created by a go language as an example, and the coroutine created by the go language is simply called the go program. A packet data structure (e.g., a sync.map structure) can be implemented based on the go language. By adopting the sync.map structure, the read-write separation can be realized aiming at the asynchronous hashmap structure of a multi-CPU or multi-core high-concurrency scene, namely: the value of the session information is updated in a mode that the CAS at the CPU level is not locked, so that the concurrency processing efficiency is greatly improved.
Fig. 3 is a flowchart illustrating the concurrent processing of an application example according to an embodiment of the present application. As shown in fig. 3, the concurrent processing includes: setting the segment number slot_num to N; creating the segmented array client_map as slot_num sync.Map elements; creating the go procedures and identifying the go_id range (0, slot_num-1) of the go procedures; starting the timer and then waiting for an operation (i.e., waiting to respond to a concurrent processing operation); then judging whether the concurrent processing operation is a read-write operation; if so, obtaining the terminal client_id, calculating the hash value hash_value of the client_id, performing the corresponding read-write operation on the sync.Map object client_map(hash_value % slot_num), and continuing to wait for operations (i.e., waiting to respond to the concurrent processing operation initiated by the next round of requests); if not, and the concurrent processing operation is a heartbeat detection event, notifying all go procedures, which respectively traverse their client_map(go_id) objects and perform the corresponding logic operations, and continuing to wait for operations (i.e., waiting to respond to the concurrent processing operation initiated by the next round of requests).
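The following is a condensed, runnable sketch of the flow just described, written in Go under stated assumptions: slotNum, clientMap, goID and hashValue mirror slot_num, client_map, go_id and hash_value above, while the ticker interval, the sample client ID and the printed output are illustrative only and not part of the patent.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
	"time"
)

const slotNum = 4 // slot_num = N

func hashValue(clientID string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(clientID))
	return h.Sum32()
}

func main() {
	// client_map: N segment arrays with sync.Map as the element type.
	clientMap := make([]*sync.Map, slotNum)
	heartbeat := make([]chan struct{}, slotNum)
	for i := range clientMap {
		clientMap[i] = &sync.Map{}
		heartbeat[i] = make(chan struct{}, 1)
	}

	// Create the go procedures; go_id ranges over 0 .. slot_num-1 and each
	// goroutine is bound to exactly one segment array.
	var wg sync.WaitGroup
	for goID := 0; goID < slotNum; goID++ {
		wg.Add(1)
		go func(goID int) {
			defer wg.Done()
			for range heartbeat[goID] { // heartbeat detection event for this shard
				clientMap[goID].Range(func(key, value interface{}) bool {
					fmt.Printf("go %d checked session %v\n", goID, key)
					return true
				})
			}
		}(goID)
	}

	// Read-write operation: route by hash_value % slot_num.
	clientID := "client-42"
	slot := int(hashValue(clientID)) % slotNum
	clientMap[slot].Store(clientID, "session-info")

	// Start the timer; on each tick, notify all go procedures.
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for tick := 0; tick < 2; tick++ {
		<-ticker.C
		for _, ch := range heartbeat {
			ch <- struct{}{}
		}
	}
	for _, ch := range heartbeat {
		close(ch)
	}
	wg.Wait()
}
```

Each goroutine owns exactly one shard and is notified of heartbeat events over its own channel, so traversals never contend with one another.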
For example, at present a high-performance server rarely sustains one million connections on a single machine; the mainstream approach is to improve concurrency by clustering and similar means. But take a single machine with one million connections as an example, managed by 1000 segments. With the existing scheme, one heartbeat operation has to traverse one million storage units at a time, and if a write operation occurs it must block all other current operations, which is dangerous behavior. With the method of this patent, there are 1000 independent storage units. First, there is no resource contention among them, reducing the risk of blocking other operations. Second, each goroutine traverses on average about 1,000,000 / 1000 = 1000 entries, and a traversal of 1000 elements carries little risk of blocking. System performance is therefore greatly improved.
With this application example, a segmentation mechanism of goroutines plus the sync.Map structure is used: a large amount of session information managed over long connections is divided, by hashing or MD5 of the terminal ID, into N preconfigured segment arrays (sync.Map structures) with IDs 0 to N-1 and sync.Map as the segment-array element. When the timer fires, the N goroutines (each bound to one sync.Map structure) are triggered at the fixed time to process the concurrent processing logic (read-write logic and heartbeat detection logic), so that each goroutine can process its own sync.Map in parallel with the others. Because the large amount of session information is divided across N independent sync.Maps for storage, contention for session resources is greatly reduced; and because goroutines are lightweight, parallel access to each divided sync.Map is ensured. In particular, the time complexity of the traversal in the heartbeat detection logic can be greatly reduced, so the concurrent processing efficiency is improved, the probability of requests being blocked in a high-concurrency environment is greatly reduced, and the problems that remain even when the related art's read-write locks and segmented locks are used, namely reduced processing throughput, reduced system performance, and in severe cases system crashes, are mitigated.
According to an embodiment of the present application, there is provided an information concurrency processing apparatus, and fig. 4 is a schematic structural diagram of the information concurrency processing apparatus according to the embodiment of the present application, and as shown in fig. 4, the information concurrency processing apparatus includes: a dividing module 41, configured to divide the acquired session information into a plurality of sub-session information; a grouping storage module 42, configured to correspondingly store the multiple pieces of sub-session information into multiple configured segment arrays respectively; and the concurrency processing module 43 is configured to start multiple coroutines, and perform concurrency processing on the multiple segment arrays according to the multiple coroutines, to obtain a concurrency processing result for the multiple sub-session information.
In an embodiment, the dividing module is configured to divide the session information into N pieces of sub-session information based on the configured N numbers of segments, where each piece of sub-session information is the plurality of sub-session information, and N is an integer greater than 2.
In one embodiment, the system further includes a verification module, configured to extract terminal identifiers corresponding to the session information respectively; and carrying out hash operation or MD5 operation according to the terminal identification to obtain an identification code for verifying the session information.
In an embodiment, the grouping storage module is configured to correspondingly store the N pieces of sub-session information into N segment arrays, respectively.
In one embodiment, the concurrent processing module is configured to bind each coroutine with a corresponding segment array, so as to obtain N segment arrays to which N coroutines are bound; and responding to the concurrent processing operation, respectively reading the bound N segmented arrays based on the N coroutines, and executing the concurrent processing. Wherein the concurrent processing operations comprise: read-write operations or heartbeat detection operations.
In one embodiment, the concurrent processing module is configured to query a read field and a write field in the N segment arrays when the concurrent processing operation is the read-write operation; access the N correspondingly bound segment arrays in parallel based on the N coroutines, read data based on the read field in the N segment arrays, and write data based on the write field in the N segment arrays.
In an embodiment, the concurrent processing module is configured to, when the concurrent processing operation is the heartbeat detection operation, query the stored plurality of sub-session information in the N segment arrays; and access the N correspondingly bound segment arrays in parallel based on the N coroutines, and perform traversal detection on the plurality of sub-session information in the N segment arrays until the traversal is finished.
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 5 is a block diagram of an electronic device for implementing the information concurrent processing method according to the embodiment of the present application. The electronic device may be the aforementioned deployment device or proxy device. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 5, the electronic apparatus includes: one or more processors 501, a memory 502, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, as desired, along with multiple memories. Also, multiple electronic devices may be connected, with each device providing some of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 5, one processor 501 is taken as an example.
Memory 502 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor, so that the at least one processor executes the information concurrent processing method provided by the application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the information concurrent processing method provided by the present application.
The memory 502, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the segmentation module, the grouping storage module, the concurrency processing module, etc. shown in fig. 4) corresponding to the information concurrency processing method in the embodiments of the present application. The processor 501 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 502, that is, implements the information concurrent processing method in the above method embodiment.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 502 optionally includes memory located remotely from processor 501, which may be connected to an electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the information concurrent processing method may further include: an input device 503 and an output device 504. The processor 501, the memory 502, the input device 503 and the output device 504 may be connected by a bus or other means, and fig. 5 illustrates the connection by a bus as an example.
The input device 503 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output devices 504 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
With the method and device, the acquired session information can be divided into a plurality of pieces of sub-session information, and the pieces of sub-session information can be correspondingly stored in a plurality of configured segment arrays. A plurality of coroutines are started, and the segment arrays are concurrently processed by the coroutines to obtain concurrent processing results for the sub-session information. Because the acquired large amount of session information is divided and stored in a plurality of mutually independent segment arrays, and concurrent processing is performed on these independent arrays after the coroutines are started, the efficiency of concurrent processing is improved.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, and this is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (16)

1. An information concurrent processing method, the method comprising:
dividing the acquired session information into a plurality of sub-session information;
correspondingly storing the plurality of sub-session information into a plurality of configured segment arrays respectively;
and starting a plurality of coroutines, and respectively carrying out concurrent processing on the plurality of segmented arrays according to the plurality of coroutines to obtain concurrent processing results aiming at the plurality of sub-session information.
2. The method of claim 1, wherein the segmenting the acquired session information into a plurality of sub-session information comprises:
dividing the session information into N parts of sub-session information based on the configured N sections, wherein each part of sub-session information is the plurality of sub-session information, and N is an integer greater than 2.
3. The method of claim 2, further comprising:
extracting terminal identifications respectively corresponding to the session information;
and carrying out hash operation or MD5 operation according to the terminal identification to obtain an identification code for verifying the session information.
4. The method according to claim 2 or 3, wherein the correspondingly storing the plurality of sub-session information into a plurality of configured segment arrays respectively comprises:
and correspondingly storing the N parts of sub-session information into N segmented arrays respectively.
5. The method of claim 4, wherein the starting of the plurality of coroutines and the concurrent processing of the plurality of segment arrays according to the plurality of coroutines respectively comprises:
binding each coroutine with the corresponding segmented array respectively to obtain N segmented arrays bound by the N coroutines;
responding to concurrent processing operation, respectively reading the bound N segmented arrays based on N coroutines, and executing the concurrent processing;
wherein the concurrent processing operations comprise: read-write operations or heartbeat detection operations.
6. The method of claim 5, wherein the responding to a concurrent processing operation, reading the bound N segmented arrays respectively based on the N coroutines, and performing the concurrent processing comprises:
under the condition that the concurrent processing operation is the read-write operation, inquiring a read field and a write field in the N segmented arrays;
accessing the N segmented arrays correspondingly bound based on the N coroutines in parallel, reading data based on the read field in the N segmented arrays, and writing data based on the write field in the N segmented arrays.
7. The method of claim 5, wherein the responding to a concurrent processing operation, reading the bound N segmented arrays respectively based on the N coroutines, and performing the concurrent processing comprises:
under the condition that the concurrent processing operation is the heartbeat detection operation, inquiring the stored sub-session information in the N segmented arrays;
and accessing the correspondingly bound N segmented arrays in parallel based on the N coroutines, and performing traversal detection on the plurality of sub-session information in the N segmented arrays until traversal is finished.
8. An information concurrency processing apparatus, the apparatus comprising:
the segmentation module is used for segmenting the acquired session information into a plurality of sub-session information;
the group storage module is used for correspondingly storing the sub-session information into a plurality of configured segment arrays respectively;
and the concurrent processing module is used for starting a plurality of coroutines and respectively carrying out concurrent processing on the plurality of segmented arrays according to the plurality of coroutines to obtain a concurrent processing result aiming at the plurality of sub-session information.
9. The apparatus of claim 8, wherein the means for segmenting is configured to:
dividing the session information into N parts of sub-session information based on the configured N sections, wherein each part of sub-session information is the plurality of sub-session information, and N is an integer greater than 2.
10. The apparatus of claim 9, further comprising a verification module to:
extracting terminal identifications respectively corresponding to the session information;
and carrying out hash operation or MD5 operation according to the terminal identification to obtain an identification code for verifying the session information.
11. The apparatus of claim 9 or 10, wherein the packet storage module is to:
and correspondingly storing the N parts of sub-session information into N segmented arrays respectively.
12. The apparatus of claim 11, wherein the concurrency processing module is to:
binding each coroutine with the corresponding segmented array respectively to obtain N segmented arrays bound by the N coroutines;
responding to concurrent processing operation, respectively reading the bound N segmented arrays based on N coroutines, and executing the concurrent processing;
wherein the concurrent processing operations comprise: read-write operations or heartbeat detection operations.
13. The apparatus of claim 12, wherein the concurrency processing module is to:
under the condition that the concurrent processing operation is the read-write operation, inquiring a read field and a write field in the N segmented arrays;
accessing the N segmented arrays correspondingly bound based on the N coroutines in parallel, reading data based on the read field in the N segmented arrays, and writing data based on the write field in the N segmented arrays.
14. The apparatus of claim 12, wherein the concurrency processing module is to:
under the condition that the concurrent processing operation is the heartbeat detection operation, inquiring the stored sub-session information in the N segmented arrays;
and accessing the correspondingly bound N segmented arrays in parallel based on the N coroutines, and performing traversal detection on the plurality of sub-session information in the N segmented arrays until traversal is finished.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-7.
CN202011245331.3A 2020-11-10 2020-11-10 Information concurrent processing method and device, electronic equipment and storage medium Active CN112437125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011245331.3A CN112437125B (en) 2020-11-10 2020-11-10 Information concurrent processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011245331.3A CN112437125B (en) 2020-11-10 2020-11-10 Information concurrent processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112437125A true CN112437125A (en) 2021-03-02
CN112437125B CN112437125B (en) 2022-05-03

Family

ID=74700756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011245331.3A Active CN112437125B (en) 2020-11-10 2020-11-10 Information concurrent processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112437125B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114157500A (en) * 2021-12-07 2022-03-08 北京天融信网络安全技术有限公司 Data packet processing method, electronic device and storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103997514A (en) * 2014-04-23 2014-08-20 汉柏科技有限公司 File parallel transmission method and system
US20140241341A1 (en) * 2013-02-28 2014-08-28 Level 3 Communications, Llc Registration of sip-based communications in a hosted voip network
CN104243417A (en) * 2013-06-18 2014-12-24 上海博达数据通信有限公司 PPPOE implementation method based on multi-core processor
US20170214720A1 (en) * 2016-01-22 2017-07-27 Cisco Technology, Inc. Selective redundancy for media sessions
CN107113223A (en) * 2014-12-19 2017-08-29 瑞典爱立信有限公司 Negotiation for the message block size of message session trunk protocol session
US20180052887A1 (en) * 2016-08-16 2018-02-22 Netscout Systems Texas, Llc Optimized merge-sorting of data retrieved from parallel storage units
CN109408468A (en) * 2018-08-24 2019-03-01 阿里巴巴集团控股有限公司 Document handling method and device calculate equipment and storage medium
US20200004861A1 (en) * 2018-06-29 2020-01-02 Oracle International Corporation Method and system for implementing parallel database queries
CN111383037A (en) * 2018-12-27 2020-07-07 北京奇虎科技有限公司 Method and device for constructing advertisement material
CN111556058A (en) * 2020-04-29 2020-08-18 杭州迪普信息技术有限公司 Session processing method and device
CN111583906A (en) * 2019-02-18 2020-08-25 中国移动通信有限公司研究院 Role recognition method, device and terminal for voice conversation
CN111629074A (en) * 2020-07-29 2020-09-04 武汉思普崚技术有限公司 Session sequencing method and device of gateway equipment
CN111708866A (en) * 2020-08-24 2020-09-25 北京世纪好未来教育科技有限公司 Session segmentation method and device, electronic equipment and storage medium

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140241341A1 (en) * 2013-02-28 2014-08-28 Level 3 Communications, Llc Registration of sip-based communications in a hosted voip network
US20190349243A1 (en) * 2013-02-28 2019-11-14 Level 3 Communications, Llc Registration of sip-based communications in a hosted voip network
CN104243417A (en) * 2013-06-18 2014-12-24 上海博达数据通信有限公司 PPPOE implementation method based on multi-core processor
CN103997514A (en) * 2014-04-23 2014-08-20 汉柏科技有限公司 File parallel transmission method and system
CN107113223A (en) * 2014-12-19 2017-08-29 瑞典爱立信有限公司 Negotiation for the message block size of message session trunk protocol session
US20170214720A1 (en) * 2016-01-22 2017-07-27 Cisco Technology, Inc. Selective redundancy for media sessions
US20180052887A1 (en) * 2016-08-16 2018-02-22 Netscout Systems Texas, Llc Optimized merge-sorting of data retrieved from parallel storage units
US20200004861A1 (en) * 2018-06-29 2020-01-02 Oracle International Corporation Method and system for implementing parallel database queries
CN109408468A (en) * 2018-08-24 2019-03-01 阿里巴巴集团控股有限公司 Document handling method and device calculate equipment and storage medium
CN111383037A (en) * 2018-12-27 2020-07-07 北京奇虎科技有限公司 Method and device for constructing advertisement material
CN111583906A (en) * 2019-02-18 2020-08-25 中国移动通信有限公司研究院 Role recognition method, device and terminal for voice conversation
CN111556058A (en) * 2020-04-29 2020-08-18 杭州迪普信息技术有限公司 Session processing method and device
CN111629074A (en) * 2020-07-29 2020-09-04 武汉思普崚技术有限公司 Session sequencing method and device of gateway equipment
CN111708866A (en) * 2020-08-24 2020-09-25 北京世纪好未来教育科技有限公司 Session segmentation method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zeng Yi et al.: "Distributed High-Utility Sequential Pattern Mining Based on Multiple Utility Thresholds", Computer Engineering and Design *
Pan Le et al.: "An Optimization Method for High-Concurrency Service Processing", Information Technology and Informatization *
Xiong Bing et al.: "Flow-Level Dynamic Partitioning Algorithm for High-Speed Network Traffic", Journal of Chinese Computer Systems *

Also Published As

Publication number Publication date
CN112437125B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
EP2898655B1 (en) System and method for small batching processing of usage requests
WO2014206289A1 (en) Method and apparatus for outputting log information
CN111259205B (en) Graph database traversal method, device, equipment and storage medium
CN109032796B (en) Data processing method and device
CN111985906A (en) Remote office system, method, device and storage medium
CN111258957A (en) Method, device, equipment and medium for updating directory of distributed file system
CN112437125B (en) Information concurrent processing method and device, electronic equipment and storage medium
CN114667506A (en) Management of multi-physical function non-volatile memory devices
US20170212846A1 (en) Analyzing lock contention within a system
US9473565B2 (en) Data transmission for transaction processing in a networked environment
CN110545324A (en) Data processing method, device, system, network equipment and storage medium
CN111966471B (en) Access method, device, electronic equipment and computer storage medium
CN111290842A (en) Task execution method and device
CN103577604B (en) A kind of image index structure for Hadoop distributed environments
CN111263930A (en) Preventing long-running transaction holding record locking
CN112565356A (en) Data storage method and device and electronic equipment
CN111782357A (en) Label control method and device, electronic equipment and readable storage medium
CN111966877A (en) Front-end service method, device, equipment and storage medium
CN111475424B (en) Method, apparatus, and computer readable storage medium for managing a storage system
CN111767149A (en) Scheduling method, device, equipment and storage equipment
CN111832070A (en) Data mask method and device, electronic equipment and storage medium
US20210306269A1 (en) Method and apparatus for adjusting network flow
CN111901254B (en) Bandwidth allocation method and device for all nodes, electronic equipment and storage medium
CN110716814B (en) Performance optimization method and device for inter-process large-data-volume communication
CN113342270A (en) Volume unloading method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211014

Address after: 100176 Room 101, 1st floor, building 1, yard 7, Ruihe West 2nd Road, economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Zhilian (Beijing) Technology Co.,Ltd.

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant