CN110231983B - Data concurrent processing method, device and system, computer equipment and readable medium - Google Patents

Data concurrent processing method, device and system, computer equipment and readable medium

Info

Publication number
CN110231983B
CN110231983B (application CN201910394225.2A)
Authority
CN
China
Prior art keywords
data
memory space
information
event
target event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910394225.2A
Other languages
Chinese (zh)
Other versions
CN110231983A (en)
Inventor
杨松
刘涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910394225.2A priority Critical patent/CN110231983B/en
Publication of CN110231983A publication Critical patent/CN110231983A/en
Application granted granted Critical
Publication of CN110231983B publication Critical patent/CN110231983B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention provides a data concurrent processing method, apparatus and system, a computer device and a readable medium. The method comprises the following steps: setting a parameter of an atomic type corresponding to a target event to be transmitted, and initializing it to 0; establishing a data queue that stores related information of the N data included in the target event, the related information of each data carrying the address information of the atomic-type parameter, so that a plurality of data sending devices send the corresponding N data to a remote server in parallel according to the related information of the N data in the data queue, each data sending device incrementing the atomic-type parameter by 1 according to its address information after sending one data; monitoring whether the value of the atomic-type parameter is equal to N; and if so, determining that all data in the target event has been sent. The technical scheme of the invention can effectively improve the efficiency of concurrent data processing, shorten the duration of data transmission, and meet the needs of scenes with high real-time requirements.

Description

Data concurrent processing method, device and system, computer equipment and readable medium
[ technical field ]
The present invention relates to the field of computer application technologies, and in particular, to a data concurrent processing method, apparatus and system, a computer device, and a readable medium.
[ background of the invention ]
The data concurrent processing and sending technology refers to processing multiple pieces of data concurrently and then sending the processed data to other services.
In existing Internet technology, some data needs to go through certain processing steps before being sent to another service. A further feature of this kind of data is that the receiving end can only form a meaningful unit of data for its own use once it has received all of the processed data. For example, consider a json packet containing a list of picture names: sending only this packet to the other party is of little use, because the other party obtains nothing but the list of picture names. In this case, the other party can only organize truly meaningful data if the picture corresponding to each picture name is first processed and transmitted to it, and the json file is then transmitted afterwards. The most traditional method in the prior art is as follows: the pictures corresponding to the picture name list in the json are processed in sequence, and each picture is packaged with the json one by one and sent to the other party. In this way all tasks are processed sequentially in one process or thread. This existing technical scheme has simple logic and is easy to implement.
However, when the above technical solution is implemented, all tasks need to be put into one process or thread for sequential processing, so data transmission takes a long time and the needs of scenes with high real-time requirements cannot be met.
[ summary of the invention ]
The invention provides a data concurrent processing method, apparatus and system, a computer device and a readable medium, which are used to shorten the duration of data transmission and meet the needs of scenes with high real-time requirements.
The invention provides a data concurrent processing method, which comprises the following steps:
setting parameters of an atom type corresponding to a target event to be transmitted, and initializing the parameters to 0;
establishing a data queue, wherein the data queue is used for storing relevant information of N data included in the target event, the relevant information of each data carries address information of the parameter of the atom type, so that a plurality of data sending devices send the corresponding N data to a remote server in parallel according to the relevant information of the N data in the data queue, and after each data sending device sends one data, the parameter of the atom type is accumulated by 1 according to the address information of the parameter of the atom type;
monitoring whether the value of the parameter of the atom type is equal to N;
and if so, determining that all data in the target event are sent completely.
The invention also provides a data concurrent processing method, which comprises the following steps:
acquiring related information of data from a data queue of a target event to be transmitted; the data queue is established for the target event by the data management device and used for storing the related information of the N data included in the target event, and the related information of each data carries the address information of the parameter of the atomic type; the parameter of the atom type is set for the target event by the data management device and is initialized to 0;
acquiring corresponding data according to the related information of the data;
sending the acquired data to a remote server;
and accumulating 1 for the parameters of the atom type according to the address information of the parameters of the atom type.
The present invention provides a data management apparatus, the apparatus comprising:
the setting module is used for setting the parameters of the atom type corresponding to the target event to be transmitted and initializing the parameters to 0;
the establishing module is used for establishing a data queue, the data queue is used for storing relevant information of N data included in the target event, the relevant information of each data carries address information of the parameter of the atom type, a plurality of data sending devices send the corresponding N data to a remote server in parallel according to the relevant information of the N data in the data queue, and after each data sending device sends one piece of data, the parameter of the atom type is accumulated by 1 according to the address information of the parameter of the atom type;
the monitoring module is used for monitoring whether the numerical value of the parameter of the atom type is equal to N or not;
and the determining module is used for determining that all data in the target event are sent completely if the monitored numerical value of the parameter of the atom type is equal to N.
The present invention also provides a data transmission apparatus, comprising:
the acquisition module is used for acquiring relevant information of data from a data queue of a target event to be transmitted; the data queue is established for the target event by the data management device and used for storing the related information of the N data included in the target event, and the related information of each data carries the address information of the parameter of the atomic type; the parameter of the atom type is set for the target event by the data management device and is initialized to 0;
the acquisition module is further used for acquiring corresponding data according to the relevant information of the data;
the sending module is used for sending the acquired data to a remote server;
and the modification module is used for accumulating 1 for the parameters of the atom type according to the address information of the parameters of the atom type.
The invention also provides a data concurrent processing system, which comprises a data management device and a plurality of data sending devices, wherein the plurality of data sending devices adopt a multithreading concurrent working mode; each data sending device is in communication connection with the data management device; the data management device adopts the data management device described above; each data sending device adopts the data sending device described above.
The present invention also provides a computer apparatus, the apparatus comprising:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the data concurrent processing method as described above.
The present invention also provides a computer-readable medium, on which a computer program is stored, which program, when executed by a processor, implements a data concurrent processing method as described above.
According to the data concurrent processing method, the data concurrent processing device and the data concurrent processing system, the computer equipment and the readable medium, the data are transmitted in parallel by adopting the plurality of data transmitting devices, so that the data concurrent processing efficiency can be effectively improved, the data transmission duration is shortened, and the requirements of scenes with high real-time requirements are met; in addition, in the embodiment, by setting the parameter of the atomic type corresponding to the target event to be transmitted, the mutual exclusion of multiple threads sent in parallel can be realized, and the accuracy and the transmission efficiency of the concurrent transmission of multiple data can be effectively ensured.
[ description of the drawings ]
Fig. 1 is a flowchart of a data concurrent processing method according to a first embodiment of the present invention.
Fig. 2 is a flowchart of a second embodiment of a data concurrent processing method according to the present invention.
Fig. 3 is an exemplary diagram of a data concurrent processing method according to the present invention.
Fig. 4 is a block diagram of an embodiment of a data management apparatus of the present invention.
Fig. 5 is a structural diagram of an embodiment of a data transmission device of the present invention.
FIG. 6 is a block diagram of an embodiment of a data concurrency processing system of the present invention.
FIG. 7 is a block diagram of an embodiment of a computer device of the present invention.
Fig. 8 is an exemplary diagram of a computer device provided by the present invention.
[ detailed description of the embodiments ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of a data concurrent processing method according to a first embodiment of the present invention. As shown in fig. 1, the data concurrent processing method of this embodiment may specifically include the following steps:
s100, setting parameters of atom types corresponding to target events to be transmitted, and initializing the parameters to be 0;
the data concurrent processing method of the embodiment can be applied to a scene with a large amount of data to be transmitted in an event. For example, in a new retail scenario, where a large number of monitoring events need to be transmitted to a remote server. And each monitoring event comprises parameters and confidence degrees of various algorithms and also comprises a large number of picture file names. It is finally necessary to ensure that on the remote server, the content of the corresponding picture can be viewed using these filenames. So in this case, in addition to sending the monitoring events themselves, the corresponding pictures need to be transmitted to a remote server. The target event to be transmitted in this embodiment may be a monitoring event in the above scenario. In practical application, the event information may also be event information including a large amount of data in other similar scenarios, and each data in the large amount of data included in the event information is independent. For example, the target event to be transmitted in this embodiment may specifically include N pieces of data, and each piece of data may be an independent individual.
The data in this embodiment may be data in the form of pictures, voice, or text.
It should be noted that, in addition to the N data, the target event of this embodiment may also include other information besides the data, such as the parameters and confidence levels of the various algorithms involved, and identification information of each data item, such as a name. In the scenario of this embodiment, the data sending end may send the N data in the target event first, and only after it is determined that the N data have been sent, send the other information in the target event except the data. In this way, when the remote server receives the other information it already has the N data as support, the received target event is complete, and the corresponding subsequent operations can then be performed on the target event.
The execution body of the data concurrent processing method in this embodiment may be a data management device located at the data sending end. The data management device may be an independent entity or a main thread, and is used to manage the data to be sent, so that a plurality of data sending devices working in a multithreaded manner can send the N data in the target event.
In the data concurrent processing method of this embodiment, a parameter of an atomic type corresponding to the target event first needs to be set and initialized to 0. The parameter of the atomic type in this embodiment may specifically be a field of an atomic type. The representation of such a field differs between programming languages: in C++, for example, it can be expressed with the std::atomic type, and other languages have corresponding representations. The atomic-type parameter in this embodiment is used to record the number of data items that have been sent successfully. Therefore, at the time of initial setting, before data transmission has started, the atomic-type parameter corresponding to the target event needs to be initialized to 0.
S101, establishing a data queue, wherein the data queue is used for storing relevant information of N data included in a target event, the relevant information of each data carries address information of an atom type parameter, a plurality of data sending devices send the corresponding N data to a remote server in parallel according to the relevant information of the N data in the data queue, and after each data sending device sends one data, the atom type parameters are accumulated by 1 according to the address information of the atom type parameters;
in this embodiment, the data management device establishes a corresponding data queue for each target event. The data queue stores, in order, the related information of the N data in the target event. In this embodiment, the related information of the N data in the data queue could be the data itself, but the space occupied by the data itself is large, so in practical application what is stored is preferably not the data itself but related parameters of the data, from which the data sending device can acquire the corresponding data. For example, the related information of a data item may be the address information of the data, or an address where the data address is stored, or a pointer to the data address, or a pointer to an address where the data address is stored, or any other information through which the data can be found, which is not described in detail herein.
The plurality of data transmission devices of the present embodiment can operate in parallel, corresponding to a plurality of threads. When each data sending device works, the relevant information of one data is obtained from the data queue, then the corresponding data is obtained according to the relevant information of the data and sent, and after the data sending devices finish sending, each data sending device can also access the parameter of the atom type corresponding to the target event according to the address information of the parameter of the atom type carried in the relevant information of the data, and the parameter of the atom type corresponding to the target event is accumulated by 1.
The atomic-type parameter of this embodiment has the property of implementing mutual exclusion among threads that send in parallel, so that different data sending devices cannot modify it at the same time. When sending the N data, the data sending devices of this embodiment work in a mode equivalent to multiple threads operating in parallel. If one data sending device has finished sending a data item and is incrementing the atomic-type parameter by 1 while another data sending device also finishes sending a data item, the latter must wait until the former has finished modifying the atomic-type parameter before it can increment it by 1; the two can never increment the atomic-type parameter at the same time. This ensures accurate counting of the sent data and thus the accuracy of data sending.
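To make the mechanism concrete, the following is a minimal C++ sketch (not the patented implementation itself) of an atomic-type counter shared by several sender threads; the variable names and the illustrative value of N are assumptions introduced only for this example.

```cpp
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    // Parameter of the atomic type for one target event, initialized to 0 (cf. step S100).
    std::atomic<int> succ_send_count{0};
    const int N = 8;  // illustrative number of data items in the target event

    // Each sender thread stands in for one data sending device: it "sends" one
    // data item and then increments the shared counter without taking any lock.
    std::vector<std::thread> senders;
    for (int i = 0; i < N; ++i) {
        senders.emplace_back([&succ_send_count] {
            // ... send one data item to the remote server here ...
            succ_send_count.fetch_add(1);  // atomic: two threads can never interleave this update
        });
    }
    for (auto& t : senders) t.join();

    // With all threads joined, the counter equals N exactly, and no mutex was needed.
    std::cout << succ_send_count.load() << " of " << N << " items sent\n";
    return 0;
}
```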
For example, one specific implementation manner of step S100 may be: after receiving a target event, the data management device may allocate an event memory space for the target event, store information of the target event in the event memory space, set a parameter of an atomic type in the event memory space, and initialize the parameter to 0; the information of the target event comprises N data and other information except the data. At this time, the address information of the parameter of the atomic type is also the address information of the event memory space.
Correspondingly, a specific implementation manner of step S101 may include the following steps:
(a) distributing a data memory space for each data in the N data of the target event according to the information of the target event, and storing identification information of the corresponding data and address information of the event memory space in the data memory space;
wherein the identification information of the data can be the name or id of the data, etc.
(b) Storing the address information of the data memory space of N data into a queue as a data queue, acquiring corresponding data by a plurality of data sending devices according to the address information of each data memory space in the data queue in sequence, sending the data to a remote server in parallel, and after each data sending device sends one data, accumulating 1 for the stored parameters of the atom type according to the address information of the event memory space.
In this implementation, step S100 is equivalent to a receiving task, and after a target event is received, an event memory space may be allocated for the target event, and target event information may be stored in the event memory space, where the stored information includes N pieces of data included in the target event information and other information besides the data.
Step S101 corresponds to a distribution task: because the target event includes N data, step S101 distributes the sending task of the target event into N data items. The data management apparatus allocates a data memory space for each data item, but what is stored in that space is not the data itself; it is the identification information of the corresponding data, such as its name, and the address information of the event memory space. The address information of the data memory spaces of the N data is then stored together in a queue as the data queue. That is, in this embodiment, the related information of a data item in the data queue is the address information of that data item's data memory space. In this way, after sequentially acquiring the address information of a data memory space from the data queue, each data sending device first acquires the identification information of the corresponding data from that data memory space, then acquires the data corresponding to the identification information from the event memory space according to the address information of the event memory space, and sends the data to the remote server. After the sending is finished, it acquires the atomic-type parameter stored in the event memory space according to the address information of the event memory space and increments it by 1.
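The memory layout just described can be pictured with the following hedged C++ sketch; the struct and member names (EventMemory, DataEntry, DataQueue and so on) are invented here for illustration and are not terms used by the patent.

```cpp
#include <atomic>
#include <deque>
#include <map>
#include <string>
#include <vector>

// Event memory space: holds the N data items, the non-data information of the
// event, and the atomic-type parameter (success counter) initialized to 0.
struct EventMemory {
    std::map<std::string, std::vector<char>> data_by_id;  // identification -> data content
    std::string other_info;                               // algorithm parameters, confidences, ...
    std::atomic<int> succ_send_count{0};
};

// Data memory space: stores only the identification information of one data item
// plus the address of the event memory space, not the data content itself.
struct DataEntry {
    std::string data_id;   // e.g. a picture file name
    EventMemory* event;    // address information of the event memory space
};

// Data queue: the related information stored per data item is the address of its
// data memory space.
using DataQueue = std::deque<DataEntry*>;
```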
S102, monitoring whether the numerical value of the parameter of the atom type is equal to N; if yes, go to step S103; otherwise, returning to the step S102 to continue monitoring;
s103, determining that all data in the target event are sent completely.
After the plurality of data sending devices start sending data, the data management device needs to continuously monitor whether the value of the atomic-type parameter is equal to N; if it is equal to N, it can be determined that all data in the target event have been sent, and otherwise monitoring continues.
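A minimal monitoring loop on the data management side might look like the sketch below; the polling interval is an assumption, since the method only requires that the value be monitored until it equals N.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Blocks until the atomic-type parameter of one target event reaches N (steps S102/S103).
void wait_until_all_sent(const std::atomic<int>& succ_send_count, int n) {
    while (succ_send_count.load() != n) {
        // Not yet equal to N: keep monitoring (a short sleep avoids busy-spinning).
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    // Equal to N: all data in the target event has been sent, so the remaining
    // non-data information of the event can now be sent to the remote server.
}
```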
Correspondingly, after step S103, the method may further include: and sending other information except the data in the target event, such as parameters, confidence degrees and the like of various contained algorithms, and identification information of each data, such as a name and the like, to the remote server, so that the remote server can receive all the information of the target event. That is, the data management device directly sends other information except the data in the target event to the cloud server.
Alternatively, when the data management apparatus of this embodiment is implemented with threads, step S100 may be performed by a receiving thread responsible for receiving tasks, and steps S101 to S103 by a main thread responsible for managing data. In this case, the receiving thread receives each target event to be transmitted and, for each such event, sets the corresponding atomic-type parameter and initializes it to 0. When the number of tasks is large, that is, when there are many target events to be transmitted, the receiving thread may establish a task queue and place each task in it, so that the main thread sequentially obtains the target event corresponding to each task from the task queue and then performs concurrent data processing on each target event according to steps S101 to S103 of the above embodiment.
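Where the task queue (and likewise the data queue) needs to be thread safe, one conventional option is a mutex-plus-condition-variable wrapper such as the sketch below; this is generic C++ and not a structure mandated by the patent.

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Generic thread-safe FIFO, usable as the task queue (target events) or the data queue.
template <typename T>
class SafeQueue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lk(mu_);
            q_.push_back(std::move(item));
        }
        cv_.notify_one();
    }

    // Blocks until an item is available, then removes and returns it.
    T pop() {
        std::unique_lock<std::mutex> lk(mu_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop_front();
        return item;
    }

private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::deque<T> q_;
};
```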
According to the data concurrent processing method, the data are transmitted in parallel by the multiple data transmitting devices, so that the data concurrent processing efficiency can be effectively improved, the data transmission duration is shortened, and the requirements of scenes with high real-time requirements are met; in addition, in the embodiment, by setting the parameter of the atomic type corresponding to the target event to be transmitted, the mutual exclusion of multiple threads sent in parallel can be realized, and the accuracy and the transmission efficiency of the concurrent transmission of multiple data can be effectively ensured.
Fig. 2 is a flowchart of a second embodiment of a data concurrent processing method according to the present invention. As shown in fig. 2, the data concurrent processing method of this embodiment may specifically include the following steps:
s200, acquiring related information of data from a data queue of a target event to be transmitted; the data queue is established for a target event by a data management device and used for storing the related information of N data included in the target event, and the related information of each data carries the address information of the parameter of the atomic type; the parameter of the atom type is set for the target event by the data management device and is initialized to 0;
s201, acquiring corresponding data according to the related information of the data;
s202, sending the acquired data to a remote server;
and S203, accumulating 1 for the parameters of the atom type according to the address information of the parameters of the atom type.
The data concurrent processing method according to this embodiment describes the technical solution of the present invention on the side of any one of a plurality of data transmission apparatuses at a data transmission end.
The data queue of the embodiment is used for storing related information of N data included in a target event; the plurality of data transmission devices in this embodiment operate in a multi-threaded operating mode. When each data sending device sends data, the relevant information of one data is obtained from a data queue of a target event to be transmitted, and then the corresponding data is obtained according to the obtained relevant information of the data; and transmits the acquired data to a remote server. Finally, after the data is successfully sent, adding 1 to the parameter of the atom type according to the address information of the parameter of the atom type, so that when a plurality of data sending devices send N data, the value of the parameter of the atom type is equal to N. At this time, the data management device may determine whether all N data in the target event are successfully transmitted by monitoring the value of the parameter of the atomic type.
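Putting steps S200 to S203 together, each data sending device (one worker thread) could follow a loop like the sketch below; it reuses the illustrative EventMemory / DataEntry / DataQueue types from the earlier sketch, and send_to_remote_server is a placeholder for whatever transport the deployment actually uses.

```cpp
#include <mutex>
#include <string>
#include <vector>

// Assumes the illustrative EventMemory / DataEntry / DataQueue types sketched earlier.
// Placeholder transport call; the real device would talk to the remote server here.
void send_to_remote_server(const std::string& id, const std::vector<char>& payload);

// One data sending device, run as a worker thread over a shared data queue.
void sender_worker(DataQueue& queue, std::mutex& queue_mu) {
    for (;;) {
        DataEntry* entry = nullptr;
        {
            std::lock_guard<std::mutex> lk(queue_mu);
            if (queue.empty()) break;   // no more related information to take (S200)
            entry = queue.front();
            queue.pop_front();
        }
        // S201: acquire the data via its identification info and the stored event address.
        const std::vector<char>& payload = entry->event->data_by_id.at(entry->data_id);
        send_to_remote_server(entry->data_id, payload);   // S202
        entry->event->succ_send_count.fetch_add(1);       // S203: increment the atomic parameter
    }
}
```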
For example, the relevant information of the N data in the data queue in this embodiment is not the data itself, but the corresponding data may be acquired according to the relevant information of each data. For example, the related information of the data may be address information of the data, or address information stored with a data address, or may also be a pointer capable of pointing to a data address, or a pointer capable of pointing to an address stored with a data address, or may also be other information capable of finding the data, which is not described in detail herein.
For example, in an implementation manner of step S200 in this embodiment, the following steps may be specifically performed: the method comprises the steps of obtaining address information of a data memory space of data from a data queue, wherein the data memory space of the data is created by a data management device for each data in a target event and is used for storing identification information of the data and address information of an event memory space of the target event, the event memory space of the target event is created by the data management device and is used for storing information of the target event, and the information of the target event comprises N data and other information except the data. Since the parameter of the atomic type is set in the event memory space of the target event by the data management device, the address information of the parameter of the atomic type coincides with the address information of the event memory space.
Correspondingly, in an implementation manner of step S201, the following steps may be specifically included:
(1) acquiring identification information of the data and address information of an event memory space of a target event from a corresponding data memory space according to the address information of the data memory space of the data;
(2) and acquiring data corresponding to the identification information of the data from the event memory space corresponding to the address information of the event memory space.
Correspondingly, before step S203 in this embodiment, the method may further include: according to the address information of the event memory space, the parameters of the atom type stored in the corresponding event memory space are obtained, and subsequently, 1 may be added to the parameters of the atom type according to step S203.
Further optionally, in the embodiment shown in fig. 2, between step S201 "acquiring corresponding data according to the related information of the data" and step S202 "sending the acquired data to the remote server", the acquired data need not always be sent directly; it may first be processed according to a preset processing rule. For example, if the acquired data is very large, it may be compressed at this point. Or, if the cloud server only needs part of the feature data in the data, that part of the feature data may be extracted from the acquired data. In practical application, other preset processing rules can also be configured in the data sending device according to actual requirements; after the data is acquired, a corresponding algorithm is called according to the preset processing rule to process the data, and the processed data is then sent to the cloud server.
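As one illustration of such a preset processing rule, a worker might transform the payload just before the send step, as in the sketch below; compress_image is an assumed placeholder for whatever compression or feature-extraction algorithm the deployment configures, not an API defined by the patent.

```cpp
#include <string>
#include <vector>

// Assumed placeholder for a configured processing rule, e.g. image compression;
// a real deployment would link this against an actual codec or feature extractor.
std::vector<char> compress_image(const std::vector<char>& raw, int quality);

// Applies the preset processing rule between acquiring (S201) and sending (S202).
std::vector<char> apply_preset_rule(const std::vector<char>& raw) {
    // The quality value is illustrative; the rule itself is whatever the device is configured with.
    return compress_image(raw, /*quality=*/60);
}
```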
Similarly, the data of the embodiment may be picture, voice or text data.
The data concurrent processing method of the present embodiment is different from the embodiment shown in fig. 1 in that: the embodiment shown in fig. 1 described above describes the technical solution of the present invention on the data management apparatus side, and the present embodiment describes the technical solution of the present invention on the side of any one of a plurality of data transmission apparatuses. The plurality of data transmission devices in this embodiment transmit data corresponding to the information on the N data in the data queue in the multi-thread parallel data transmission mode. The specific implementation manner of the data concurrent processing method of this embodiment is the same as that of the embodiment shown in fig. 1, and reference may be made to the description of the embodiment shown in fig. 1 for details, which are not repeated herein.
According to the data concurrent processing method, the data are transmitted in parallel by the multiple data transmitting devices, so that the data concurrent processing efficiency can be effectively improved, the data transmission duration is shortened, and the requirements of scenes with high real-time requirements are met; in addition, in the embodiment, by setting the parameter of the atomic type corresponding to the target event to be transmitted, the mutual exclusion of multiple threads sent in parallel can be realized, and the accuracy and the transmission efficiency of the concurrent transmission of multiple data can be effectively ensured.
In addition, the prior art also provides a method of implementing concurrent processing and sending of data by using a lock. Specifically, a mutex lock and a status flag are created for the whole batch of data to be processed and sent. The tasks to be processed are distributed to multiple threads for processing and sending. After each thread finishes sending, it acquires the mutex lock and then updates the flag. When all threads have finished sending, the main thread sends out the final data. Although this technical solution uses concurrency, each data transmission needs a lock resource for controlling mutual exclusion, and after the data has been sent the lock resource needs to be destroyed. When a large amount of data needs to be sent, the system resource consumption is large.
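For contrast, the lock-based approach described above corresponds roughly to the following sketch: every update of the shared progress flag must take a mutex, and the mutex itself is an extra resource that has to be created and destroyed; the variable names and the value of N are illustrative.

```cpp
#include <mutex>
#include <thread>
#include <vector>

int main() {
    std::mutex mu;        // mutual-exclusion lock applied for the batch of data
    int sent_count = 0;   // status flag / progress counter protected by the lock
    const int N = 8;      // illustrative number of data items

    std::vector<std::thread> senders;
    for (int i = 0; i < N; ++i) {
        senders.emplace_back([&] {
            // ... send one data item ...
            std::lock_guard<std::mutex> lk(mu);  // every thread must queue up on the lock
            ++sent_count;
        });
    }
    for (auto& t : senders) t.join();

    // The mutex is destroyed here; with many events, creating and destroying lock
    // resources adds system overhead that the atomic-type counter avoids.
    return 0;
}
```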
Relatively speaking, by adopting the technical scheme of this embodiment, the efficiency of processing and sending data is improved through a technique similar to multithreaded concurrency, mutual exclusion among the threads sending in parallel is achieved by using the atomic type, and the consumption of system resources can be effectively reduced while the efficiency of processing and sending data is improved.
The following describes the technical solution of the present invention by taking an example of transmitting a monitoring event in a new retail scenario. Specifically, an exemplary diagram of the data concurrent processing method of the present invention is shown in fig. 3.
Specifically, in this embodiment, a monitoring event in the new retail scenario may include a large number of pictures, the parameters and confidence degrees of the various algorithms involved, a large number of picture file names, and the like. The locally stored pictures are relatively large: each picture is about 300 KB, and each monitoring event includes hundreds of pictures, so if all the pictures were transmitted as they are, the amount of data transferred would be too large. Therefore, in this embodiment, in order to reduce the amount of data transferred, each picture needs to be compressed to an appropriate size through a round of compression before being transferred.
The overall scheme of the data concurrent processing method of this embodiment is to distribute all pictures of a monitoring event to different threads for necessary compression by using a multithreading technique, and then send the compressed pictures to a remote server for reception. And after all the pictures are sent, sending other information except the picture data of the monitoring event to a remote server. When the remote server receives the monitoring event, because the pictures are sent in advance, the picture file name in the monitoring event is meaningful, the whole monitoring event is complete, and the subsequent operation can be carried out.
As shown in fig. 3, in this embodiment, the receiving thread is used to receive the monitoring event that needs to be transmitted, and the receiving thread then applies for a new memory space for storing the received monitoring event. In this applied memory space, a field succ_send_pic_count of an atomic type (for example, the std::atomic type in C++ can be used) is set at the same time, with its initial value set to 0. The receiving thread then puts the memory pointer corresponding to the received monitoring event into an independent thread-safe monitoring event queue. Optionally, after receiving the monitoring event to be transmitted, the receiving thread may also hand it to the main thread directly without any processing, and the main thread applies for a new memory space for each monitoring event to be transmitted, stores the received monitoring event, and applies for the field of the atomic type.
The main thread acquires a pointer monitor_event_ptr of a monitoring event to be sent from the monitoring event queue, parses the picture file names in it, and applies for a block of memory for each picture file name, the pointer to which is named monitor_event_pic_ptr. This block of memory stores not only the picture file name but also the pointer monitor_event_ptr pointing to the monitoring event. The main thread then puts the pointer monitor_event_pic_ptr corresponding to each picture file into a pending picture queue. Meanwhile, the main thread records the number of all pictures to be sent, all_pic_count. Once information exists in the pending picture queue, the picture processing threads start to acquire entries from it and begin processing. Each picture processing thread continuously takes the pointer monitor_event_pic_ptr of a new picture to be sent out of the pending picture queue. It first uses the file name to obtain the corresponding picture content, and then calls the corresponding algorithm to compress the picture into one smaller than the original; the specific compression ratio can be determined according to requirements. After the compression is completed, the picture processing thread communicates directly with the remote server to send out the picture. After the sending is completed, it modifies the field succ_send_pic_count through the previously stored pointer monitor_event_ptr, accumulating 1 to it. As pictures are sent successfully, succ_send_pic_count grows larger and larger, eventually becoming as large as all_pic_count.
At the same time, the main thread starts to constantly compare succ_send_pic_count with all_pic_count. As the pictures are sent successfully, succ_send_pic_count gradually increases and finally becomes as large as all_pic_count, at which point the main thread sends the other information of the monitoring event besides the data to the remote server. At this point, the transmission of the monitoring event is completed.
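The picture-sending flow of fig. 3 can be summarized with the sketch below, which reuses the field names from this example (monitor_event_ptr, monitor_event_pic_ptr, succ_send_pic_count, all_pic_count); the surrounding struct layouts and the load/compress/send helpers are assumptions for illustration only.

```cpp
#include <atomic>
#include <string>
#include <vector>

// Illustrative layout of the memory block applied for one monitoring event.
struct MonitorEvent {
    std::vector<std::string> pic_names;       // picture file names parsed from the event
    std::string other_info;                   // algorithm parameters, confidences, ...
    std::atomic<int> succ_send_pic_count{0};  // grows toward all_pic_count
};

// Illustrative layout of the block that monitor_event_pic_ptr points to.
struct MonitorEventPic {
    std::string pic_name;
    MonitorEvent* monitor_event_ptr;
};

// Assumed helpers: load, compress and transmit one picture.
std::vector<char> load_picture(const std::string& name);
std::vector<char> compress_picture(const std::vector<char>& raw);
void send_picture(const std::string& name, const std::vector<char>& content);

// Body of one iteration of a picture processing thread.
void handle_one_picture(MonitorEventPic* pic) {
    std::vector<char> raw = load_picture(pic->pic_name);
    std::vector<char> small = compress_picture(raw);            // shrink before transfer
    send_picture(pic->pic_name, small);                         // communicate with the remote server
    pic->monitor_event_ptr->succ_send_pic_count.fetch_add(1);   // count one more successful send
}
```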
Fig. 4 is a block diagram of an embodiment of a data management apparatus of the present invention. As shown in fig. 4, the data management apparatus of this embodiment may specifically include:
the setting module 10 is configured to set a parameter of an atom type corresponding to a target event to be transmitted, and initialize the parameter to 0;
the establishing module 11 is configured to establish a data queue, where the data queue is configured to store relevant information of N data included in a target event, where the relevant information of each data carries address information of a parameter of an atomic type set by the setting module 10, so that a plurality of data sending devices send the corresponding N data to a remote server in parallel according to the relevant information of the N data in the data queue, and after each data sending device sends one data, the atomic type parameter is accumulated by 1 according to the address information of the atomic type parameter;
the monitoring module 12 is configured to monitor whether a value of the parameter of the atom type set by the setting module 10 is equal to N;
the determining module 13 is configured to determine that all data in the target event is sent completely if the monitoring module 12 monitors that the value of the parameter of the atomic type is equal to N.
The data management apparatus of this embodiment, the implementation principle and technical effect of implementing data concurrent processing by using the modules are the same as those of the related method embodiment, and details of the related method embodiment may be referred to and are not repeated herein.
Further optionally, as shown in fig. 4, the data management apparatus of this embodiment may further include a sending module 14, configured to trigger the sending module 14 to send, to the remote server, information other than the data in the target event after the determining module 13 determines that all data in the target event is sent completely.
Further optionally, in the data management apparatus of this embodiment, the setting module 10 is specifically configured to:
allocating an event memory space for a target event, storing information of the target event in the event memory space, setting parameters of an atom type in the event memory space, and initializing the parameters to be 0; the information of the target event includes N pieces of data and other information than the data.
Further optionally, in the data management apparatus of this embodiment, the establishing module 11 is specifically configured to:
distributing a data memory space for each data in the N data of the target event according to the information of the target event, and storing identification information of the corresponding data and address information of the event memory space in the data memory space;
storing the address information of the data memory space of N data into a queue as a data queue, acquiring corresponding data by a plurality of data sending devices according to the address information of each data memory space in the data queue in sequence, sending the data to a remote server in parallel, and after each data sending device sends one data, accumulating 1 for the stored parameters of the atom type according to the address information of the event memory space.
Further optionally, in the data management apparatus of this embodiment, the data is picture, voice, or text data.
Fig. 5 is a structural diagram of an embodiment of a data transmission device of the present invention. As shown in fig. 5, the data transmitting apparatus of this embodiment may specifically include:
the obtaining module 20 is configured to obtain information related to data from a data queue of a target event to be transmitted; the data queue is established for the target event by the data management device and used for storing the related information of N data included in the target event, and the related information of each data carries the address information of the parameter of the atomic type; the parameter of the atom type is set for the target event by the data management device and is initialized to 0;
the obtaining module 20 is further configured to obtain corresponding data according to the relevant information of the data;
the sending module 21 is configured to send the data acquired by the acquiring module 20 to a remote server;
after the sending module 21 sends out the data successfully, the modification module 22 is triggered to start, and the modification module 22 is configured to accumulate 1 for the parameter of the atom type according to the address information of the parameter of the atom type carried in the related information of the acquired data by the acquisition module 20.
The data sending apparatus of this embodiment implements the data concurrent processing by using the modules, and the implementation principle and technical effect are the same as those of the related method embodiment, and reference may be made to the description of the related method embodiment in detail, which is not described herein again.
Further optionally, in the data sending apparatus of this embodiment, the obtaining module 20 is specifically configured to:
the method comprises the steps of obtaining address information of a data memory space of data from a data queue, wherein the data memory space of the data is created by a data management device for each data in a target event and is used for storing identification information of the data and address information of an event memory space of the target event, the event memory space of the target event is created by the data management device and is used for storing information of the target event, and the information of the target event comprises N data and other information except the data.
Further optionally, in the data sending apparatus of this embodiment, the obtaining module 20 is further specifically configured to:
acquiring identification information of the data and address information of an event memory space of a target event from a corresponding data memory space according to the address information of the data memory space of the data;
and acquiring data corresponding to the identification information of the data from the event memory space corresponding to the address information of the event memory space.
Further optionally, the data transmitting apparatus of this embodiment further includes:
and the data processing module is used for processing the data according to a preset processing rule.
At this time, correspondingly, the sending module 21 is configured to send the data processed by the data processing module to the remote server.
Further optionally, in the data sending apparatus of this embodiment, the data is picture, voice or text data.
FIG. 6 is a block diagram of an embodiment of a data concurrency processing system of the present invention. As shown in fig. 6, the data concurrent processing system of the present embodiment includes a data management apparatus 100 and a plurality of data transmission apparatuses 200, wherein the plurality of data transmission apparatuses 200 adopt a multi-thread concurrent operation mode; each data transmission device 200 is connected to the data management device 100 in communication; the data management apparatus 100 employs the data management apparatus shown in fig. 4 above; each data transmission device 200 is the data transmission device shown in fig. 5. And specifically, the data concurrent processing method described in any of fig. 1 to fig. 3 may be adopted to implement the data concurrent processing.
FIG. 7 is a block diagram of an embodiment of a computer device of the present invention. As shown in fig. 7, the computer device of the present embodiment includes: one or more processors 30, and a memory 40, the memory 40 being configured to store one or more programs, when the one or more programs stored in the memory 40 are executed by the one or more processors 30, the one or more processors 30 are enabled to implement the data concurrent processing method of the embodiment shown in fig. 1-3 above. The embodiment shown in fig. 7 is exemplified by including a plurality of processors 30.
For example, fig. 8 is an exemplary diagram of a computer device provided by the present invention. FIG. 8 illustrates a block diagram of an exemplary computer device 12a suitable for use in implementing embodiments of the present invention. The computer device 12a shown in fig. 8 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in FIG. 8, computer device 12a is in the form of a general purpose computing device. The components of computer device 12a may include, but are not limited to: one or more processors 16a, a system memory 28a, and a bus 18a that connects the various system components (including the system memory 28a and the processors 16 a).
Bus 18a represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 12a typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12a and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28a may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30a and/or cache memory 32 a. Computer device 12a may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34a may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 8, and commonly referred to as a "hard drive"). Although not shown in FIG. 8, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18a by one or more data media interfaces. System memory 28a may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of the various embodiments of the invention described above in fig. 1-6.
A program/utility 40a having a set (at least one) of program modules 42a may be stored, for example, in system memory 28a, such program modules 42a including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may include an implementation of a network environment. Program modules 42a generally perform the functions and/or methodologies described above in connection with the various embodiments of fig. 1-6 of the present invention.
Computer device 12a may also communicate with one or more external devices 14a (e.g., keyboard, pointing device, display 24a, etc.), with one or more devices that enable a user to interact with computer device 12a, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12a to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22 a. Also, computer device 12a may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) through network adapter 20 a. As shown, network adapter 20a communicates with the other modules of computer device 12a via bus 18 a. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12a, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 16a executes various functional applications and data processing by executing programs stored in the system memory 28a, for example, to implement the data concurrent processing method shown in the above-described embodiment.
The present invention also provides a computer-readable medium on which a computer program is stored, which when executed by a processor implements the data concurrent processing method as shown in the above embodiments.
The computer-readable media of this embodiment may include RAM30a, and/or cache memory 32a, and/or storage system 34a in system memory 28a in the embodiment illustrated in fig. 8 described above.
With the development of technology, the propagation path of computer programs is no longer limited to tangible media, and the computer programs can be directly downloaded from a network or acquired by other methods. Accordingly, the computer-readable medium in the present embodiment may include not only tangible media but also intangible media.
The computer-readable medium of the present embodiments may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (23)

1. A method for concurrent processing of data, the method comprising:
setting a parameter of an atomic type corresponding to a target event to be transmitted, and initializing the parameter to 0;
storing address information of a data memory space allocated to each of the N data of the target event into a queue to establish a data queue, where the data queue is used to store related information of the N data included in the target event, the related information of each data item carries address information of the parameter of the atomic type, a plurality of data sending devices send the corresponding N data to a remote server in parallel according to the related information of the N data in the data queue, and after each data sending device sends one data item, the parameter of the atomic type is incremented by 1 according to the address information of the parameter of the atomic type, where mutual exclusion among the threads sending in parallel is implemented through the parameter of the atomic type, each data memory space stores identification information of the corresponding data item and address information of an event memory space, and the event memory space stores the N data;
monitoring whether the value of the parameter of the atomic type is equal to N;
and if so, determining that all data in the target event have been sent.
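Purely as an illustration of the mechanism claim 1 relies on, and not as part of the disclosed embodiments, a minimal C++ sketch of the atomic parameter, the per-item increment, and the monitoring step might look as follows; all names here (EventMemory, mark_one_sent, all_sent) are invented for the sketch.

#include <atomic>
#include <string>
#include <vector>

// Hypothetical event memory space: the N data of the target event plus the
// atomic-type parameter, initialized to 0 before any sending starts.
struct EventMemory {
    std::vector<std::string> data;   // the N data items (pictures, voice, text, ...)
    std::atomic<int> sent_count{0};  // the "parameter of an atomic type"
};

// Called by a sending thread after it has finished sending one data item; the
// atomic fetch-and-add keeps concurrent updates mutually exclusive without a lock.
void mark_one_sent(EventMemory& event) {
    event.sent_count.fetch_add(1, std::memory_order_acq_rel);
}

// The monitoring step: once the parameter equals N, every data item of the
// target event has been sent and the remaining event information can follow.
bool all_sent(const EventMemory& event) {
    return event.sent_count.load(std::memory_order_acquire) ==
           static_cast<int>(event.data.size());
}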
2. The method of claim 1, wherein after determining that all data transmissions in the target event are complete, the method further comprises:
and sending other information except the data in the target event to the remote server.
3. The method of claim 1, wherein setting a parameter of an atomic type corresponding to the target event and initializing to 0 comprises:
allocating an event memory space for the target event, storing information of the target event in the event memory space, setting the parameter of the atomic type in the event memory space, and initializing the parameter to 0; wherein the information of the target event comprises the N data and other information besides the data.
4. The method of claim 3, wherein establishing a data queue comprises:
allocating a data memory space for each of the N data of the target event according to the information of the target event;
storing the address information of the data memory spaces of the N data into a queue as the data queue, wherein the plurality of data sending devices acquire the corresponding data in sequence according to the address information of each data memory space in the data queue and send the data to the remote server in parallel, and after each data sending device sends one data item, it increments the stored parameter of the atomic type by 1 according to the address information of the event memory space.
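Claims 3 and 4 fix the allocation order: first the event memory space (holding the event information and the zero-initialized atomic parameter), then one data memory space per data item recording its identification and the event space's address, with the queue holding only the addresses of those data memory spaces. Below is a hedged C++ sketch of that layout under these assumptions; every name (EventMemory, DataMemory, EventLayout, lay_out_event) is hypothetical.

#include <atomic>
#include <cstddef>
#include <deque>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Hypothetical event memory space (claim 3): the event's N data, other event
// information, and the atomic-type parameter set and zero-initialized here.
struct EventMemory {
    std::vector<std::string> data;    // the N data items
    std::string other_info;           // other information of the target event
    std::atomic<int> sent_count{0};   // atomic-type parameter, initialized to 0
};

// Hypothetical data memory space (claim 4): identification information of one
// data item plus address information of the event memory space.
struct DataMemory {
    std::size_t data_id = 0;
    EventMemory* event = nullptr;
};

// Everything the data management side owns for one target event.
struct EventLayout {
    std::unique_ptr<EventMemory> event;
    std::vector<std::unique_ptr<DataMemory>> data_spaces;
    std::deque<DataMemory*> data_queue;  // the queue stores address information only
};

EventLayout lay_out_event(std::vector<std::string> items, std::string other_info) {
    EventLayout layout;
    layout.event = std::make_unique<EventMemory>();
    layout.event->data = std::move(items);
    layout.event->other_info = std::move(other_info);
    for (std::size_t i = 0; i < layout.event->data.size(); ++i) {
        auto space = std::make_unique<DataMemory>();
        space->data_id = i;                          // identification information
        space->event = layout.event.get();           // address of the event memory space
        layout.data_queue.push_back(space.get());    // the data queue holds the address
        layout.data_spaces.push_back(std::move(space));
    }
    return layout;
}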
5. The method of any one of claims 1-4, wherein the data is picture, voice, or text data.
6. A method for concurrent processing of data, the method comprising:
acquiring related information of data from a data queue of a target event to be transmitted; wherein the data queue is established by a data management device storing address information of a data memory space allocated to each of the N data of the target event into a queue, and is used for storing related information of the N data included in the target event, the related information of each data item carrying address information of a parameter of an atomic type; the parameter of the atomic type is set for the target event by the data management device and is initialized to 0, each data memory space stores identification information of the corresponding data item and address information of the event memory space, and the event memory space stores the N data;
acquiring corresponding data according to the related information of the data;
sending the acquired data to a remote server;
and incrementing the parameter of the atomic type by 1 according to the address information of the parameter of the atomic type, wherein mutual exclusion among the threads sending in parallel is realized through the parameter of the atomic type.
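Seen from the sending-device side (claims 6 to 8), each device repeatedly takes one entry of related information from the queue, follows the recorded addresses to the actual data item, sends it, and then increments the atomic parameter. The C++ sketch below is illustrative only; the mutex around the queue and the send_to_remote_server stub are assumptions of this sketch, not features recited in the claims, which only tie mutual exclusion of the counting step to the atomic parameter.

#include <atomic>
#include <cstddef>
#include <deque>
#include <mutex>
#include <optional>
#include <string>
#include <vector>

// Same hypothetical layout as in the earlier sketches, repeated so this
// fragment stands alone.
struct EventMemory {
    std::vector<std::string> data;
    std::atomic<int> sent_count{0};
};
struct DataMemory {
    std::size_t data_id;
    EventMemory* event;
};

// Stand-in for the real transport; the disclosure does not prescribe one.
void send_to_remote_server(const std::string& payload) {
    (void)payload;  // a real sending device would write to the network here
}

// Take one data-memory address from the shared queue. The mutex protects the
// queue itself; this is an assumption of the sketch.
std::optional<DataMemory*> next_item(std::deque<DataMemory*>& queue, std::mutex& m) {
    std::lock_guard<std::mutex> lock(m);
    if (queue.empty()) return std::nullopt;
    DataMemory* item = queue.front();
    queue.pop_front();
    return item;
}

// One sending device: fetch related information from the queue, follow the two
// addresses to the actual data item (claim 8), send it, then increment the
// atomic parameter through its address (claim 6).
void sender_loop(std::deque<DataMemory*>& queue, std::mutex& m) {
    while (auto item = next_item(queue, m)) {
        DataMemory* info = *item;
        const std::string& datum = info->event->data[info->data_id];
        send_to_remote_server(datum);
        info->event->sent_count.fetch_add(1, std::memory_order_acq_rel);
    }
}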
7. The method of claim 6, wherein obtaining information about data from a data queue of target events to be transmitted comprises:
acquiring address information of a data memory space of the data from the data queue, wherein the data memory space of each data item is created by the data management device, the event memory space of the target event is created by the data management device and is used for storing information of the target event, and the information of the target event comprises the N data and other information besides the data.
8. The method according to claim 6, wherein obtaining the corresponding data according to the related information of the data comprises:
acquiring identification information of the data and address information of an event memory space of the target event from the corresponding data memory space according to the address information of the data memory space of the data;
and acquiring the data corresponding to the identification information of the data from the event memory space corresponding to the address information of the event memory space.
9. The method according to claim 6, wherein after acquiring the corresponding data according to the related information of the data and before sending the acquired data to the remote server, the method further comprises:
processing the data according to a preset processing rule.
10. The method according to any one of claims 6 to 9, wherein the data is picture, voice or text data.
11. A data management apparatus, characterized in that the apparatus comprises:
a setting module, configured to set a parameter of an atomic type corresponding to a target event to be transmitted and initialize the parameter to 0;
an establishing module, configured to store address information of a data memory space allocated to each of the N data of the target event into a queue to establish a data queue, where the data queue is configured to store relevant information of the N data included in the target event, the relevant information of each data item carries address information of the parameter of the atomic type, a plurality of data sending devices send the corresponding N data to a remote server in parallel according to the relevant information of the N data in the data queue, and after each data sending device sends one data item, the parameter of the atomic type is incremented by 1 according to the address information of the parameter of the atomic type, where mutual exclusion among the threads sending in parallel is implemented through the parameter of the atomic type, each data memory space stores identification information of the corresponding data item and address information of an event memory space, and the event memory space stores the N data;
a monitoring module, configured to monitor whether the value of the parameter of the atomic type is equal to N;
and a determining module, configured to determine that all data in the target event have been sent if the monitored value of the parameter of the atomic type is equal to N.
12. The apparatus of claim 11, further comprising:
and the sending module is used for sending other information except the data in the target event to the remote server.
13. The apparatus according to claim 11, wherein the setting module is specifically configured to:
allocating an event memory space for the target event, storing information of the target event in the event memory space, setting the parameter of the atomic type in the event memory space, and initializing the parameter to 0; wherein the information of the target event comprises the N data and other information besides the data.
14. The apparatus of claim 13, wherein the establishing module is specifically configured to:
allocating a data memory space for each of the N data of the target event according to the information of the target event;
storing the address information of the data memory spaces of the N data into a queue as the data queue, wherein the plurality of data sending devices acquire the corresponding data in sequence according to the address information of each data memory space in the data queue and send the data to the remote server in parallel, and after each data sending device sends one data item, it increments the stored parameter of the atomic type by 1 according to the address information of the event memory space.
15. The apparatus according to any of claims 11-14, wherein the data is picture, voice or text data.
16. A data transmission apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire relevant information of data from a data queue of a target event to be transmitted; wherein the data queue is established by a data management device storing address information of a data memory space allocated to each of the N data of the target event into a queue, and is used for storing relevant information of the N data included in the target event, the relevant information of each data item carrying address information of a parameter of an atomic type; the parameter of the atomic type is set for the target event by the data management device and is initialized to 0, each data memory space stores identification information of the corresponding data item and address information of the event memory space, and the event memory space stores the N data;
the acquisition module being further configured to acquire the corresponding data according to the relevant information of the data;
a sending module, configured to send the acquired data to a remote server;
and a modification module, configured to increment the parameter of the atomic type by 1 according to the address information of the parameter of the atomic type, wherein mutual exclusion among the threads sending in parallel is realized through the parameter of the atomic type.
17. The apparatus of claim 16, wherein the acquisition module is specifically configured to:
acquire address information of a data memory space of the data from the data queue, wherein the data memory space of each data item is created by the data management device, the event memory space of the target event is created by the data management device and is used for storing information of the target event, and the information of the target event comprises the N data and other information besides the data.
18. The apparatus of claim 16, wherein the acquisition module is specifically configured to:
acquire identification information of the data and address information of the event memory space of the target event from the corresponding data memory space according to the address information of the data memory space of the data;
and acquire the data corresponding to the identification information of the data from the event memory space corresponding to the address information of the event memory space.
19. The apparatus of claim 16, further comprising:
and the data processing module is used for processing the data according to a preset processing rule.
20. The apparatus of any one of claims 16-19, wherein the data is picture, voice, or text data.
21. A data concurrent processing system, characterized by comprising a data management device and a plurality of data sending devices, wherein the plurality of data sending devices work concurrently in multiple threads; each data sending device is in communication connection with the data management device; the data management device is the data management apparatus as claimed in any one of claims 11-15; and each data sending device is the data transmission apparatus as claimed in any one of claims 16-20.
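As a rough, self-contained illustration of how the system of claim 21 could be wired together, the toy C++ program below lets two "sending devices" (threads) drain a shared queue in parallel while the management side waits for the atomic parameter to reach N. The thread count, the console output standing in for network transmission, and all identifiers are assumptions made for this sketch.

#include <atomic>
#include <cstddef>
#include <deque>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

struct EventMemory {
    std::vector<std::string> data;
    std::atomic<int> sent_count{0};
};
struct DataMemory {
    std::size_t data_id;
    EventMemory* event;
};

int main() {
    // Data management side: event memory space, data memory spaces, data queue.
    EventMemory event;
    event.data = {"picture-0", "voice-1", "text-2", "picture-3"};   // N = 4
    std::vector<DataMemory> spaces;
    spaces.reserve(event.data.size());                              // keep addresses stable
    std::deque<DataMemory*> queue;
    std::mutex queue_mutex;
    for (std::size_t i = 0; i < event.data.size(); ++i) {
        spaces.push_back(DataMemory{i, &event});
        queue.push_back(&spaces.back());
    }

    // Several data sending devices working concurrently.
    auto sender = [&]() {
        for (;;) {
            DataMemory* info = nullptr;
            {
                std::lock_guard<std::mutex> lock(queue_mutex);
                if (queue.empty()) return;
                info = queue.front();
                queue.pop_front();
            }
            // "Send" the data item, then increment the atomic parameter.
            std::cout << "sent " << info->event->data[info->data_id] << '\n';
            info->event->sent_count.fetch_add(1, std::memory_order_acq_rel);
        }
    };
    std::vector<std::thread> devices;
    for (int i = 0; i < 2; ++i) devices.emplace_back(sender);

    // Monitoring step: wait until the parameter equals N.
    while (event.sent_count.load(std::memory_order_acquire) !=
           static_cast<int>(event.data.size())) {
        std::this_thread::yield();
    }
    std::cout << "all data in the target event sent; remaining event information can follow\n";

    for (auto& t : devices) t.join();
    return 0;
}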
22. A computer device, the device comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as claimed in any one of claims 1-5, or the method as claimed in any one of claims 6-9.
23. A computer-readable medium, on which a computer program is stored which, when executed by a processor, carries out the method of any one of claims 1-5 or the method of any one of claims 6-9.
CN201910394225.2A 2019-05-13 2019-05-13 Data concurrent processing method, device and system, computer equipment and readable medium Active CN110231983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910394225.2A CN110231983B (en) 2019-05-13 2019-05-13 Data concurrent processing method, device and system, computer equipment and readable medium

Publications (2)

Publication Number Publication Date
CN110231983A CN110231983A (en) 2019-09-13
CN110231983B true CN110231983B (en) 2022-01-28

Family

ID=67860545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910394225.2A Active CN110231983B (en) 2019-05-13 2019-05-13 Data concurrent processing method, device and system, computer equipment and readable medium

Country Status (1)

Country Link
CN (1) CN110231983B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113411365A (en) * 2020-03-17 2021-09-17 中国移动通信集团山东有限公司 Data processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100367218C (en) * 2006-08-03 2008-02-06 迈普(四川)通信技术有限公司 Multi-kernel parallel first-in first-out queue processing system and method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1774699A (en) * 2003-04-24 2006-05-17 国际商业机器公司 Concurrent access of shared resources
CN102486740A (en) * 2010-12-03 2012-06-06 中国科学院沈阳自动化研究所 Multithreading real-time data processing device and method
CN105786973A (en) * 2016-02-02 2016-07-20 重庆秒盈电子商务有限公司 Concurrent data processing method and system based on big data technology
CN107241281A (en) * 2017-05-27 2017-10-10 上海东土远景工业科技有限公司 A kind of data processing method and its device
CN109617833A (en) * 2018-12-25 2019-04-12 深圳市任子行科技开发有限公司 The NAT Data Audit method and system of multithreading user mode network protocol stack system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
An adaptive data chunk scheduling for concurrent multipath transfer; Lal Pratap Verma; Computer Standards & Interfaces; 2017-05-31; Vol. 52; pp. 97-104 *

Also Published As

Publication number Publication date
CN110231983A (en) 2019-09-13

Similar Documents

Publication Publication Date Title
CN107766148B (en) Heterogeneous cluster and task processing method and device
CN109951547B (en) Transaction request parallel processing method, device, equipment and medium
CN111258744A (en) Task processing method based on heterogeneous computation and software and hardware framework system
US11175940B2 (en) Scheduling framework for tightly coupled jobs
CN110825440B (en) Instruction execution method and device
CN113641457A (en) Container creation method, device, apparatus, medium, and program product
US9389997B2 (en) Heap management using dynamic memory allocation
CN113037529B (en) Reserved bandwidth allocation method, device, equipment and storage medium
CN110673959A (en) System, method and apparatus for processing tasks
CN110851276A (en) Service request processing method, device, server and storage medium
US9513660B2 (en) Calibrated timeout interval on a configuration value, shared timer value, and shared calibration factor
US9473561B2 (en) Data transmission for transaction processing in a networked environment
CN110231983B (en) Data concurrent processing method, device and system, computer equipment and readable medium
CN109819674B (en) Computer storage medium, embedded scheduling method and system
US9990303B2 (en) Sharing data structures between processes by semi-invasive hybrid approach
CN110083357B (en) Interface construction method, device, server and storage medium
CN112241324B (en) Memory management method and device
US9703601B2 (en) Assigning levels of pools of resources to a super process having sub-processes
CN110187987B (en) Method and apparatus for processing requests
CN113791876A (en) System, method and apparatus for processing tasks
CN110825461B (en) Data processing method and device
CN116804915B (en) Data interaction method, processor, device and medium based on memory
CN111782426B (en) Method and device for processing client tasks and electronic equipment
CN111782482B (en) Interface pressure testing method and related equipment
CN109918209B (en) Method and equipment for communication between threads

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant