CN109753479B - Data issuing method, device, equipment and medium - Google Patents
Abstract
The application provides a data issuing method, apparatus, device, and medium. A circular queue backed by pre-allocated memory is constructed for data issuing, and the pre-allocated memory is divided into a plurality of segments so that data is enqueued in corresponding blocks. The method comprises at least the following steps: acquiring an enqueue interface of the circular queue; an enqueue thread, using a spin lock, controls enqueuing of the data to be issued through the enqueue interface via a first virtual processor; and a dequeue thread, using another lock independent of the spin lock, controls dequeuing of the data in the circular queue via a second virtual processor, to complete data issuing. With the embodiments of the application, enqueuing and dequeuing use mutually independent locks and virtual processors, decoupling the enqueue thread from the dequeue thread; dividing the queue memory into segments makes it possible to dequeue in units matched to the enqueued block size and to parse dequeued data block by block, which helps improve the reliability and efficiency of configuration data issuing.
Description
Technical Field
The present application relates to the field of computer network technologies, and in particular to a data issuing method, apparatus, device, and medium.
Background
With the development of network technology, high concurrency, high throughput, and low latency have become important indicators of network device performance. Compared with a traditional Central Processing Unit (CPU), a Field-Programmable Gate Array (FPGA) offers relatively high performance for processing packets, but falls short when handling the complex and varied configuration of processing devices.
At present, to exploit the respective advantages of the FPGA and the CPU, an "FPGA + CPU" hardware architecture may be adopted: on one hand, the characteristics of the FPGA improve product performance; on the other hand, the characteristics of the CPU are used to process and store various configuration data, after which a driver interface is called directly and the configuration data is issued to the FPGA through a Peripheral Component Interconnect Express (PCIe) channel.
Based on this, a more reliable and efficient scheme for issuing configuration data is needed.
Disclosure of Invention
In view of this, the present application provides a data issuing method, apparatus, device, and medium to issue configuration data more reliably and efficiently.
Specifically, the method is realized through the following technical scheme:
A data issuing method, in which a circular queue backed by pre-allocated memory is constructed for data issuing, the pre-allocated memory being divided into a plurality of segments so that data is enqueued in corresponding blocks, the method comprising:
acquiring an enqueue interface of the circular queue;
an enqueue thread, using a spin lock, controlling enqueuing of the data to be issued through the enqueue interface via a first virtual processor;
and a dequeue thread, using another lock independent of the spin lock, controlling dequeuing of the data in the circular queue via a second virtual processor, to complete data issuing.
A data issuing apparatus, in which a circular queue backed by pre-allocated memory is constructed for data issuing, the pre-allocated memory being divided into a plurality of segments so that data is enqueued in corresponding blocks, the apparatus comprising:
an acquisition module, which acquires an enqueue interface of the circular queue;
a first control module, by which an enqueue thread, using a spin lock, controls enqueuing of the data to be issued through the enqueue interface via a first virtual processor;
and a second control module, by which a dequeue thread, using another lock independent of the spin lock, controls dequeuing of the data in the circular queue via a second virtual processor, to complete data issuing.
A data issuing device, in which a circular queue backed by pre-allocated memory is constructed for data issuing, the pre-allocated memory being divided into a plurality of segments so that data is enqueued in corresponding blocks, the device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquire an enqueue interface of the circular queue;
control, by an enqueue thread using a spin lock, enqueuing of the data to be issued through the enqueue interface via a first virtual processor;
and control, by a dequeue thread using another lock independent of the spin lock, dequeuing of the data in the circular queue via a second virtual processor, to complete data issuing.
A non-volatile computer storage medium for data issuing, storing computer-executable instructions, wherein a circular queue backed by pre-allocated memory is constructed for data issuing, the pre-allocated memory being divided into a plurality of segments so that data is enqueued in corresponding blocks, and the computer-executable instructions are configured to:
acquire an enqueue interface of the circular queue;
control, by an enqueue thread using a spin lock, enqueuing of the data to be issued through the enqueue interface via a first virtual processor;
and control, by a dequeue thread using another lock independent of the spin lock, dequeuing of the data in the circular queue via a second virtual processor, to complete data issuing.
According to the technical scheme above, enqueuing and dequeuing can use mutually independent locks and virtual processors, decoupling the enqueue thread from the dequeue thread; dividing the queue memory into segments makes it possible to dequeue in units matched to the enqueued block size and to parse dequeued data block by block, which helps improve the reliability and efficiency of configuration data issuing.
Drawings
Fig. 1 is a schematic diagram of the memory partitioning of the pre-allocated circular queue according to some embodiments of the present application;
Fig. 2 is a schematic flowchart of a data issuing method according to some embodiments of the present application;
Fig. 3 is a schematic diagram of an enqueue flow according to some embodiments of the present application;
Fig. 4 is a schematic diagram of a dequeue flow according to some embodiments of the present application;
Fig. 5 is a schematic structural diagram of a data issuing apparatus corresponding to Fig. 2 according to some embodiments of the present application;
Fig. 6 is a schematic structural diagram of a data issuing device corresponding to Fig. 2 according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
In view of the problems described in the Background, the data issuing scheme provided by the present application enables a CPU to issue configuration data to one or more FPGAs reliably and efficiently, and still performs well when the volume of configuration data is large. Data issuing is achieved based on a queue and the corresponding queue access operations. The scheme of the present application is explained in detail below.
In terms of functional structure, the scheme can be divided into two parts: data enqueuing and data dequeuing. In terms of data structure, the queue is a circular queue backed by pre-allocated memory. The circular queue may be organized like an array: a fixed region of memory is divided into a plurality of segments of equal or different lengths, so that small pieces of data are enqueued separately (entering the circular queue) while larger aggregates are dequeued together (exiting the circular queue), improving data transfer performance.
Taking the example of pre-allocated memory being partitioned into multiple segments of the same length, some embodiments of the present application provide a schematic diagram of memory partitioning for circular queue pre-allocation, as shown in fig. 1.
In Fig. 1, the total size of the allocated memory (i.e., the total length of the queue) is 8 MB, which can be divided into 32768 segments of equal length, each 256 B. During enqueuing, data is enqueued in 256 B blocks; during dequeuing, multiple 256 B blocks can be aggregated and dequeued together.
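The layout in Fig. 1 can be sketched as follows. This is a minimal Python model; apart from the 8 MB / 256 B / 32768 figures taken from the text, the class name, method names, and index bookkeeping are illustrative assumptions, not the patent's implementation:

```python
# Circular queue backed by one pre-allocated buffer, divided into
# fixed-length segments so data is enqueued one 256 B block at a time.

SEGMENT_SIZE = 256                         # bytes per segment
TOTAL_SIZE = 8 * 1024 * 1024               # 8 MB pre-allocated for the queue
NUM_SEGMENTS = TOTAL_SIZE // SEGMENT_SIZE  # 32768 segments

class SegmentedRing:
    def __init__(self):
        self.buf = bytearray(TOTAL_SIZE)   # memory allocated once, up front
        self.head = 0                      # next segment index to dequeue
        self.tail = 0                      # next segment index to fill
        self.count = 0                     # segments currently in use

    def enqueue_block(self, block: bytes) -> bool:
        """Enqueue one block into a single segment; fail if full or oversized."""
        if self.count == NUM_SEGMENTS or len(block) > SEGMENT_SIZE:
            return False
        start = self.tail * SEGMENT_SIZE
        self.buf[start:start + len(block)] = block
        self.tail = (self.tail + 1) % NUM_SEGMENTS  # wrap around the ring
        self.count += 1
        return True

assert NUM_SEGMENTS == 32768  # matches the 8 MB / 256 B figures above
```

The tail index wrapping modulo `NUM_SEGMENTS` is what makes the pre-allocated buffer behave as a circular queue.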
Fig. 2 is a flowchart of a data issuing method according to some embodiments of the present application. From the device perspective, the executing entity of this flow may include one or more computing devices — specifically, for example, the CPUs of those devices. From the program perspective, the executing entity may accordingly include multiple processes or threads running on those devices — specifically, at least one enqueue thread and one dequeue thread. The present application does not limit the execution order of the steps in Fig. 2; they may run sequentially or in parallel.
The flow in fig. 2 may include the following steps:
s201: and acquiring an enqueue interface of the circular queue.
In some embodiments of the present application, enqueue operations and dequeue operations may be controlled separately using separate, different threads to improve efficiency.
S202: and controlling the data to be transmitted to be enqueued from the enqueue interface through the first virtual processor by using the spin lock by the enqueue thread.
In some embodiments of the present application, the data to be issued may include configuration data to be issued to the FPGA. The spin lock protects the enqueued data, preventing errors caused by concurrent access and improving reliability. A spin lock does not put its caller to sleep: if the lock is already held by another execution unit, the caller repeatedly checks whether the holder has released it. This allows the enqueue operation to be performed promptly and reduces pointless waiting.
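The spinning behavior described above can be sketched as a toy Python spin lock built on a non-blocking test-and-set. The class and its internals are illustrative assumptions, not the patent's implementation (a real kernel spin lock would use an atomic CPU instruction):

```python
import threading

class SpinLock:
    """Toy spin lock: the caller never sleeps; it keeps polling
    until the current holder releases the lock."""
    def __init__(self):
        # A threading.Lock used purely as an atomic test-and-set flag.
        self._flag = threading.Lock()

    def acquire(self):
        # Busy-wait: repeatedly try a non-blocking acquire instead of sleeping.
        while not self._flag.acquire(blocking=False):
            pass  # spin — "circularly check whether the holder released the lock"

    def release(self):
        self._flag.release()
```

Because `acquire` never blocks in the scheduler, the enqueue thread stays runnable and can enqueue the moment the lock frees — at the cost of burning CPU while it spins, which is why spin locks suit short critical sections like a queue-index update.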
In some embodiments of the present application, before the enqueue operation, the data to be issued may be analyzed — for example, its size and data type may be counted — and the data may then be enqueued in blocks according to the analysis result: data of the same type is placed in the same block where possible, several small items are merged into one block, and so on, making enqueuing more efficient. The analysis result can also serve another purpose: if at least part of the data to be issued is lost because enqueuing fails, corresponding loss information can be generated from the analysis result for diagnosing the problem or retransmitting the data, improving the reliability of data issuing. The loss information includes, for example, packet-loss statistics and detailed logs recording the type and size of the lost data.
S203: and controlling the dequeue of the data in the circular queue by the dequeue thread through a second virtual processor by using another lock independent of the spin lock to finish data transmission.
In some embodiments of the present application, the enqueue thread and the dequeue thread each use a separate, independent lock to guarantee correct ordering. Furthermore, the two threads each use a different virtual processor. For example, the enqueue thread may call a virtual processor allocated to the application to perform enqueuing, while the dequeue thread may call its own dedicated, real-time, high-performance virtual processor to perform dequeuing.
It should be noted that the enqueue and dequeue operations are not necessarily performed continuously; they may also be performed intermittently.
With the method of Fig. 2, enqueuing and dequeuing can use mutually independent locks and virtual processors, decoupling the enqueue thread from the dequeue thread; dividing the queue memory into segments makes it possible to dequeue in units matched to the enqueued block size and to parse dequeued data block by block, which helps improve the reliability and efficiency of configuration data issuing.
Based on the method of Fig. 2, some embodiments of the present application also provide more specific implementations of the method, as well as further embodiments, which are explained below.
In some embodiments of the present application, when the dequeue thread works intermittently, the circular queue may run short of space at a moment when the dequeue thread is not currently dequeuing. In that case, so as not to delay enqueuing, the dequeue operation may be triggered indirectly — or performed actively — by the enqueue thread; for example, the enqueue thread may dequeue by calling and executing a predetermined dequeue function. Accordingly, for step S202, when controlling enqueuing of the data to be issued through the first virtual processor, the method may further: determine whether the circular queue currently has insufficient space, and if so, have the enqueue thread control dequeuing of the data in the circular queue through the first virtual processor.
Further, if the circular queue is determined to have insufficient space, when controlling dequeuing through the first virtual processor, the method may further: determine whether a dequeue operation controlled by the dequeue thread is currently executing; if so, perform the enqueue operation controlled by the enqueue thread immediately afterwards; otherwise, after the dequeue operation controlled by the first virtual processor completes, enter a phase in which the first virtual processor is not released and the enqueue operation is attempted one or more times by the first virtual processor. This allows data to be enqueued efficiently and improves utilization of the circular queue.
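The fallback logic above can be sketched as follows — a toy model in which the enqueue thread, finding the queue full and the dequeue thread idle, dequeues on its own virtual processor and then retries. The queue class, function names, and the retry limit are all illustrative assumptions:

```python
import collections

MAX_RETRIES = 3  # stands in for the "set threshold" (illustrative value)

class Queue:
    """Toy stand-in for the circular queue, with capacity in blocks."""
    def __init__(self, capacity):
        self.items = collections.deque()
        self.capacity = capacity
    def full(self):
        return len(self.items) >= self.capacity

def enqueue_with_fallback(q, block, dequeue_thread_busy, consume):
    """If the queue is full and the dequeue thread is not running,
    the enqueue thread dequeues itself and then retries the enqueue."""
    for _attempt in range(MAX_RETRIES):
        if not q.full():
            q.items.append(block)
            return True
        if not dequeue_thread_busy():
            # Actively dequeue without releasing the first virtual processor.
            if q.items:
                consume(q.items.popleft())
        # else: the dequeue thread is already freeing space; just retry.
    return False  # caller may now drop the data and log the loss
```

The key point mirrored from the text is that the enqueue path never blocks waiting for the dequeue thread: it either finds space, makes space itself, or gives up after a bounded number of attempts.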
In some embodiments of the present application, the first virtual processor may apply a watchdog mechanism. In that case, during the phase in which the first virtual processor is not released, the method may further: if the number of failed enqueue attempts by the first virtual processor exceeds a set threshold, discard at least part of the data to be enqueued and perform a watchdog-feed operation, preventing the first virtual processor from being unschedulable for so long that the watchdog fires and triggers a restart. Alternatively, the data may be retained rather than actively discarded, waiting until enqueuing becomes possible.
More intuitively, some embodiments of the present application further provide a schematic diagram of an enqueuing flow of configuration data, as shown in fig. 3.
The flow in Fig. 3 may be performed by the enqueue thread via the first virtual processor and may include the following steps: acquire the configuration data to be issued; analyze the size and type of the configuration data; when performing the enqueue operation, first determine whether the queue currently has enough space; if so, copy the configuration data into the queue and end the flow; otherwise, actively perform a dequeue operation through the first virtual processor. When actively dequeuing through the first virtual processor, determine whether the dequeue thread is currently executing a dequeue operation: if it is, enter a waiting phase without releasing the first virtual processor and increment the wait count; if the wait count does not exceed a set threshold, keep attempting the enqueue operation, and if it does, discard at least part of the configuration data, record the packet-loss type and count, send an alarm log, and end the flow. If the dequeue thread is not currently dequeuing, continue attempting the enqueue operation.
The scheme of the dequeue portion is further described below.
In some embodiments of the present application, on the dequeue side, a dedicated high-performance thread may be started to monitor the queue: if the queue contains data, a dequeue operation is performed promptly; if not, the dequeue thread may sleep. A dequeue lock is taken whenever dequeuing is performed; because this lock is independent of the one used for enqueuing, interference between the enqueue and dequeue operations is reduced.
In some embodiments of the present application, for step S203, controlling dequeuing of the data in the circular queue may specifically include: if the circular queue currently contains data, determine — in units of the segments of the pre-allocated memory — whether the data length is not less than the total length of a set number of segments, the set number being greater than 1; if so, control dequeuing of data equal in length to that number of segments. This aggregates multiple enqueued data blocks into a single dequeue, improving dequeue efficiency.
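The aggregation rule can be sketched in a few lines. The "set number" of 4 segments below is an illustrative assumption (the text only requires it be greater than 1), as are the function and variable names:

```python
SEGMENT_SIZE = 256   # bytes per segment, as in Fig. 1
AGG_SEGMENTS = 4     # "set number" of segments to aggregate (> 1, illustrative)

def dequeue_aggregated(queued_bytes: bytes):
    """If at least AGG_SEGMENTS full segments are queued, dequeue exactly
    that much in one operation; otherwise dequeue everything available.
    Returns (dequeued, remaining)."""
    batch = AGG_SEGMENTS * SEGMENT_SIZE
    if len(queued_bytes) >= batch:
        return queued_bytes[:batch], queued_bytes[batch:]
    return queued_bytes, b""
```

Because each enqueued block occupies exactly one segment, dequeuing a whole number of segments is what makes "dequeuing according to the size of enqueuing" — and block-by-block parsing of the dequeued data — possible.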
In some embodiments of the present application, in the scenario where a CPU issues configuration data to FPGAs, dequeued data may be replicated according to the number of FPGAs that currently need configuration. This reduces the number of enqueues for identical data, but raises a problem: even configuration data that is private to a designated FPGA would be distributed to every FPGA, hurting issuing efficiency and failing to protect data privacy. For example, the data in the queue may be analyzed at dequeue time: if a long run of consecutive data is private, it is issued only to the designated FPGA. After issuing, the dequeue thread waits for the FPGA to confirm receipt. If the FPGA does not return a confirmation signal within the specified time, the dequeue thread may attempt to resend the same data and record the number of timeouts; if the timeouts exceed a set threshold, the dequeue thread may release the lock and send an alarm log identifying the timed-out FPGA. If a confirmation signal is received before the resend count exceeds the threshold, the dequeue is considered successful and the lock is released.
Based on the analysis in the previous paragraph, for step S203, completing data issuing may include, for example: determining whether the dequeued data contains data designating a particular FPGA; if so, and the consecutive length of that data exceeds a set length, issuing it only to the designated FPGA; otherwise, replicating the dequeued data and issuing a copy to each FPGA currently being configured.
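The routing decision can be sketched as follows. The representation of dequeued data as `(target, payload)` runs, the 512 B "set length", and all names are illustrative assumptions:

```python
SET_LENGTH = 512  # "set length" threshold for consecutive private data (illustrative)

def route_dequeued(data_runs, all_fpgas):
    """data_runs: list of (target, payload) runs, where target is a
    specific FPGA id for private data or None for shared data.
    Returns a list of (fpga_id, payload) issue operations."""
    issues = []
    for target, payload in data_runs:
        if target is not None and len(payload) > SET_LENGTH:
            issues.append((target, payload))   # private run: designated FPGA only
        else:
            for fpga in all_fpgas:             # shared data: replicate to all
                issues.append((fpga, payload))
    return issues
```

Short private runs still get broadcast in this sketch, matching the text's trade-off: only a sufficiently long consecutive run of private data justifies skipping the replicate-to-all fast path.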
More intuitively, some embodiments of the present application also provide a dequeue flow diagram of configuration data, as shown in fig. 4.
The flow in Fig. 4 may be performed by the dequeue thread via the second virtual processor and may include the following steps: the dequeue operation begins; determine whether the queue contains data; if not, the dequeue thread sleeps and the flow ends; if so, check the data length — if it is smaller than a set length (an integral multiple, greater than 1, of the queue segment size; for example, with 256 B segments the set length might be 1 MB), dequeue all the data, and otherwise dequeue 1 MB of data; analyze the dequeued data; according to the analysis result, if the data does not consecutively designate an FPGA, replicate it, issue the configuration data to all FPGAs that need configuration, and end the flow; if the data does consecutively designate an FPGA, issue the configuration data to the designated FPGA only; after sending, wait for the FPGA to confirm receipt — if reception succeeds, the dequeue is considered complete and the flow ends; if it fails, record the failure count and check whether it exceeds a set threshold: if not, retry sending; if so, send an alarm log, consider the dequeue complete, and end the flow.
Based on the same idea, some embodiments of the present application further provide an apparatus, a device, and a non-volatile computer storage medium corresponding to the above method.
Fig. 5 is a schematic structural diagram of a data issuing apparatus corresponding to Fig. 2 according to some embodiments of the present application, in which a dashed box represents an optional module. A circular queue backed by pre-allocated memory is constructed for data issuing, the pre-allocated memory being divided into a plurality of segments so that data is enqueued in corresponding blocks. The apparatus includes:
an obtaining module 501, configured to acquire an enqueue interface of the circular queue;
a first control module 502, by which an enqueue thread, using a spin lock, controls enqueuing of the data to be issued through the enqueue interface via a first virtual processor;
and a second control module 503, by which a dequeue thread, using another lock independent of the spin lock, controls dequeuing of the data in the circular queue via a second virtual processor, to complete data issuing.
Optionally, when controlling enqueuing of the data to be issued through the first virtual processor, the first control module 502 further performs:
if it is determined that the current space in the circular queue is insufficient, the enqueue thread controls dequeuing of the data in the circular queue through the first virtual processor.
Optionally, if it is determined that the current space in the circular queue is insufficient, the first control module 502 further performs, when controlling dequeuing of data in the circular queue through the first virtual processor:
judging whether the dequeuing operation controlled by the dequeuing thread is currently executed or not;
if so, executing the enqueuing operation controlled by the enqueuing thread immediately;
otherwise, after the dequeue operation controlled by the first virtual processor is executed, a stage of not releasing the first virtual processor is entered, in which stage the enqueue operation is attempted by the first virtual processor one or more times.
Optionally, the first virtual processor applies a watchdog mechanism;
in the phase in which the first virtual processor is not released, the first control module 502 further performs:
if the number of times the first virtual processor fails in attempting the enqueue operation exceeds a set threshold, discarding at least part of the data to be enqueued and performing the watchdog-feed operation.
Optionally, the apparatus further comprises:
an analysis module 504, configured to count the size and type of the data to be issued before the enqueue thread, using the spin lock, controls enqueuing of the data through the enqueue interface via the first virtual processor, so that if at least part of the data to be issued is lost due to an enqueue failure, corresponding loss information can be generated for diagnosing the problem or retransmitting the data.
Optionally, the second control module 503 controlling dequeuing of the data in the circular queue specifically includes:
if the circular queue currently contains data, the second control module 503 determines, in units of the segments of the pre-allocated memory, whether the data length is not less than the total length of a set number of segments, the set number being greater than 1;
and if so, controls dequeuing of data equal in length to the set number of segments.
Optionally, the second control module 503 completing data issuing specifically includes:
the second control module 503 determines whether the dequeued data contains data designating a particular field-programmable gate array (FPGA);
if so, and the consecutive length of that data exceeds a set length, the data is issued only to the designated FPGA;
otherwise, the dequeued data is replicated and issued to each FPGA currently eligible for issuing.
Fig. 6 is a schematic structural diagram of a data issuing device corresponding to Fig. 2 according to some embodiments of the present application, in which a circular queue backed by pre-allocated memory is constructed for data issuing, the pre-allocated memory being divided into a plurality of segments so that data is enqueued in corresponding blocks. The device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquire an enqueue interface of the circular queue;
control, by an enqueue thread using a spin lock, enqueuing of the data to be issued through the enqueue interface via a first virtual processor;
and control, by a dequeue thread using another lock independent of the spin lock, dequeuing of the data in the circular queue via a second virtual processor, to complete data issuing.
Some embodiments of the present application provide a non-volatile computer storage medium for data issuing corresponding to Fig. 2, storing computer-executable instructions, wherein a circular queue backed by pre-allocated memory is constructed for data issuing, the pre-allocated memory being divided into a plurality of segments so that data is enqueued in corresponding blocks, and the computer-executable instructions are configured to:
acquire an enqueue interface of the circular queue;
control, by an enqueue thread using a spin lock, enqueuing of the data to be issued through the enqueue interface via a first virtual processor;
and control, by a dequeue thread using another lock independent of the spin lock, dequeuing of the data in the circular queue via a second virtual processor, to complete data issuing.
The embodiments in the present application are described in a progressive manner; identical or similar parts among the embodiments may be referenced against one another, and each embodiment focuses on its differences from the others. In particular, since the apparatus, device, and medium embodiments are substantially similar to the method embodiments, their description is relatively brief, and the relevant points can be found in the description of the method embodiments.
The apparatus, device, and medium provided in the embodiments of the present application correspond one to one with the method, and therefore also have beneficial technical effects similar to those of the corresponding method.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.
Claims (14)
1. A data issuing method, wherein a circular queue of pre-allocated memory for data issuing is constructed and the pre-allocated memory is divided into a plurality of segments so that data is enqueued in corresponding blocks, the method comprising:
acquiring an enqueue interface of the circular queue;
an enqueue thread controlling, through a first virtual processor and using a spin lock, enqueuing of the data to be issued from the enqueue interface, and, if it is determined that the current space in the circular queue is insufficient, the enqueue thread controlling dequeuing of data in the circular queue through the first virtual processor;
and a dequeue thread controlling, through a second virtual processor and using another lock independent of the spin lock, dequeuing of data from the circular queue, to complete the data issuing.
2. The method of claim 1, wherein, when it is determined that the current space in the circular queue is insufficient, the controlling of dequeuing of data in the circular queue through the first virtual processor further comprises:
judging whether a dequeue operation controlled by the dequeue thread is currently being executed;
if so, immediately executing the enqueue operation controlled by the enqueue thread;
otherwise, after the dequeue operation controlled by the first virtual processor has been executed, entering a stage in which the first virtual processor is not released, during which the enqueue operation is attempted by the first virtual processor one or more times.
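The fallback in claim 2 can be sketched roughly as follows. This is a hypothetical Python illustration: a minimal bounded queue stands in for the circular queue, and `dequeue_busy` stands in for the check on whether the dequeue thread is currently running.

```python
from collections import deque

class BoundedQueue:
    """Minimal stand-in for the circular queue (capacity in blocks)."""
    def __init__(self, capacity):
        self.items, self.capacity = deque(), capacity
    def enqueue(self, block):
        if len(self.items) >= self.capacity:
            return False              # insufficient space
        self.items.append(block)
        return True
    def dequeue(self):
        return self.items.popleft() if self.items else None

def enqueue_with_fallback(queue, block, dequeue_busy, max_attempts=3):
    """If the queue is full: retry at once when the dequeue thread is
    already draining, otherwise drain one block ourselves and retry
    without releasing the (virtual) processor."""
    if queue.enqueue(block):
        return True
    if dequeue_busy():                    # dequeue side is already running
        return queue.enqueue(block)       # execute the enqueue immediately
    queue.dequeue()                       # drain one block ourselves
    for _ in range(max_attempts):         # stage without releasing the CPU
        if queue.enqueue(block):
            return True
    return False                          # caller decides what to drop
```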
3. The method of claim 2, wherein the first virtual processor applies a watchdog mechanism;
during the stage in which the first virtual processor is not released, the method further comprises:
if the number of times the first virtual processor fails in attempting the enqueue operation exceeds a set threshold, discarding at least part of the data to be enqueued and performing a watchdog-feeding operation.
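The interaction with the watchdog in claim 3 might look roughly like this. All names are assumptions: `try_once` attempts a single enqueue, and `feed_watchdog` is a hypothetical platform hook that resets the watchdog timer.

```python
def guarded_enqueue(try_once, feed_watchdog, max_failures):
    """Retry the enqueue; after more than max_failures failed attempts,
    drop the pending data and feed the watchdog so the stalled virtual
    processor is not treated as hung and reset."""
    failures = 0
    while not try_once():
        failures += 1
        if failures > max_failures:
            feed_watchdog()       # keep the watchdog timer from expiring
            return False          # at least part of the data is discarded
    return True
```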
4. The method of claim 1, wherein, before the enqueue thread controls, through the first virtual processor and using the spin lock, enqueuing of the data to be issued from the enqueue interface, the method further comprises:
counting the size and type of the data to be issued, so that if at least part of the data to be issued is lost due to an enqueue failure, corresponding loss information is generated for troubleshooting or data retransmission.
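The accounting in claim 4 can be sketched as follows (hypothetical Python; `enqueue` is any function that reports whether the enqueue succeeded, and the record fields are illustrative):

```python
def enqueue_with_accounting(enqueue, block, block_type, loss_log):
    """Record size and type before the enqueue is attempted, so a failed
    enqueue leaves a loss record for troubleshooting or retransmission."""
    record = {"size": len(block), "type": block_type}
    if enqueue(block):
        return True
    loss_log.append(record)       # data was lost: keep its description
    return False
```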
5. The method according to claim 1, wherein the controlling dequeuing of data in the circular queue specifically comprises:
if the circular queue currently contains data, judging, with reference to the segments of the pre-allocated memory, whether the length of the data is not less than the total length of a set number of the segments, the set number being greater than 1;
and if so, controlling dequeuing of data whose total length equals that of the set number of segments.
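The batched dequeue of claim 5 can be sketched as follows (hypothetical Python; a `deque` of fixed-size segments stands in for the circular queue):

```python
from collections import deque

def batch_dequeue(segments, batch_count):
    """Dequeue only once at least batch_count segments (batch_count > 1)
    are queued, then drain exactly that many in a single batch."""
    if batch_count <= 1 or len(segments) < batch_count:
        return []                 # not enough buffered data yet
    return [segments.popleft() for _ in range(batch_count)]
```

Draining several segments at a time amortizes the per-dequeue locking cost over a larger payload.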
6. The method of claim 5, wherein completing the data issuing specifically comprises:
judging whether the dequeued data contains data designating a Field Programmable Gate Array (FPGA);
if so, and the contiguous length of the contained data is greater than a set length, issuing the contained data only to the designated FPGA;
and otherwise, copying the dequeued data and issuing it to each of the FPGAs to which issuing is currently possible.
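The routing decision of claim 6 can be sketched as follows (hypothetical Python; blocks are represented as dicts with an optional target FPGA, and all names and thresholds are assumptions):

```python
def route_dequeued(blocks, designated_fpga, reachable_fpgas, min_contiguous):
    """If the dequeued data contains blocks for a designated FPGA and
    their contiguous length exceeds a threshold, issue them only to that
    FPGA; otherwise copy everything to every reachable FPGA."""
    targeted = [b for b in blocks if b.get("fpga") == designated_fpga]
    contiguous_len = sum(len(b["data"]) for b in targeted)
    if targeted and contiguous_len > min_contiguous:
        return {designated_fpga: [b["data"] for b in targeted]}
    payload = [b["data"] for b in blocks]          # copy to all FPGAs
    return {fpga: list(payload) for fpga in reachable_fpgas}
```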
7. A data issuing apparatus, wherein a circular queue of pre-allocated memory for data issuing is constructed and the pre-allocated memory is divided into a plurality of segments so that data is enqueued in corresponding blocks, the apparatus comprising:
an acquisition module, configured to acquire an enqueue interface of the circular queue;
a first control module, configured to enable an enqueue thread to control, through a first virtual processor and using a spin lock, enqueuing of the data to be issued from the enqueue interface, and, if it is determined that the current space in the circular queue is insufficient, to enable the enqueue thread to control dequeuing of data in the circular queue through the first virtual processor;
and a second control module, configured to enable a dequeue thread to control, through a second virtual processor and using another lock independent of the spin lock, dequeuing of data from the circular queue, to complete the data issuing.
8. The apparatus of claim 7, wherein, when it is determined that the current space in the circular queue is insufficient, the first control module, in controlling dequeuing of data in the circular queue through the first virtual processor, further performs:
judging whether a dequeue operation controlled by the dequeue thread is currently being executed;
if so, immediately executing the enqueue operation controlled by the enqueue thread;
otherwise, after the dequeue operation controlled by the first virtual processor has been executed, entering a stage in which the first virtual processor is not released, during which the enqueue operation is attempted by the first virtual processor one or more times.
9. The apparatus of claim 8, wherein the first virtual processor applies a watchdog mechanism;
during the stage in which the first virtual processor is not released, the first control module further performs:
if the number of times the first virtual processor fails in attempting the enqueue operation exceeds a set threshold, discarding at least part of the data to be enqueued and performing a watchdog-feeding operation.
10. The apparatus of claim 7, further comprising:
an analysis module, configured to count the size and type of the data to be issued before the enqueue thread controls, through the first virtual processor and using the spin lock, enqueuing of the data to be issued from the enqueue interface, so that if at least part of the data to be issued is lost due to an enqueue failure, corresponding loss information is generated for troubleshooting or data retransmission.
11. The apparatus according to claim 7, wherein the second control module controls dequeuing of data in the circular queue, specifically comprising:
if the circular queue currently contains data, the second control module judges, with reference to the segments of the pre-allocated memory, whether the length of the data is not less than the total length of a set number of the segments, the set number being greater than 1;
and if so, controls dequeuing of data whose total length equals that of the set number of segments.
12. The apparatus of claim 11, wherein the second control module completing the data issuing specifically includes:
the second control module judging whether the dequeued data contains data designating a Field Programmable Gate Array (FPGA);
if so, and the contiguous length of the contained data is greater than a set length, issuing the contained data only to the designated FPGA;
and otherwise, copying the dequeued data and issuing it to each of the FPGAs to which issuing is currently possible.
13. A data issuing device, wherein a circular queue of pre-allocated memory for data issuing is constructed and the pre-allocated memory is divided into a plurality of segments so that data is enqueued in corresponding blocks, the device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:
acquiring an enqueue interface of the circular queue;
an enqueue thread controlling, through a first virtual processor and using a spin lock, enqueuing of the data to be issued from the enqueue interface, and, if it is determined that the current space in the circular queue is insufficient, the enqueue thread controlling dequeuing of data in the circular queue through the first virtual processor;
and a dequeue thread controlling, through a second virtual processor and using another lock independent of the spin lock, dequeuing of data from the circular queue, to complete the data issuing.
14. A non-volatile computer storage medium for data issuing, storing computer-executable instructions, wherein a circular queue of pre-allocated memory for data issuing is constructed and the pre-allocated memory is partitioned into a plurality of segments so that data is enqueued in corresponding blocks, the computer-executable instructions being configured to:
acquiring an enqueue interface of the circular queue;
an enqueue thread controlling, through a first virtual processor and using a spin lock, enqueuing of the data to be issued from the enqueue interface, and, if it is determined that the current space in the circular queue is insufficient, the enqueue thread controlling dequeuing of data in the circular queue through the first virtual processor;
and a dequeue thread controlling, through a second virtual processor and using another lock independent of the spin lock, dequeuing of data from the circular queue, to complete the data issuing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811628680.6A CN109753479B (en) | 2018-12-28 | 2018-12-28 | Data issuing method, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109753479A CN109753479A (en) | 2019-05-14 |
CN109753479B true CN109753479B (en) | 2021-05-25 |
Family
ID=66404272
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109753479B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112732448A (en) * | 2021-01-18 | 2021-04-30 | 国汽智控(北京)科技有限公司 | Memory space allocation method and device and computer equipment |
CN113411392B (en) * | 2021-06-16 | 2022-05-10 | 中移(杭州)信息技术有限公司 | Resource issuing method, device, equipment and computer program product |
CN118433085B (en) * | 2024-07-05 | 2024-10-08 | 成都玖锦科技有限公司 | Excitation issuing data processing method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103530130A (en) * | 2013-10-28 | 2014-01-22 | 迈普通信技术股份有限公司 | Method and equipment for implementing multiple-input and multiple-output queues |
CN107122457A (en) * | 2017-04-26 | 2017-09-01 | 努比亚技术有限公司 | Record the method and its device, computer-readable medium of networks congestion control data |
CN108733344A (en) * | 2018-05-28 | 2018-11-02 | 深圳市道通智能航空技术有限公司 | Data read-write method, device and circle queue |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101853149A (en) * | 2009-03-31 | 2010-10-06 | 张力 | Method and device for processing single-producer/single-consumer queue in multi-core system |
US8848723B2 (en) * | 2010-05-18 | 2014-09-30 | Lsi Corporation | Scheduling hierarchy in a traffic manager of a network processor |
CN102591843B (en) * | 2011-12-30 | 2014-07-16 | 中国科学技术大学苏州研究院 | Inter-core communication method for multi-core processor |
CN104158685B (en) * | 2014-08-20 | 2017-09-19 | 深圳市顺恒利科技工程有限公司 | Method and apparatus for the integrated linkage of facility information |
CN105824780A (en) * | 2016-04-01 | 2016-08-03 | 浪潮电子信息产业股份有限公司 | Parallel development method based on single machine and multiple FPGA |
CN106648933A (en) * | 2016-12-26 | 2017-05-10 | 北京奇虎科技有限公司 | Consuming method and device of message queue |
CN108632171B (en) * | 2017-09-07 | 2020-03-31 | 视联动力信息技术股份有限公司 | Data processing method and device based on video network |
CN108848006B (en) * | 2018-08-24 | 2020-11-06 | 杭州迪普科技股份有限公司 | Port state monitoring method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||