CN117076139B - Data processing method and related equipment

Data processing method and related equipment

Info

Publication number
CN117076139B
CN117076139B (Application CN202311337936.9A)
Authority
CN
China
Prior art keywords
thread
target buffer
data
buffer area
semaphore
Prior art date
Legal status
Active
Application number
CN202311337936.9A
Other languages
Chinese (zh)
Other versions
CN117076139A
Inventor
李洋
徐玲
韩云飞
杨浩晗
陈昆
Current Assignee
Beijing Rongwei Technology Co ltd
Original Assignee
Beijing Rongwei Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Rongwei Technology Co ltd filed Critical Beijing Rongwei Technology Co ltd
Priority to CN202311337936.9A
Publication of CN117076139A
Application granted
Publication of CN117076139B
Status: Active
Anticipated expiration

Classifications

    • G06F 9/5038 Allocation of resources (e.g. of the CPU) to service a request, considering the execution order of a plurality of tasks
    • G06F 9/5022 Mechanisms to release resources
    • G06F 9/544 Interprogram communication: buffers, shared memory, pipes
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a data processing method and related equipment. A plurality of buffer areas with consistent capacity are built in a memory in advance, and a plurality of threads are established. The method comprises the following steps: receiving a data stream to be processed, and sequentially copying each piece of sequentially received data to a target buffer area based on a first thread among the threads; executing the first thread to process the data in each target buffer area in turn, and setting a first semaphore after each execution is completed to obtain a set semaphore; if a set semaphore is detected, executing the current target thread to process the data in the target buffer area corresponding to the set semaphore, and setting the first semaphore again after execution is completed; and when no unexecuted thread corresponding to a target buffer area remains among the threads, emptying that target buffer area. In this way only the first thread performs a memory copy, and it does so only once per piece of data; every thread executes independently in parallel to complete its data processing operation, which improves the efficiency of processing a data stream in the memory.

Description

Data processing method and related equipment
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a data processing method and related devices.
Background
After satellite data are received, they must undergo processing such as frame synchronization, de-interleaving, channel output and check-data rejection, and the processed data are then stored on a local hard disk or distributed over a network to receiving equipment at the back end. The current mainstream approach completes frame synchronization, de-interleaving, channel output, check-data rejection and similar operations in an FPGA and transmits the processed data to an upper computer, which performs the basic disk-storage or forwarding operations. As the processing capability of upper computers grows stronger, the trend is for future data processing to be completed mainly by the upper computer, so as to reduce the development difficulty of the FPGA.
At present, for the upper computer to complete every data processing operation, multiple ring buffers are required: once the data in one ring buffer have been processed, they are transferred into the next ring buffer by a memory copy before the next data processing operation is performed, so every data processing operation involves copying and rearranging memory. When a high-speed data stream is being received (typically at a rate of 10 Gbps), these memory copies introduce processing delay and reduce data processing efficiency, making it impossible to perform complex data processing on the high-speed stream efficiently.
Therefore, how to improve the processing efficiency of data stream processing in the memory is a technical problem to be solved at present.
Disclosure of Invention
The embodiment of the application provides a data processing method and related equipment, which are used for improving the processing efficiency of data stream processing in a memory.
In a first aspect, a data processing method is provided, in which a plurality of buffers with consistent capacities are built in advance in a memory, and a plurality of threads are built, each thread is used for executing a corresponding preset data processing operation, and an execution sequence of each thread is determined by an execution sequence of each preset data processing operation, where the method includes: receiving a data stream to be processed, and sequentially copying each received data sequentially received from the data stream to each target buffer area based on a first thread in each thread, wherein the data volume of the received data is a preset data volume, and the target buffer area is an unused buffer area selected from each buffer area; executing the first thread to sequentially process the data in each target buffer area, and setting the first semaphore after each execution is completed to obtain a set semaphore; if the set semaphore is detected, executing a current target thread to process data in a target buffer zone corresponding to the set semaphore, and setting the first semaphore after execution is completed, wherein the current target thread is the next thread of the recently executed thread of the target buffer zone; and when the unexecuted threads corresponding to the target buffer area do not exist in the threads, the target buffer area is emptied.
In a second aspect, a data processing apparatus is provided, in which a plurality of buffers with identical capacities are built in advance in a memory, and a plurality of threads are built, each of the threads being configured to execute a corresponding preset data processing operation, an execution order of each of the threads being determined by an execution order of each of the preset data processing operations, the apparatus comprising: the receiving module is used for receiving a data stream to be processed, and sequentially copying each received data received from the data stream to each target buffer area based on the first thread in each thread, wherein the data volume of the received data is a preset data volume, and the target buffer area is an unused buffer area selected from the buffer areas; the first execution module is used for executing the first thread to process the data in each target buffer area in sequence, and setting the first semaphore after each execution is completed to obtain a set semaphore; the second execution module is used for executing the current target thread to process the data in the target buffer area corresponding to the set semaphore if the set semaphore is detected, and setting the first semaphore after the execution is completed, wherein the current target thread is the next thread of the recently executed thread of the target buffer area; and the emptying module is used for emptying the target buffer zone when the unexecuted thread corresponding to the target buffer zone does not exist in each thread.
In a third aspect, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the data processing method of the first aspect via execution of the executable instructions.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, implements the data processing method according to the first aspect.
By applying the technical scheme, a plurality of buffer areas with consistent capacity are built in a memory in advance, and a plurality of threads are built, wherein each thread is used for executing a corresponding preset data processing operation, the execution sequence of each thread is determined by the execution sequence of each preset data processing operation, and the method comprises the following steps: receiving a data stream to be processed, and sequentially copying each received data sequentially received from the data stream to each target buffer area based on a first thread in each thread, wherein the data volume of the received data is a preset data volume, and the target buffer area is one unused buffer area selected from each buffer area; the first thread is executed to sequentially process the data in each target buffer area, and after each execution is completed, the first semaphore is set to obtain a set semaphore; if the set semaphore is detected, executing the current target thread to process the data in the target buffer area corresponding to the set semaphore, and setting the first semaphore after the execution is completed, wherein the current target thread is the next thread of the recently executed thread of the target buffer area; when the unexecuted thread corresponding to the target buffer area does not exist in each thread, the target buffer area is emptied, so that only the first thread is required to perform memory copying once, the processing time delay is reduced, each thread is independently executed in parallel, each data processing operation is completed, and the processing efficiency of data stream processing in the memory is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a flow chart of a data processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a data processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It is noted that other embodiments of the present application will be readily apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise construction set forth herein below and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The subject application is operational with numerous general purpose or special purpose computing device environments or configurations, for example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor devices, and distributed computing environments that include any of the above systems or devices.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiment of the application provides a data processing method, a plurality of buffer areas with consistent capacity are built in a memory in advance, and a plurality of threads are built, wherein each thread is used for executing a corresponding preset data processing operation, the execution sequence of each thread is determined by the execution sequence of each preset data processing operation, as shown in fig. 1, and the method comprises the following steps:
step S101, receiving a data stream to be processed, and sequentially copying each received data sequentially received from the data stream to each target buffer area based on a first thread in each thread, where the data amount of the received data is a preset data amount, and the target buffer area is an unused buffer area selected from the buffer areas.
In this embodiment, a plurality of buffers with consistent capacity are built in the memory in advance, specifically, a buffer structure may be built in the memory first, and then the buffer structure is divided into a plurality of buffers according to the preset capacity, where each buffer corresponds to one memory block. For example, the number of buffers may be 16, and the capacity of each buffer corresponding to a memory block may be 4MB. According to a plurality of preset data processing operations in a data stream processing flow, a plurality of threads are established, each thread is used for executing a corresponding preset data processing operation, and the execution sequence of each thread is determined by the execution sequence of each preset data processing operation. For example, the preset data processing operation includes an operation 1, an operation 2 and an operation 3 which are sequentially executed, and the execution sequence of each thread is as follows: operation 1 is performed by execution thread 1, operation 2 is performed by execution thread 2, and operation 3 is performed by execution thread 3.
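The division of a pre-built memory region into equal-capacity blocks can be sketched as follows. This is an illustration only, not the patent's implementation: the class and method names are ours, block "head addresses" are modelled as byte offsets, and the demo uses small sizes in place of the 16 buffers of 4 MB given as the example above.

```python
# Hypothetical sketch of the pre-built buffer pool described above:
# one contiguous region divided into equal-capacity blocks.
BLOCK_COUNT = 16                 # example value from the embodiment
BLOCK_SIZE = 4 * 1024 * 1024     # 4 MB per buffer, as in the embodiment

class BufferPool:
    def __init__(self, count=BLOCK_COUNT, size=BLOCK_SIZE):
        self.block_size = size
        # One backing region; each buffer is a view over one block.
        self.region = bytearray(count * size)
        # "Head addresses" are modelled here as block offsets.
        self.head_addrs = [i * size for i in range(count)]

    def block(self, head_addr):
        # Return a writable view over the block starting at head_addr.
        return memoryview(self.region)[head_addr:head_addr + self.block_size]

pool = BufferPool(count=4, size=1024)  # smaller numbers for the demo
buf = pool.block(pool.head_addrs[2])
buf[:5] = b"hello"
```

Because every block is a view over one backing region, handing a stage a head address is enough for it to locate and process its data in place, which is what lets later stages avoid further copies.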
The data stream to be processed is received; it may be a satellite data stream or a data stream of another communication type (such as mobile communication), and the satellite data stream may be a satellite data stream transmitted between satellites. If the received data reach the preset data amount, the received data are copied to a preset target buffer based on the first thread among the threads, and data continue to be received from the stream, so that a plurality of pieces of received data are received in sequence; the first thread can thus copy each piece of received data to each target buffer in turn, after which the data processing operations on the data in the target buffers are realized by executing each thread in sequence. Each target buffer is an unused buffer selected from the buffers, i.e., the target buffer is an idle buffer. To ensure that each preset data processing operation is performed accurately, in some embodiments the preset data amount is an integer multiple of the frame length.
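The "preset data amount" rule above can be illustrated with a small chunking helper. The frame length and chunk size below are invented for the demo; the only property taken from the text is that each chunk handed to the first thread is an integer multiple of the frame length, with any shorter tail left pending.

```python
# Illustrative only: accumulate incoming bytes until one "preset data
# amount" (an integer multiple of the frame length) is available, then
# hand that chunk to the first thread for its single memory copy.
FRAME_LEN = 128
PRESET_AMOUNT = 8 * FRAME_LEN    # integer multiple of the frame length

def chunk_stream(stream_bytes, preset=PRESET_AMOUNT):
    """Split a received byte stream into preset-sized chunks; a tail
    shorter than one preset amount stays pending until more data arrive."""
    chunks = [stream_bytes[i:i + preset]
              for i in range(0, len(stream_bytes) - preset + 1, preset)]
    pending = stream_bytes[len(chunks) * preset:]
    return chunks, pending

chunks, pending = chunk_stream(b"\x00" * (PRESET_AMOUNT * 3 + 100))
```

Keeping each chunk frame-aligned means no frame ever straddles two target buffers, so each downstream operation can treat its buffer as self-contained.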
It will be appreciated that the first thread is the thread that needs to be executed first among the threads, for example, thread 1, thread 2 and thread 3 are executed sequentially, and then thread 1 is the first thread.
Optionally, if the data stream is a satellite data stream, the preset data processing operations include at least two of the following: frame synchronization, de-interleaving, channel output, check-data rejection, time-code addition, frame-format arrangement and data packing. Those skilled in the art can flexibly select the corresponding preset data processing operations according to different data processing flows.
Step S102, executing the first thread to process the data in each target buffer area sequentially, and setting the first semaphore after each execution is completed, so as to obtain a set semaphore.
And executing the first thread to sequentially process the data in each target buffer, for example, when the first thread is a frame synchronization thread, performing frame synchronization on the data in the target buffer by executing the first thread. After the first thread is completed in each execution, the first semaphore is set to determine that the next thread can be executed on the data in the target buffer, and processing of the data in the target buffer is continued.
The Semaphore (Semaphore) is a mechanism for realizing communication between tasks, and can realize synchronization between tasks or mutual exclusion access of critical resources. Semaphores can be used to ensure that two or more critical code segments are not invoked concurrently in a multi-threaded environment. Before entering a critical code segment, the thread must acquire a semaphore; once the critical code segment is complete, the thread must release the semaphore, i.e., set the semaphore.
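The set/detect pattern described above can be shown with a counting semaphore in a few lines. This is a minimal sketch in our own naming, not the patent's code: the downstream stage blocks until the upstream stage "sets" (releases) the semaphore after finishing a buffer.

```python
import threading

# Minimal sketch of the set/detect pattern: the producing stage "sets"
# the semaphore after finishing a buffer, and the next stage blocks
# until it detects a set semaphore.
sem = threading.Semaphore(0)   # first semaphore, initially not set
results = []

def next_stage():
    sem.acquire()              # wait until the semaphore is set
    results.append("stage-2 ran after stage-1")

t = threading.Thread(target=next_stage)
t.start()
results.append("stage-1 finished a buffer")
sem.release()                  # "set" the semaphore
t.join()
```

Because the semaphore counts releases, several buffers finished in quick succession simply accumulate as a count, and the next stage drains them one acquire at a time.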
It will be appreciated that after the first thread copies the first received data to the target buffer, steps S101 and S102 proceed in parallel.
Step S103, if the set semaphore is detected, executing a current target thread to process data in a target buffer corresponding to the set semaphore, and setting the first semaphore after execution is completed, where the current target thread is a next thread of a recently executed thread of the target buffer.
In this embodiment, the current target thread is the next thread of the most recently executed threads in the target buffer, that is, the current target thread is each thread after the first thread in sequence. If the set semaphore is detected, determining and executing the current target thread, processing data in a target buffer area corresponding to the set semaphore, and setting the first semaphore after the execution is completed. And continuing to execute a new current target thread, processing data in the target buffer area, and setting the first semaphore after each execution is completed to obtain a new set semaphore.
For example, suppose the threads are thread 1, thread 2 and thread 3, executed in sequence. Thread 1 is executed to process the data in the target buffer, and the first semaphore is set after execution is completed, yielding a set semaphore. Thread 2, as the next thread after thread 1, is then the current target thread: when the set semaphore is detected, thread 2 is executed to process the data in the target buffer corresponding to that semaphore, and the first semaphore is set again after execution is completed, yielding a new set semaphore. At this point thread 3 is the current target thread; thread 3 is executed to process the data in the target buffer corresponding to the set semaphore, and the first semaphore is set after execution is completed.
It can be understood that step S103 is repeated until all threads corresponding to the current target buffer are executed, and after each thread is started, steps S101 to S103 are simultaneously executed, and data of each target buffer are continuously processed, so that parallel execution of each thread is realized.
In some embodiments of the present application, each thread corresponds to a linked list, and after executing a thread previous to the current target thread and setting the first semaphore, the method further includes:
the first address of the memory block of the target buffer area corresponding to the set semaphore is issued to the first empty node in the linked list of the current target thread;
the current target thread determines the target buffer area according to the memory block head address in the head node of the corresponding linked list.
In this embodiment, a linked list is set in advance for each thread, where the linked list is a discontinuous, non-sequential storage structure on a physical storage unit, and the logical sequence of data elements is implemented by using the pointer link sequence in the linked list. The linked list is made up of a series of nodes, which can be dynamically generated at runtime. After the previous thread of the current target thread is executed and the first semaphore is set, determining a memory block head address of a target buffer area corresponding to the set semaphore, issuing the memory block head address to a first empty node in a linked list of the current target thread, inquiring the linked list corresponding to the current target thread by the current target thread, and determining the target buffer area according to the memory block head address if the memory block head address exists in the first node of the linked list. If the linked list is empty, continuing to detect whether the set semaphore exists. Therefore, the current target thread can more efficiently determine the target buffer area, and the data processing efficiency is further improved.
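The per-thread linked list can be modelled with a simple FIFO structure (a `deque` stands in for the linked list here; the class and method names are ours, not the patent's). A stage publishes a finished buffer's head address into the next stage's list, and each stage always works from the head node of its own list, polling the semaphore again when the list is empty.

```python
from collections import deque

# Hypothetical sketch of the per-thread linked list: after a stage
# finishes a buffer, it issues the buffer's memory-block head address
# to the first empty node of the next stage's list; that stage always
# works from the head node of its own list.
class Stage:
    def __init__(self, name):
        self.name = name
        self.pending = deque()   # stands in for the linked list

    def publish_to(self, next_stage, head_addr):
        # Append the head address as the first empty node of the
        # next stage's linked list.
        next_stage.pending.append(head_addr)

    def take(self):
        # Head node of the linked list, or None if the list is empty
        # (in which case the stage keeps detecting the semaphore).
        return self.pending.popleft() if self.pending else None

s1, s2 = Stage("frame-sync"), Stage("de-interleave")
s1.publish_to(s2, 0x1000)
s1.publish_to(s2, 0x2000)
```

The FIFO order matters: it guarantees that each stage processes buffers in the same order the first thread filled them, so the stream's ordering survives the parallelism.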
In some embodiments of the present application, after the current target thread is executed and the first semaphore is set, the method further includes:
and in the linked list of the current target thread, the node corresponding to the target buffer area is emptied.
In this embodiment, after the current target thread is executed and the first semaphore is set, it is indicated that the data processing operation corresponding to the current target thread has been completed on the data in the target buffer zone, and in the linked list of the current target thread, the node corresponding to the target buffer zone is emptied, so that updating of the linked list is implemented, and thus data processing is performed more accurately.
Step S104, when there is no unexecuted thread corresponding to the target buffer in each thread, the target buffer is emptied.
In this embodiment, when there is no unexecuted thread corresponding to the target buffer in each thread, it is indicated that each preset data processing operation on the data in the target buffer has been completed, and the target buffer is emptied to store new received data based on the target buffer.
It will be appreciated that step S104 is performed for each target buffer in turn, and in some embodiments the emptied target buffer is reused in step S101. As shown in fig. 2, thread 1 copies each piece of received data, as data packets such as packet 1, packet 2, packet 3 and packet 4, to the target buffers; while thread 1 is processing packet 3, thread 2 is processing packet 2 and thread 3 is processing packet 1. Once every thread has been executed on the data in a target buffer, that buffer is emptied; new packets continuously enter the buffers and processed packets are continuously removed from them.
In some embodiments of the present application, after sequentially copying each received data sequentially received from the data stream to each target buffer based on a first thread of each of the threads, the method further comprises:
and adding the memory block head address of the target buffer area into a first preset set based on the first thread.
In this embodiment, a first preset set is pre-established, where the first preset set is used to store the memory block head address of each buffer in use. After sequentially copying the received data to the target buffers based on the first thread of the threads, the target buffers are in a use state, and the memory block head addresses of the target buffers are added to the first preset set based on the first thread, so that the buffer in use can be determined more efficiently.
In some embodiments of the present application, after the target buffer is emptied, the method further comprises:
moving the memory block head address of the target buffer area from the first preset set to a second preset set;
the target buffer area is determined from the memory block head address selected from the second preset set based on the first thread.
In this embodiment, a second preset set is pre-established, where the second preset set is used to store the memory block first address of each unused buffer, after the target buffer is emptied, the target buffer is changed into an unused buffer, and the memory block first address of the target buffer is moved from the first preset set to the second preset set, in step S101, the first thread may select the memory block first address from the second preset set, and determine the target buffer, so that each unused buffer can be determined more efficiently.
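The two bookkeeping sets can be sketched as plain sets of head addresses. The set names m_Processing and m_Unused come from the embodiment below; the helper functions and the example addresses are our own illustration.

```python
# Sketch of the two bookkeeping sets: m_unused holds head addresses of
# free buffers, m_processing holds head addresses of buffers in use.
m_unused = {0x0000, 0x1000, 0x2000, 0x3000}
m_processing = set()

def acquire_buffer():
    # The first thread selects any unused buffer as the next target
    # buffer and marks it as in use.
    head = m_unused.pop()
    m_processing.add(head)
    return head

def release_buffer(head):
    # After the buffer is emptied, move its head address back to the
    # unused set so the first thread can select it again.
    m_processing.discard(head)
    m_unused.add(head)

h = acquire_buffer()
release_buffer(h)
```

Tracking only head addresses keeps the bookkeeping cheap: membership moves between the two sets, while the buffer memory itself never moves.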
In some embodiments of the present application, the moving the memory block head address of the target buffer from the first preset set to the second preset set includes:
and moving the memory block head address of the target buffer zone from the first preset set to the second preset set based on a preset callback function.
In this embodiment, the memory block head address of the target buffer is moved based on the preset callback function, so that the memory block head address of the target buffer is more efficiently moved from the first preset set to the second preset set.
In some embodiments of the present application, after the last target buffer is emptied, the method further comprises:
and if the data stream is not received, setting a second semaphore to release each buffer region.
In this embodiment, after the last target buffer area is emptied, if the data stream is not received any more, it indicates that the data processing operation on the data stream is completed, and the buffer areas are released by setting the second semaphore, so that efficient recovery of the memory resource is achieved.
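The exit path can be sketched with a second semaphore as well. Everything below the comment header is our own illustration (the worker body, names and addresses are invented); the only element taken from the text is that a second semaphore is set once the stream has ended and the last buffer is emptied, and that this triggers the release of the buffer regions.

```python
import threading

# Rough sketch of the exit path: when the stream ends and the last
# target buffer has been emptied, the second semaphore is set and the
# worker releases its buffer regions.
exit_sem = threading.Semaphore(0)      # second semaphore
released = []

def worker(buffers):
    exit_sem.acquire()                 # block until exit is signalled
    released.extend(buffers)           # "release each buffer region"
    buffers.clear()

bufs = [0x0000, 0x1000]
t = threading.Thread(target=worker, args=(bufs,))
t.start()
exit_sem.release()                     # no more data: set the semaphore
t.join()
```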
By applying the technical scheme, a plurality of buffer areas with consistent capacity are built in a memory in advance, and a plurality of threads are built, wherein each thread is used for executing a corresponding preset data processing operation, the execution sequence of each thread is determined by the execution sequence of each preset data processing operation, and the method comprises the following steps: receiving a data stream to be processed, and sequentially copying each received data sequentially received from the data stream to each target buffer area based on a first thread in each thread, wherein the data volume of the received data is a preset data volume, and the target buffer area is one unused buffer area selected from each buffer area; the first thread is executed to sequentially process the data in each target buffer area, and after each execution is completed, the first semaphore is set to obtain a set semaphore; if the set semaphore is detected, executing the current target thread to process the data in the target buffer area corresponding to the set semaphore, and setting the first semaphore after the execution is completed, wherein the current target thread is the next thread of the recently executed thread of the target buffer area; when the unexecuted thread corresponding to the target buffer area does not exist in each thread, the target buffer area is emptied, so that only the first thread is required to perform memory copying once, the processing time delay is reduced, each thread is independently executed in parallel, each data processing operation is completed, and the processing efficiency of data stream processing in the memory is improved.
To further explain the technical idea of the invention, its technical scheme is described below with a specific application scenario.
The embodiment of the application provides a data processing method in which 16 buffers of equal capacity are built in memory in advance, each buffer comprising a 4 MB memory block. A first preset set m_Processing and a second preset set m_Unused are also established, where m_Processing stores the head addresses of memory blocks in use and m_Unused stores the head addresses of memory blocks not in use. Four threads are created: thread 1 for frame synchronization, thread 2 for deinterleaving, thread 3 for channel output, and thread 4 for check-data rejection; each thread is provided with a corresponding linked list. A first semaphore m_SemReturn and a second semaphore m_SemExit are also constructed.
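The setup described above can be sketched as follows. This is a minimal illustration, not the patented implementation: only the names m_Processing, m_Unused, m_SemReturn, and m_SemExit come from the embodiment; the class, method names, and the use of integer indices to stand in for memory block head addresses are assumptions.

```python
import threading

BUFFER_COUNT = 16
BUFFER_SIZE = 4 * 1024 * 1024  # 4 MB per buffer, as in the embodiment

class BufferPool:
    """Pre-built pool of equal-capacity buffers tracked by two sets."""
    def __init__(self):
        # One 4 MB block per buffer; the integer index stands in for the
        # memory block head address.
        self.blocks = {i: bytearray(BUFFER_SIZE) for i in range(BUFFER_COUNT)}
        self.m_Processing = set()         # head addresses of blocks in use
        self.m_Unused = set(self.blocks)  # head addresses of blocks not in use
        self.lock = threading.Lock()

    def acquire(self):
        """Thread 1 selects a target buffer from m_Unused."""
        with self.lock:
            addr = self.m_Unused.pop()
            self.m_Processing.add(addr)
            return addr

    def release(self, addr):
        """The callback moves the head address from m_Processing back to m_Unused."""
        with self.lock:
            self.m_Processing.remove(addr)
            self.m_Unused.add(addr)

m_SemReturn = threading.Semaphore(0)  # first semaphore: set after each stage
m_SemExit = threading.Semaphore(0)    # second semaphore: triggers buffer release
```

In use, thread 1 would call `acquire()` once per batch of received data and the teardown callback would call `release()` after the last stage empties the buffer.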
When a data stream to be processed is received, thread 1 selects a target buffer from the second preset set m_Unused, copies the sequentially received data to each target buffer, and adds the memory block head address of the target buffer to the first preset set m_Processing. The data amount of each piece of received data is a preset data amount that is an integer multiple of the frame length, and the data stream is a satellite data stream.
Thread 1 is executed to perform frame synchronization on the data in each target buffer in sequence; after each execution completes, the first semaphore m_SemReturn is set and the memory block head address of the target buffer is posted to the first empty node in the linked list of thread 2.
After detecting the set first semaphore m_SemReturn, thread 2 determines the target buffer from the memory block head address in the head node of its linked list. Thread 2 is executed to deinterleave the data in the target buffer; after execution completes, the first semaphore m_SemReturn is set, the memory block head address of the target buffer is posted to the first empty node in the linked list of thread 3, and the node corresponding to the target buffer is cleared in the linked list of thread 2.

After detecting the set first semaphore m_SemReturn, thread 3 determines the target buffer from the memory block head address in the head node of its linked list. Thread 3 is executed to perform channel output on the data in the target buffer; after execution completes, the first semaphore m_SemReturn is set, the memory block head address of the target buffer is posted to the first empty node in the linked list of thread 4, and the node corresponding to the target buffer is cleared in the linked list of thread 3.

After detecting the set first semaphore m_SemReturn, thread 4 determines the target buffer from the memory block head address in the head node of its linked list. Thread 4 is executed to perform check-data rejection on the data in the target buffer; after execution completes, the first semaphore m_SemReturn is set, the node corresponding to the target buffer is cleared in the linked list, and the target buffer itself is emptied. Execution of threads 1-4 is repeated until the data processing operation on all received data is complete. Each time a target buffer is emptied, its memory block head address is moved from the first preset set m_Processing to the second preset set m_Unused by a preset callback function.
After the last target buffer is emptied, if the data stream is no longer being received, the second semaphore m_SemExit is set to release each buffer.
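The four-stage hand-off described above can be sketched as a small pipeline. This is an illustrative sketch under stated assumptions: Python queues stand in for the per-thread linked lists of memory block head addresses, a blocking `get()` plays the role of waiting on the set semaphore m_SemReturn, and the stage names, sentinel-based shutdown, and recording of results are inventions for the example, not part of the patent.

```python
import queue
import threading

# The four pipeline stages of the embodiment: frame synchronization,
# deinterleaving, channel output, and check-data rejection.
STAGES = ["frame_sync", "deinterleave", "channel_output", "check_reject"]

# One queue per thread stands in for that thread's linked list of
# memory block head addresses.
lists = {name: queue.Queue() for name in STAGES}
results = []  # records (stage, buffer) pairs as they are processed

def worker(stage_idx):
    """Each thread takes the head node of its list, processes the buffer,
    and posts the head address to the next thread's list."""
    name = STAGES[stage_idx]
    while True:
        addr = lists[name].get()
        if addr is None:  # sentinel: no more data; pass it downstream and exit
            if stage_idx + 1 < len(STAGES):
                lists[STAGES[stage_idx + 1]].put(None)
            break
        results.append((name, addr))  # stand-in for the real processing work
        if stage_idx + 1 < len(STAGES):
            lists[STAGES[stage_idx + 1]].put(addr)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(len(STAGES))]
for t in threads:
    t.start()
for addr in (0, 1, 2):          # three target buffers flow through the pipeline
    lists[STAGES[0]].put(addr)
lists[STAGES[0]].put(None)      # no further data: shut the pipeline down
for t in threads:
    t.join()
```

Each buffer passes through all four stages in order, while different buffers occupy different stages at the same time, which is the source of the parallelism claimed in the embodiment.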
By applying the above technical scheme, if the processing delays of threads 1 to 4 are τ1, τ2, τ3 and τ4 respectively, then because threads 1 to 4 execute approximately in parallel, the overall delay satisfies τtotal ≤ max(τ1, τ2, τ3, τ4), whereas in the prior art, when threads 1 to 4 execute serially, the overall delay is τtotal = τ1 + τ2 + τ3 + τ4. Compared with the prior art, the data processing method in this embodiment therefore effectively reduces data processing delay and improves data processing efficiency.
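The delay comparison above can be checked with a small simulation of the classic pipeline recurrence. The τ values below are arbitrary example figures chosen for illustration, not measurements from the patent.

```python
def pipeline_finish_times(tau, n_buffers):
    """Completion time of each (buffer, stage) pair under the pipeline
    recurrence: stage s of buffer b starts only after stage s-1 of buffer b
    and stage s of buffer b-1 have both finished."""
    finish = [[0.0] * len(tau) for _ in range(n_buffers)]
    for b in range(n_buffers):
        for s, t in enumerate(tau):
            start = max(finish[b][s - 1] if s > 0 else 0.0,
                        finish[b - 1][s] if b > 0 else 0.0)
            finish[b][s] = start + t
    return finish

tau = [5.0, 8.0, 3.0, 4.0]  # hypothetical delays for threads 1-4, in ms
f = pipeline_finish_times(tau, 3)
```

With these example values the first buffer still pays the full serial latency of 20 ms, but every further buffer completes only max(τ) = 8 ms after the previous one, matching the steady-state bound τtotal ≤ max(τ1, τ2, τ3, τ4) versus the serial delay τ1 + τ2 + τ3 + τ4.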
Corresponding to the data processing method in the embodiment of the present application, the embodiment further provides a data processing apparatus, in which a plurality of buffers of equal capacity are built in memory in advance and a plurality of threads are established, each thread executing a corresponding preset data processing operation, with the execution order of the threads determined by the execution order of those operations. As shown in fig. 3, the apparatus comprises: a receiving module 301, configured to receive a data stream to be processed and to sequentially copy each piece of data received from the data stream to a target buffer based on the first of the threads, where the data amount of each piece of received data is a preset data amount and the target buffer is an unused buffer selected from the buffers; a first execution module 302, configured to execute the first thread to process the data in each target buffer in sequence and to set the first semaphore after each execution completes, obtaining a set semaphore; a second execution module 303, configured to, if the set semaphore is detected, execute the current target thread to process the data in the target buffer corresponding to the set semaphore and to set the first semaphore after execution completes, the current target thread being the thread following the most recently executed thread for that target buffer; and an emptying module 304, configured to empty the target buffer when no unexecuted thread corresponding to it remains among the threads.
In a specific application scenario, each thread corresponds to a linked list, and the apparatus further comprises an issuing module configured to post the memory block head address of the target buffer corresponding to the set semaphore to the first empty node in the linked list of the current target thread; the current target thread determines the target buffer from the memory block head address in the head node of its linked list.

In a specific application scenario, the emptying module 304 is further configured to clear, in the linked list of the current target thread, the node corresponding to the target buffer.

In a specific application scenario, the apparatus further comprises an adding module configured to add the memory block head address of the target buffer to a first preset set based on the first thread.

In a specific application scenario, the apparatus further comprises a moving module configured to move the memory block head address of the target buffer from the first preset set to a second preset set; the target buffer is determined from a memory block head address selected from the second preset set based on the first thread.

In a specific application scenario, the moving module is specifically configured to move the memory block head address of the target buffer from the first preset set to the second preset set based on a preset callback function.

In a specific application scenario, the emptying module 304 is specifically configured to set a second semaphore to release each buffer if the data stream is no longer received.
The embodiment of the invention also provides an electronic device which, as shown in fig. 4, comprises a processor 401, a communication interface 402, a memory 403 and a communication bus 404, where the processor 401, the communication interface 402 and the memory 403 communicate with one another through the communication bus 404;

the memory 403 is configured to store executable instructions of the processor; and

the processor 401 is configured, via execution of the executable instructions, to perform the following:
building in memory in advance a plurality of buffers of equal capacity and establishing a plurality of threads, each thread executing a corresponding preset data processing operation, with the execution order of the threads determined by the execution order of those operations, the method comprising: receiving a data stream to be processed, and sequentially copying each piece of data received from the data stream to a target buffer based on the first of the threads, where the data amount of each piece of received data is a preset data amount and the target buffer is an unused buffer selected from the buffers; executing the first thread to process the data in each target buffer in sequence, and setting the first semaphore after each execution completes to obtain a set semaphore; if the set semaphore is detected, executing the current target thread to process the data in the target buffer corresponding to the set semaphore, and setting the first semaphore after execution completes, the current target thread being the thread following the most recently executed thread for that target buffer; and emptying the target buffer when no unexecuted thread corresponding to it remains among the threads.
The communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean there is only one bus or only one type of bus.
The communication interface is used for communication between the terminal and other devices.
The memory may include RAM (Random Access Memory) or non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In a further embodiment of the present invention, there is also provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the data processing method as described above.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the data processing method as described above.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center containing an integration of one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., solid-state disk), or the like.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any other variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
In this specification, the embodiments are described in a related manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (9)

1. A data processing method, wherein a plurality of buffers with consistent capacities are built in a memory in advance, and a plurality of threads are built, each of the threads being configured to execute a corresponding preset data processing operation, an execution order of the threads being determined by an execution order of the preset data processing operations, the method comprising:
receiving a data stream to be processed, and sequentially copying each received data sequentially received from the data stream to each target buffer area based on a first thread in each thread, wherein the data volume of the received data is a preset data volume, and the target buffer area is an unused buffer area selected from each buffer area;
executing the first thread to sequentially process the data in each target buffer area, and setting the first semaphore after each execution is completed to obtain a set semaphore;
if the set semaphore is detected, executing a current target thread to process data in a target buffer zone corresponding to the set semaphore, and setting the first semaphore after execution is completed, wherein the current target thread is the next thread of the recently executed thread of the target buffer zone;
when the threads do not have the unexecuted threads corresponding to the target buffer area, the target buffer area is emptied;
each thread corresponds to a linked list, and after the previous thread of the current target thread is executed and the first semaphore is set, the method further comprises:
the first address of the memory block of the target buffer area corresponding to the set semaphore is issued to the first empty node in the linked list of the current target thread;
the current target thread determines the target buffer area according to the memory block head address in the head node of the corresponding linked list.
2. The method of claim 1, wherein after completing execution of the current target thread and setting the first semaphore, the method further comprises:
and in the linked list of the current target thread, the node corresponding to the target buffer area is emptied.
3. The method of claim 1, wherein after sequentially copying each received data sequentially received from the data stream to each target buffer based on a first thread of each of the threads, the method further comprises:
and adding the memory block head address of the target buffer area into a first preset set based on the first thread.
4. The method of claim 3, wherein after flushing the target buffer, the method further comprises:
moving the memory block head address of the target buffer area from the first preset set to a second preset set;
the target buffer area is determined from the memory block head address selected from the second preset set based on the first thread.
5. The method of claim 4, wherein moving the memory block head address of the target buffer from the first preset set to a second preset set comprises:
and moving the memory block head address of the target buffer zone from the first preset set to the second preset set based on a preset callback function.
6. The method of claim 1, wherein after the last target buffer is emptied, the method further comprises:
and if the data stream is not received, setting a second semaphore to release each buffer region.
7. A data processing apparatus in which a plurality of buffers of uniform capacity are built in advance in a memory, and a plurality of threads are built, each of the threads being for executing a corresponding preset data processing operation, an execution order of the threads being determined by an execution order of the preset data processing operations, the apparatus comprising:
the receiving module is used for receiving a data stream to be processed, and sequentially copying each received data received from the data stream to each target buffer area based on the first thread in each thread, wherein the data volume of the received data is a preset data volume, and the target buffer area is an unused buffer area selected from the buffer areas;
the first execution module is used for executing the first thread to process the data in each target buffer area in sequence, and setting the first semaphore after each execution is completed to obtain a set semaphore;
the second execution module is used for executing the current target thread to process the data in the target buffer area corresponding to the set semaphore if the set semaphore is detected, and setting the first semaphore after the execution is completed, wherein the current target thread is the next thread of the recently executed thread of the target buffer area;
the emptying module is used for emptying the target buffer zone when the unexecuted thread corresponding to the target buffer zone does not exist in each thread;
each thread corresponds to a linked list, and the apparatus further comprises an issuing module, which is used for: the first address of the memory block of the target buffer area corresponding to the set semaphore is issued to the first empty node in the linked list of the current target thread; the current target thread determines the target buffer area according to the memory block head address in the head node of the corresponding linked list.
8. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the data processing method of any one of claims 1 to 6 via execution of the executable instructions.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the data processing method of any one of claims 1 to 6.
CN202311337936.9A 2023-10-17 2023-10-17 Data processing method and related equipment Active CN117076139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311337936.9A CN117076139B (en) 2023-10-17 2023-10-17 Data processing method and related equipment


Publications (2)

Publication Number Publication Date
CN117076139A CN117076139A (en) 2023-11-17
CN117076139B true CN117076139B (en) 2024-04-02

Family

ID=88704707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311337936.9A Active CN117076139B (en) 2023-10-17 2023-10-17 Data processing method and related equipment


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622271A (en) * 2003-12-31 2012-08-01 英特尔公司 Method and apparatus for multi-threaded processing and using semaphore
CN102915276A (en) * 2012-09-25 2013-02-06 武汉邮电科学研究院 Memory control method for embedded systems
CN106681836A (en) * 2016-12-28 2017-05-17 华为技术有限公司 Creating method and device of signal quantity
CN110287023A (en) * 2019-06-11 2019-09-27 广州海格通信集团股份有限公司 Message treatment method, device, computer equipment and readable storage medium storing program for executing
CN111813805A (en) * 2019-04-12 2020-10-23 中国移动通信集团河南有限公司 Data processing method and device
CN115840654A (en) * 2023-01-30 2023-03-24 北京万里红科技有限公司 Message processing method, system, computing device and readable storage medium
CN116886286A (en) * 2023-07-13 2023-10-13 新华三大数据技术有限公司 Big data authentication service self-adaption method, device and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7406690B2 (en) * 2001-09-26 2008-07-29 International Business Machines Corporation Flow lookahead in an ordered semaphore management subsystem




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant