CN116069528A - Data processing method, device, electronic equipment and storage medium - Google Patents

Data processing method, device, electronic equipment and storage medium

Info

Publication number
CN116069528A
CN116069528A (application CN202310196149.0A)
Authority
CN
China
Prior art keywords
data stream
queue
memory buffer
writing
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310196149.0A
Other languages
Chinese (zh)
Inventor
于博杰
马瑞
李斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202310196149.0A
Publication of CN116069528A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/544Buffers; Shared memory; Pipes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The disclosure provides a data processing method, a data processing apparatus, an electronic device, and a storage medium. The method includes: acquiring a first data stream and writing it to the tail of a first queue corresponding to a push memory buffer; sequentially acquiring, in first-in first-out (FIFO) order, the frontmost data stream from the first queue corresponding to the push memory buffer; and writing the first data stream to the tail of a second queue corresponding to a pull memory buffer, so that a first client reads the first data stream from the second queue in FIFO order.

Description

Data processing method, device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of data processing, and in particular relates to a data processing method, a data processing device, electronic equipment and a storage medium.
Background
When a video server processes video frames, the frame data is frequently read from and written to memory, and read-write locks are used to keep these memory accesses safe. Because video frame data is relatively large, and especially when multiple video streams are processed simultaneously, the read-write locks create a memory input/output (IO) bottleneck. How to improve memory IO efficiency is therefore a problem that needs to be solved.
Disclosure of Invention
The present disclosure provides a data processing method, apparatus, electronic device, and storage medium, so as to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a data processing method comprising:
acquiring a first data stream, and writing the first data stream to the tail of a first queue corresponding to a push memory buffer;
sequentially acquiring, in first-in first-out order, the frontmost data stream from the first queue corresponding to the push memory buffer;
and writing the first data stream to the tail of a second queue corresponding to a pull memory buffer, so that a first client reads the first data stream from the second queue in first-in first-out order.
According to a second aspect of the present disclosure, there is provided a data processing apparatus comprising:
a push processing unit, configured to acquire a first data stream and write the first data stream to the tail of a first queue corresponding to a push memory buffer;
a transcoding unit, configured to sequentially acquire, in first-in first-out order, the frontmost data stream from the first queue corresponding to the push memory buffer;
and a pull processing unit, configured to write the first data stream to the tail of a second queue corresponding to a pull memory buffer, so that a first client reads the first data stream from the second queue in first-in first-out order.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described in the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the present disclosure.
According to the data processing method, a first data stream is acquired and written to the tail of a first queue corresponding to a push memory buffer; the frontmost data stream is sequentially acquired, in first-in first-out order, from the first queue corresponding to the push memory buffer; and the first data stream is written to the tail of a second queue corresponding to a pull memory buffer, so that a first client reads the first data stream from the second queue in first-in first-out order. Because the first data stream is processed in a first-in first-out manner, the efficiency of memory input and output can be improved.
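The two-queue flow summarized above can be sketched in Python. This is an illustrative sketch only: the names `run_pipeline` and `transcode`, and the `b"T:"` tag used to make ordering visible, are hypothetical and not part of the disclosure.

```python
import queue
import threading

def transcode(frame: bytes) -> bytes:
    # Placeholder for the transcoding module (hypothetical): a real
    # implementation would re-encode the frame; here we only tag it.
    return b"T:" + frame

def run_pipeline(frames):
    """Move frames through push queue -> transcoder -> pull queue in FIFO order."""
    push_q = queue.SimpleQueue()   # first queue (push queue)
    pull_q = queue.SimpleQueue()   # second queue (pull queue)

    for f in frames:               # write each data stream to the push-queue tail
        push_q.put(f)

    def worker():
        for _ in range(len(frames)):
            f = push_q.get()             # always take the frontmost element (FIFO)
            pull_q.put(transcode(f))     # append the transcoded stream to the pull-queue tail

    t = threading.Thread(target=worker)
    t.start()
    t.join()

    # The pull client reads in the same FIFO order; no read-write lock is used.
    return [pull_q.get() for _ in range(len(frames))]
```

Because both queues are FIFO, the output order always matches the input order, which is the ordering guarantee the method relies on instead of a lock.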
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
FIG. 1 is a schematic diagram showing a data processing flow in the related art;
FIG. 2 illustrates an alternative flow diagram of a data processing method provided by an embodiment of the present disclosure;
FIG. 3 illustrates another alternative flow diagram of a data processing method provided by an embodiment of the present disclosure;
FIG. 4 shows a timing diagram of a data processing method provided by an embodiment of the present disclosure;
FIG. 5 illustrates an alternative architecture diagram of a data processing apparatus provided by an embodiment of the present disclosure;
fig. 6 shows a schematic diagram of a composition structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, features, and advantages of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure will be described clearly in conjunction with the accompanying drawings. It is apparent that the described embodiments are only some, but not all, of the embodiments of the present disclosure. Based on the embodiments of this disclosure, all other embodiments obtained by a person skilled in the art without inventive effort fall within the protection scope of this disclosure.
Fig. 1 shows a schematic diagram of a data processing flow in the related art.
As shown in fig. 1, in the memory management scheme of a related-art video server, read-write lock operations are performed on data to ensure memory safety; when high-speed, large-volume data is read and written, this easily incurs time cost and wastes memory resources.
Specifically, as shown in fig. 1, before a data stream is written into the push queue, a read-write lock must first determine whether writing is allowed: if so, the data stream is written into the push queue; if not, writing waits until it is allowed. Correspondingly, before the data stream is written into the pull queue, the read-write lock must again determine whether writing is allowed: if so, the data stream is written into the pull queue; if not, it is written only after waiting for permission.
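The related-art gating described above can be sketched as a queue whose every write and read must first acquire a lock. The class and method names below are illustrative, not from the patent; the point is that writers block whenever the lock is held, which is the bottleneck the disclosure aims to remove.

```python
import threading

class LockedQueue:
    """Related-art style queue: every access must first acquire a lock."""

    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def write(self, item):
        # Writers block here whenever another reader/writer holds the lock;
        # with many concurrent video streams this serializes memory IO.
        with self._lock:
            self._items.append(item)

    def read(self):
        with self._lock:
            return self._items.pop(0)
```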
In view of the drawbacks of the related art, the present disclosure provides a data processing method to solve at least some or all of the above technical problems.
Fig. 2 shows an alternative schematic flowchart of the data processing method provided by an embodiment of the disclosure, which will be described part by part.
In some embodiments, the data processing method provided by the embodiments of the present disclosure is applied to an audio/video transmission scenario, in which high-speed, large-volume data reads and writes occur and a memory input/output bottleneck is easily caused.
Step S101, a first data stream is acquired, and the first data stream is written to the tail of a first queue corresponding to a push memory buffer.
In some embodiments, after a first data stream sent by a push client is received, the first data stream is written to the tail of the first queue. The first queue includes at least one data stream whose reception time precedes that of the first data stream; the first data stream may include an audio/video data stream and may be any data stream; the first queue may be a push queue.
In some embodiments, the push memory buffer is configured to cache the data streams read from the first queue and to send the cached data streams to the transcoding module for transcoding.
In a specific implementation, the push memory buffer reads and caches data streams from the first queue in first-in first-out order, and sends the cached data streams to the transcoding module for transcoding.
In some optional embodiments, data streams are stored in the first queue corresponding to the push memory buffer, and the first queue sends data streams to the push memory buffer in first-in first-out order. Optionally, writing the first data stream into the first queue does not affect the push memory buffer reading data streams from the first queue; that is, writing to and reading from the first queue may take place simultaneously.
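The simultaneous write/read on the first queue can be sketched as a single-producer, single-consumer pair. This is an assumption-laden illustration: it relies on CPython's `collections.deque`, whose `append` and `popleft` are each atomic, so one writer and one reader can share the queue without a read-write lock; the function names are hypothetical.

```python
import threading
from collections import deque

def demo(n=1000):
    """One thread writes the first queue's tail while another reads its head."""
    first_queue = deque()
    out = []

    def producer():
        # Stream manager writing data streams to the tail, in order.
        for i in range(n):
            first_queue.append(i)

    def consumer():
        # Push memory buffer reading the frontmost data stream (FIFO).
        read = 0
        while read < n:
            if first_queue:
                out.append(first_queue.popleft())
                read += 1

    pt = threading.Thread(target=producer)
    ct = threading.Thread(target=consumer)
    pt.start(); ct.start()
    pt.join(); ct.join()
    return out
```

With a single producer and single consumer, the consumer always sees items in exactly the order they were appended, even though both threads run concurrently.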
Step S102, the frontmost data stream is sequentially acquired, in first-in first-out order, from the first queue corresponding to the push memory buffer.
In some embodiments, when the first data stream is initially written into the first queue, it is written to the tail of the first queue, and the data streams in the first queue are read in first-in first-out order. Data streams ahead of the first data stream (i.e., those written into the first queue earlier than the first data stream) are read before it; once the first data stream reaches the front of the first queue, it is read from the first queue.
Step S103, the first data stream is written to the tail of a second queue corresponding to the pull memory buffer, so that the first client reads the first data stream from the second queue in first-in first-out order.
In some alternative embodiments, the first data stream, after being read from the first queue, is stored in the push memory buffer and transcoded by the transcoding module. The transcoded first data stream is written to the tail of the second queue corresponding to the pull memory buffer; the first client sequentially acquires, in first-in first-out order, the data streams ahead of the first data stream from the second queue until the first data stream reaches the front of the second queue, and then reads the first data stream from the second queue.
Wherein, the first client may be a pull client; the second queue is a pull queue.
In some embodiments, the push memory buffer reads data streams from the first queue while the pull memory buffer writes data streams to the second queue; the two operations are performed independently and do not affect each other. The push memory buffer caches the data streams to be transcoded, and the pull memory buffer caches the transcoded data streams to be pushed to the client.
Thus, compared with the related art, the data processing method provided by the embodiment of the disclosure removes the read-write lock: no check is needed before a data stream is written into a queue. Instead, first-in first-out (FIFO) queues control the orderly entry of data streams into and out of memory, which reduces data streams queuing and waiting for memory resources, uses memory resources effectively, and improves memory processing efficiency.
Fig. 3 shows another alternative flowchart of the data processing method provided by the embodiment of the disclosure, and fig. 4 shows a timing diagram of the data processing method provided by the embodiment of the disclosure.
Step S201, at least one data stream is acquired, and the at least one data stream is written into the first queue according to the acquisition order.
In some embodiments, the push client (Push Client in fig. 4) sends at least one data stream to a stream manager (Stream Manager in fig. 4), which acquires the at least one data stream and writes it to the first queue in the order of acquisition.
In a specific implementation, the stream manager acquires the timestamp of each data stream and writes the at least one data stream into the first queue in timestamp order; in the first queue, a data stream written earlier is read by the push memory buffer before a data stream written later. The first queue may be a push queue (Push Queue in fig. 4) corresponding to the push memory buffer (Push Memory in fig. 4).
Wherein the at least one data stream includes a first data stream.
Step S202, the push memory buffer obtains the at least one data stream from the first queue in first-in first-out order.
In some embodiments, the push memory buffer reads the frontmost data stream in the first queue and sends it to the transcoding module (TransCode Mode in fig. 4); correspondingly, the first queue dispatches its frontmost data stream to the push memory buffer.
In step S203, the transcoding module transcodes the data stream in the push memory buffer.
In some embodiments, the transcoding module obtains at least one data stream from the first queue corresponding to the push memory buffer in first-in first-out order. Specifically, the first queue may first dispatch the at least one data stream to the push memory buffer in first-in first-out order, and the transcoding module then reads the at least one data stream from the push memory buffer. Optionally, the first queue may dispatch only one data stream to the push memory buffer at a time, dispatching another only when the number of data streams in the push memory buffer falls below a certain threshold; alternatively, the first queue may dispatch at least two data streams into the push memory buffer at a time. The threshold may be set according to actual requirements or experimental results.
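The threshold-based dispatching just described can be sketched as a refill helper. The function name `refill` and the `batch` parameter are hypothetical; the `threshold` tunable corresponds to the threshold the text says is chosen from requirements or experiments.

```python
from collections import deque

def refill(first_queue: deque, push_buffer: deque, threshold: int, batch: int = 1):
    """Top up the push memory buffer from the first queue whenever the
    buffer holds fewer than `threshold` streams, moving `batch` streams
    per dispatch and preserving FIFO order throughout."""
    while len(push_buffer) < threshold and first_queue:
        for _ in range(min(batch, len(first_queue))):
            push_buffer.append(first_queue.popleft())  # frontmost stream first
```

With `batch=1` this matches the "dispatch one stream at a time" option; a larger `batch` matches the "dispatch at least two at a time" alternative.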
In some embodiments, the transcoding module obtains the data stream from the push Memory buffer for stream transcoding and writes the transcoded (stream transcoded) data stream to the Pull Memory buffer (Pull Memory in fig. 4).
Step S204, the transcoded data stream is written into the pull memory buffer, and then written into the second queue in first-in first-out order.
In some embodiments, the transcoding module transcodes the data streams in their order in the first queue and, after transcoding, writes them into the pull memory buffer in that same order.
In some embodiments, the pull memory buffer writes the transcoded data streams into the second queue in the order in which they were written into the pull memory buffer: a data stream written into the pull memory buffer earlier is written into the second queue earlier. The second queue is a pull queue (Pull Queue in fig. 4) corresponding to the pull memory buffer.
Correspondingly, in the second queue, a data stream written into the pull memory buffer earlier is positioned ahead of a data stream written into the pull memory buffer later.
In step S205, the first client reads the first data stream from the second queue in a first-in first-out order.
In some embodiments, the first client sequentially retrieves the data streams from the second queue in first-in first-out order; correspondingly, the second queue dispatches the corresponding data stream to the first client.
The first Client may be a Pull Client (Pull Client in fig. 4).
In some embodiments, the first queue and the second queue according to embodiments of the present disclosure are FIFO queues.
Thus, compared with the related art, the data processing method provided by the embodiment of the disclosure removes the read-write lock; no check is needed before a data stream is written into a queue. Instead, the FIFO queues ensure efficient, continuous processing and orderly handling of data streams, which guarantees both performance and the safety of information processing. At the same time, for high-speed reads and writes of large video-stream data, time cost and wasted memory resources can be reduced.
Fig. 5 is a schematic diagram showing an alternative configuration of a data processing apparatus according to an embodiment of the present disclosure, which will be described in terms of the respective parts.
In some embodiments, the data processing apparatus 400 includes a push processing unit 401, a transcoding unit 402, and a pull processing unit 403.
The push processing unit 401 is configured to acquire a first data stream and write the first data stream to the tail of a first queue corresponding to a push memory buffer;
the transcoding unit 402 is configured to sequentially acquire, in first-in first-out order, the frontmost data stream from the first queue corresponding to the push memory buffer;
the pull processing unit 403 is configured to write the first data stream to the tail of a second queue corresponding to the pull memory buffer, so that the first client reads the first data stream from the second queue in first-in first-out order.
The push processing unit 401 is further configured to, before acquiring the first data stream and writing the first data stream to the tail of the first queue corresponding to the push memory buffer, acquire at least one data stream and write the at least one data stream into the first queue in the order of acquisition.
The transcoding unit 402 is further configured to, before writing the first data stream into the pull memory buffer, sequentially acquire, in first-in first-out order, the frontmost at least one data stream from the first queue corresponding to the push memory buffer, and write the at least one data stream into the pull memory buffer in the order of acquisition.
The pull processing unit 403 is further configured to, before writing the first data stream to the tail of the second queue, acquire at least one data stream and write the at least one data stream into the second queue in the order of acquisition;
wherein a data stream acquired earlier is positioned in the second queue ahead of a data stream acquired later.
The pull processing unit 403 is further configured to, after writing the first data stream to the tail of the second queue:
transmit at least one data stream in the pull memory buffer to the first client in first-in first-out order.
The transcoding unit 402 is specifically configured to transcode the first data stream to obtain the transcoded first data stream.
The pull processing unit 403 is specifically configured to write the transcoded first data stream to the tail of the second queue in first-in first-out order.
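The three units described above can be sketched as methods of one class. The class and method names below are illustrative mappings of units 401-403, not the patent's reference implementation, and the `b"transcoded:"` tag stands in for real transcoding.

```python
import queue

class DataProcessingApparatus:
    """Sketch of the apparatus: push unit, transcoding unit, pull unit."""

    def __init__(self):
        self.first_queue = queue.SimpleQueue()   # push queue (FIFO)
        self.second_queue = queue.SimpleQueue()  # pull queue (FIFO)

    def push_processing_unit(self, stream: bytes):
        # Unit 401: write the acquired data stream to the first-queue tail.
        self.first_queue.put(stream)

    def transcoding_unit(self) -> bytes:
        # Unit 402: take the frontmost stream and transcode it (placeholder).
        stream = self.first_queue.get()
        return b"transcoded:" + stream

    def pull_processing_unit(self, transcoded: bytes):
        # Unit 403: write the transcoded stream to the second-queue tail.
        self.second_queue.put(transcoded)

    def client_read(self) -> bytes:
        # The first client reads from the second queue in FIFO order.
        return self.second_queue.get()
```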
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
Fig. 6 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in electronic device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the respective methods and processes described above, such as a data processing method. For example, in some embodiments, the data processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. When a computer program is loaded into RAM 803 and executed by computing unit 801, one or more steps of the data processing method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the data processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be special purpose or general purpose, and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that the various flows shown above may be used with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive within the technical scope of the disclosure are intended to be covered by this disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method of data processing, the method comprising:
acquiring a first data stream, and writing the first data stream to the tail of a first queue corresponding to a push memory buffer;
sequentially acquiring, in first-in-first-out order, the front-most first data stream from the first queue corresponding to the push memory buffer; and
writing the first data stream to the tail of a second queue corresponding to a pull memory buffer, so that a first client reads the first data stream from the second queue in first-in-first-out order.
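The pipeline recited in claim 1 can be sketched with Python's thread-safe `queue.Queue`, whose `put`/`get` semantics are first-in-first-out. This is an illustrative sketch only; all names (`ingest`, `relay`, `client_read`) are invented for the example and do not appear in the patent.

```python
from queue import Queue

push_queue = Queue()   # "first queue" backing the push memory buffer
pull_queue = Queue()   # "second queue" backing the pull memory buffer

def ingest(stream: bytes) -> None:
    """Write an acquired data stream to the tail of the first queue."""
    push_queue.put(stream)

def relay() -> None:
    """Take the front-most stream from the first queue (FIFO) and
    write it to the tail of the second queue."""
    pull_queue.put(push_queue.get())

def client_read() -> bytes:
    """The first client reads streams from the second queue in FIFO order."""
    return pull_queue.get()

ingest(b"stream-1")
ingest(b"stream-2")
relay()
relay()
assert client_read() == b"stream-1"   # first in, first out
```

Because both queues preserve insertion order, the order in which streams are acquired is exactly the order in which the client receives them.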
2. The method of claim 1, wherein the acquiring of the first data stream and the writing of the first data stream to the tail of the first queue corresponding to the push memory buffer comprise:
acquiring at least one data stream, and writing the at least one data stream into the first queue in acquisition order.
3. The method of claim 2, wherein, before a transcoding module sequentially acquires the front-most first data stream from the first queue corresponding to the push memory buffer in first-in-first-out order and writes the first data stream to the pull memory buffer, the method further comprises:
the transcoding module acquiring, in first-in-first-out order, at least one data stream arranged before the first data stream from the first queue corresponding to the push memory buffer, and writing the at least one data stream to the pull memory buffer in acquisition order.
4. The method of claim 3, wherein the first data stream is written to the tail of the second queue by the pull memory buffer, and the method further comprises:
the pull memory buffer acquiring at least one data stream, and writing the at least one data stream into the second queue in acquisition order;
wherein the position in the second queue of a data stream acquired earlier precedes the position in the second queue of a data stream acquired later.
5. The method of claim 1, wherein the first data stream is written to the tail of the second queue by the pull memory buffer, and the method further comprises:
transmitting at least one data stream in the pull memory buffer to the first client in first-in-first-out order.
6. The method of claim 1, wherein the transcoding module reading the front-most first data stream from the first queue corresponding to the push memory buffer in first-in-first-out order comprises:
the transcoding module performing transcoding processing on the first data stream to obtain a transcoded first data stream.
7. The method of claim 6, wherein the writing of the first data stream to the tail of the second queue corresponding to the pull memory buffer comprises:
writing the transcoded first data stream to the tail of the second queue in first-in-first-out order.
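Claims 6 and 7 insert a transcoding step between the two queues: the stream is dequeued, transformed, and the transcoded result is what lands at the tail of the second queue. A minimal sketch, with `bytes.upper()` standing in for a real transcoding operation (e.g., a codec change) — the function names are illustrative, not from the patent:

```python
from queue import Queue

push_queue = Queue()
pull_queue = Queue()

def transcode(stream: bytes) -> bytes:
    # Placeholder transform standing in for real transcoding.
    return stream.upper()

def transcode_and_forward() -> None:
    """Read the front-most stream (FIFO), transcode it, then write the
    transcoded stream to the tail of the second queue (claims 6-7)."""
    raw = push_queue.get()
    pull_queue.put(transcode(raw))

push_queue.put(b"abc")
transcode_and_forward()
assert pull_queue.get() == b"ABC"
```

Since each stream is transcoded before enqueueing, the client on the second queue only ever observes transcoded data, still in arrival order.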
8. A data processing apparatus, the apparatus comprising:
a push processing unit configured to acquire a first data stream and write the first data stream to the tail of a first queue corresponding to a push memory buffer;
a transcoding unit configured to sequentially acquire, in first-in-first-out order, the front-most first data stream from the first queue corresponding to the push memory buffer; and
a pull processing unit configured to write the first data stream to the tail of a second queue corresponding to a pull memory buffer, so that a first client reads the first data stream from the second queue in first-in-first-out order.
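One way to picture the apparatus of claim 8 is as a single object whose methods correspond to the three claimed units. This is a hypothetical mapping for illustration (the class and method names are invented, and the transcoding step is elided to a pass-through for brevity):

```python
from queue import Queue

class DataProcessingApparatus:
    """Illustrative mapping of claim 8's three units onto one object."""

    def __init__(self) -> None:
        self._first_queue = Queue()    # push memory buffer's queue
        self._second_queue = Queue()   # pull memory buffer's queue

    def push(self, stream: bytes) -> None:
        """Push processing unit: write to the tail of the first queue."""
        self._first_queue.put(stream)

    def transcode_next(self) -> None:
        """Transcoding unit: move the front-most stream from the first
        queue to the second (transcoding itself omitted for brevity)."""
        self._second_queue.put(self._first_queue.get())

    def pull(self) -> bytes:
        """Pull processing unit: the first client reads FIFO."""
        return self._second_queue.get()
```

Splitting the work across two queues lets the push side, the transcoder, and the client each run at their own pace while the buffers absorb rate differences.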
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-7.
CN202310196149.0A 2023-02-28 2023-02-28 Data processing method, device, electronic equipment and storage medium Pending CN116069528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310196149.0A CN116069528A (en) 2023-02-28 2023-02-28 Data processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116069528A true CN116069528A (en) 2023-05-05

Family

ID=86178564

Country Status (1)

Country Link
CN (1) CN116069528A (en)

Similar Documents

Publication Publication Date Title
US9436696B2 (en) Data fragmentation tuning and candidacy persistence
CN111367687A (en) Inter-process data communication method and device
WO2022227895A1 (en) Data transmission method and apparatus, terminal device, and computer-readable storage medium
CN113794909A (en) Video streaming system, method, server, device, and storage medium
US20190146859A1 (en) Timeout processing for messages
CN113407347B (en) Resource scheduling method, device, equipment and computer storage medium
CN110851276A (en) Service request processing method, device, server and storage medium
CN114064431A (en) Stuck detection method and device, readable medium and electronic equipment
CN113051055A (en) Task processing method and device
CN110557341A (en) Method and device for limiting data current
CN110515749B (en) Method, device, server and storage medium for queue scheduling of information transmission
CN116069528A (en) Data processing method, device, electronic equipment and storage medium
CN116248772A (en) Data transmission method, device, equipment and medium under virtualization management
CN116541140A (en) Data acquisition method, device, electronic equipment and storage medium
CN114237755A (en) Application running method and device, electronic equipment and storage medium
CN113627354A (en) Model training method, video processing method, device, equipment and storage medium
CN112988105A (en) Playing state control method and device, electronic equipment and storage medium
CN112783421A (en) Asynchronous consumption method and device based on ring buffer
US10169114B1 (en) Predicting exhausted storage for a blocking API
CN112541472B (en) Target detection method and device and electronic equipment
CN113656618B (en) Picture synchronization method and device, electronic equipment and readable storage medium
US11432303B2 (en) Method and apparatus for maximizing a number of connections that can be executed from a mobile application
CN110300163B (en) Method, device, equipment and storage medium for acquiring network data
CN116089335A (en) Bus conversion device, method and system
US20220191270A1 (en) Method of data interaction, data interaction apparatus, electronic device and non-transitory computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination