CN115129488A - Streaming data processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115129488A
CN115129488A (application CN202210713851.5A)
Authority
CN
China
Prior art keywords
streaming data
packet
data packet
space
processing
Prior art date
Legal status
Pending
Application number
CN202210713851.5A
Other languages
Chinese (zh)
Inventor
潘能超
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210713851.5A
Publication of CN115129488A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/545 Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure relates to the field of computer technology, and in particular to the technical fields of voice technology, video technology, and the like. The specific implementation scheme is as follows: receiving a target streaming data packet; determining, from preset graph computation instructions, a first graph computation instruction corresponding to the position of the target streaming data packet in the data stream; and sending the first graph computation instruction to a graphics processor, so that the graphics processor performs inference on the target streaming data packet based on the space address and the kernel execution order included in the first graph computation instruction, and obtains a first processing result of the target streaming data packet. In the embodiments of the present disclosure, the CPU needs to interact with the GPU only once to complete inference on a streaming data packet, which reduces the number of CPU-GPU interactions and improves the processing efficiency of streaming data.

Description

Streaming data processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to the field of speech technology, video technology, and the like.
Background
With the development of computer technology, streaming data such as voice and video is applied ever more widely, which poses a challenge to the processing efficiency of streaming data.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, and storage medium for streaming data processing.
According to a first aspect of the present disclosure, there is provided a streaming data processing method applied to a Central Processing Unit (CPU), including:
receiving a target streaming data packet;
determining a first graph calculation instruction corresponding to the position of the target streaming data packet in a data stream from preset graph calculation instructions, wherein a graph calculation instruction comprises the address of the space required for processing the streaming data packet and the execution order of the kernels that process the streaming data packet;
and sending the first graph calculation instruction to a Graphics Processing Unit (GPU), so that the GPU performs inference on the target streaming data packet based on the space address and the kernel execution order included in the first graph calculation instruction, and obtains a first processing result of the target streaming data packet.
In some embodiments, prior to receiving the targeted streaming data packet, the method further comprises:
determining the space size required for processing streaming data packets at different positions in the data stream;
according to the determined maximum space size, allocating a first space required for processing streaming data packets included in the data stream, wherein the streaming data packets at different positions correspond to spaces with corresponding space sizes in the first space;
and generating preset graphic calculation instructions corresponding to different positions based on a first execution sequence of the kernel for processing the streaming data packets at different positions and the address of the first space.
In some embodiments, the position of the streaming data packet in the data stream includes a head packet and a middle packet, both of fixed length, and the preset graph calculation instructions include a graph calculation instruction corresponding to the head packet and a graph calculation instruction corresponding to the middle packet; the method further comprises:
judging whether the position of the target streaming data packet in the data stream is a head packet or a middle packet, to obtain a judgment result;
and in response to the judgment result indicating that the position of the target streaming data packet in the data stream is a head packet or a middle packet, executing the step of determining a first graph calculation instruction corresponding to the position of the target streaming data packet in the data stream from the preset graph calculation instructions.
In some embodiments, the position of the streaming data packet in the data stream further comprises a tail packet of variable length; the method further comprises the following steps:
in response to the judgment result indicating that the position of the target streaming data packet in the data stream is a tail packet, acquiring a second space required for processing the target streaming data packet, and reading a second execution order of the kernels that process the target streaming data packet;
and sending the execution commands of the kernels to the graphics processor in the second execution order, so that the graphics processor performs inference on the target streaming data packet using the second space based on the kernel execution commands, to obtain a second processing result of the target streaming data packet.
In some embodiments, the position of the streaming data packet in the data stream further comprises a tail packet of variable length; the method further comprises the following steps:
in response to the judgment result indicating that the position of the target streaming data packet in the data stream is a tail packet, acquiring a second space required for processing the target streaming data packet, and reading a second execution order of the kernels that process the target streaming data packet;
generating a second graph computation instruction based on the second space and the second execution order;
and sending the second graph calculation instruction to the graph processor, so that the graph processor infers the target streaming data packet based on the second graph calculation instruction, and obtains a third processing result of the target streaming data packet.
In some embodiments, the step of acquiring the second space required for processing the target streaming data packet comprises:
determining the second space required for processing the target streaming data packet from the space indicated by the space address included in the preset graph calculation instruction; or
allocating the second space required for processing the target streaming data packet from the space indicated by the space address included in the preset graph calculation instruction.
According to a second aspect of the present disclosure, there is provided an apparatus for streaming data processing, comprising:
the receiving module is used for receiving the target streaming data packet;
the first determining module is used for determining a first graph computing instruction corresponding to the position of the target streaming data packet in a data stream from preset graph computing instructions, wherein the graph computing instruction comprises a space address required by the streaming data packet and an execution sequence of a kernel for processing the streaming data packet;
and the first inference module is configured to send the first graph computation instruction to a graphics processor, so that the graphics processor performs inference on the target streaming data packet based on the space address and the kernel execution order included in the first graph computation instruction, to obtain a first processing result of the target streaming data packet.
In some embodiments, the apparatus further comprises:
a second determining module, configured to determine a size of a space required for processing streaming packets at different positions in the data stream;
a first allocating module, configured to allocate, according to the determined maximum space size, a first space required for processing streaming data packets included in the data stream, where the streaming data packets at different positions correspond to spaces of corresponding space sizes in the first space;
and the first generating module is used for generating preset graph calculation instructions corresponding to different positions based on a first execution sequence of a kernel for processing the streaming data packets at different positions and the address of the first space.
In some embodiments, the position of the streaming data packet in the data stream includes a head packet and a middle packet, both of fixed length, and the preset graph computation instructions include a graph computation instruction corresponding to the head packet and a graph computation instruction corresponding to the middle packet; the device further comprises:
the judging module is configured to judge whether the position of the target streaming data packet in the data stream is a head packet or a middle packet, to obtain a judgment result;
the first determining module is specifically configured to, in response to the judgment result indicating that the position of the target streaming data packet in the data stream is a head packet or a middle packet, determine a first graph computation instruction corresponding to that position from the preset graph computation instructions.
In some embodiments, the position of the streaming data packet in the data stream further comprises a tail packet of variable length; the device further comprises:
the second allocating module is configured to, in response to the judgment result indicating that the position of the target streaming data packet in the data stream is a tail packet, acquire a second space required for processing the target streaming data packet, and read a second execution order of the kernels that process the target streaming data packet;
and the second inference module is configured to send the execution command of the kernel to the graphics processor according to the second execution order, so that the graphics processor infers the target streaming data packet by using the second space based on the execution command of the kernel, and obtains a second processing result of the target streaming data packet.
In some embodiments, the position of the streaming data packet in the data stream further comprises a tail packet of variable length; the device further comprises:
the second allocating module is configured to, in response to the judgment result indicating that the position of the target streaming data packet in the data stream is a tail packet, acquire a second space required for processing the target streaming data packet, and read a second execution order of the kernels that process the target streaming data packet;
a second generating module, configured to generate a second graph computation instruction based on the second space and the second execution order;
and the third inference module is configured to send the second graph computation instruction to the graphics processor, so that the graphics processor performs inference on the target streaming data packet based on the second graph computation instruction, to obtain a third processing result of the target streaming data packet.
In some embodiments, the second allocating module is specifically configured to:
determining the second space required for processing the target streaming data packet from the space indicated by the space address included in the preset graph computation instruction; or
allocating the second space required for processing the target streaming data packet from the space indicated by the space address included in the preset graph computation instruction.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor;
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the first aspects.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of the first aspects.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of any one of the first aspects.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic diagram of a head packet, a middle packet, and a tail packet provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an inference model provided by an embodiment of the disclosure;
FIG. 3 is a schematic diagram of space allocation provided by embodiments of the present disclosure;
fig. 4 is a first schematic diagram of a streaming data processing method provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a method for constructing graphics computation instructions according to an embodiment of the present disclosure;
fig. 6 is a second schematic diagram of a streaming data processing method provided by an embodiment of the disclosure;
fig. 7 is a third schematic diagram of a streaming data processing method provided by an embodiment of the present disclosure;
fig. 8 is a fourth schematic diagram of a streaming data processing method provided by the embodiment of the disclosure;
fig. 9 is a schematic diagram of a streaming data processing apparatus provided by an embodiment of the present disclosure;
FIG. 10 is a first block diagram of an electronic device for implementing a streaming data processing method of an embodiment of the present disclosure;
fig. 11 is a second block diagram of an electronic device for implementing a streaming data processing method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
With the development of computer technology, the application range of streaming data such as voice and video is wider and wider, which poses a challenge to the processing efficiency of the streaming data.
The processing of streaming data includes recognition, synthesis, and the like. By its position in the data stream, a streaming data packet is a head packet, a middle packet, or a tail packet. Fig. 1 is a schematic diagram of the head packet, middle packets, and tail packet: the head packet is the first part of the data stream, and there is exactly one; the tail packet is the last part of the data stream, and there is exactly one; the middle packets are the data between the head packet and the tail packet, and their number is one or more. A streaming data packet includes a plurality of data frames; for example, a head packet may include 50 data frames and a middle packet 10 data frames.
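The head/middle/tail layout can be illustrated with a small Python sketch (hypothetical code; the fixed lengths of 50 and 10 frames are the example values from the text, and a frame is modeled as a plain list element):

```python
def split_stream(frames, head_len=50, middle_len=10):
    """Split a list of data frames into one head packet, zero or more
    fixed-length middle packets, and one variable-length tail packet."""
    head = frames[:head_len]
    rest = frames[head_len:]
    middles = []
    # Carve off fixed-length middle packets; whatever remains is the tail.
    while len(rest) > middle_len:
        middles.append(rest[:middle_len])
        rest = rest[middle_len:]
    tail = rest  # at most one middle packet long in this sketch
    return head, middles, tail

# 87 frames -> head of 50, three middle packets of 10, tail of 7
head, middles, tail = split_stream(list(range(87)))
```

The tail packet is simply whatever remains after the fixed-length packets are carved off, which is why its length varies from stream to stream.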
The CPU is preset with an inference model for the streaming data. The inference model specifies a computation mode for the streaming data packets, which comprises the execution order of the kernels that process a streaming data packet and the processing operations they perform. The computation modes of streaming data packets at different positions may be the same or different; when they differ, the inference model also contains branch statements. As shown in fig. 2, the branch statements are used to execute computation mode 1 or computation mode 2 under different branch conditions, so as to complete the inference task for the streaming data. Fig. 2 illustrates only two computation modes; the model is not limited to this example.
In the related art, when processing streaming data, the CPU allocates the space required for processing the current packet according to the packet's position and length. As shown in the space-allocation diagram of fig. 3, this space includes: the IO space (IO1, IO2, IO3, and so on), the maximum temporary space required by the kernel operations (kernel 1, kernel 2, kernel 3, and so on), and the state space (state 1, state 2, state 3, and so on). The CPU then reads the execution order of the corresponding kernels from the preset inference model and sends the kernels' execution instructions to the GPU. The GPU performs the kernel operations using the allocated space based on the kernel execution instructions, completing inference on the streaming data.
Processing one streaming data packet requires a plurality of kernels to execute in sequence before inference on the packet is complete. The CPU therefore has to interact with the GPU many times, i.e. send kernel execution instructions to the GPU many times, so the processing efficiency of the streaming data is low. As the number of streaming data packets grows, the number of CPU-GPU interactions multiplies, further reducing the processing efficiency.
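This related-art flow can be caricatured as follows (a hypothetical Python sketch; `send` stands in for one CPU-to-GPU kernel-launch interaction, and the class and method names are invented for illustration):

```python
class BaselineScheduler:
    """Related-art flow: for each packet, allocate a workspace, then send
    one execution command to the GPU for every kernel in the model."""

    def __init__(self, kernel_order):
        self.kernel_order = kernel_order  # e.g. ["kernel1", "kernel2", "kernel3"]
        self.interactions = 0

    def send(self, command):
        # Each kernel-execution command is one CPU <-> GPU round trip.
        self.interactions += 1

    def process_packet(self, packet):
        workspace = bytearray(len(packet))  # allocated anew for each packet
        for kernel in self.kernel_order:
            self.send((kernel, id(workspace)))

sched = BaselineScheduler(["kernel1", "kernel2", "kernel3"])
for packet in [b"head", b"middle", b"tail"]:
    sched.process_packet(packet)
# 3 packets x 3 kernels -> 9 CPU-GPU interactions
```

The interaction count scales with packets times kernels, which is the inefficiency the disclosed method targets.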
In order to improve the processing efficiency of streaming data, the embodiment of the present disclosure provides a streaming data processing method, which is applied to a CPU, as shown in fig. 4. In the embodiment of the present disclosure, the CPU may be a CPU on a device having a graphics processing function, such as a server and a mobile terminal, which is not limited herein. The streaming data processing method comprises the following steps:
in step S41, the target streaming packet is received.
In the embodiment of the present disclosure, the target streaming data packet is any data packet in a data stream of the streaming data, and the position of the data packet in the data stream may be a head packet, a middle packet, or a tail packet.
When streaming data processing is required, a user can input a requested streaming data packet (i.e., a target streaming data packet) to the CPU, and the CPU receives the streaming data packet requested by the user.
Step S42, determining, from preset graph computation instructions, a first graph computation instruction corresponding to the position of the target streaming data packet in the data stream, where a graph computation instruction includes the address of the space required for processing the streaming data packet and the execution order of the kernels that process it.
The CPU is preset with graph computation instructions, which may include graph computation instructions corresponding to different positions, for example, a graph computation instruction corresponding to the head packet, one corresponding to a middle packet, one corresponding to the tail packet, and so on.
After receiving the target streaming data packet, the CPU may determine the position of the target streaming data packet in the data stream according to information and the like carried by a packet header of the target streaming data packet, and further determine a graph calculation instruction corresponding to the determined position from preset graph calculation instructions, where the determined graph calculation instruction is the first graph calculation instruction.
For example, if the CPU determines that the position of the received streaming data packet in the data stream is the head packet, it determines the graph computation instruction corresponding to the head packet from the preset graph computation instructions; if it determines that the position is a middle packet, it determines the graph computation instruction corresponding to the middle packet.
Step S43, sending the first graph computation instruction to the graphics processor, so that the graphics processor performs inference on the target streaming data packet based on the spatial address and the execution sequence of the kernel included in the first graph computation instruction, to obtain a first processing result of the target streaming data packet.
After obtaining the first graph computation instruction, the CPU sends it to the GPU. The GPU performs the kernel operations in the execution order included in the first graph computation instruction, using the space at the corresponding space address, to complete inference on the target streaming data packet and obtain its first processing result.
In the technical scheme provided by the embodiment of the present disclosure, the CPU is preset with graph computation instructions, each of which includes the address of the space required by a streaming data packet and the execution order of the kernels that process it. Therefore, when processing a streaming data packet, the CPU only needs to obtain the corresponding graph computation instruction and send it to the GPU; the GPU then completes inference on the streaming data packet based on that instruction. In the embodiment of the present disclosure, the CPU thus completes inference on a streaming data packet with a single interaction with the GPU, which reduces the number of CPU-GPU interactions and improves the processing efficiency of the streaming data.
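A minimal sketch of this single-interaction flow (hypothetical Python; in a real system the preset graph computation instruction could be realized with a mechanism such as a captured CUDA graph, which replays all kernel launches from one submission):

```python
class GraphScheduler:
    """Disclosed flow: one preset graph computation instruction per packet
    position; processing a packet costs a single CPU -> GPU interaction."""

    def __init__(self, preset_graphs):
        # position -> (space address range, kernel execution order)
        self.preset_graphs = preset_graphs
        self.interactions = 0

    def send_graph(self, graph):
        # One submission; the GPU replays every kernel internally.
        self.interactions += 1

    def process_packet(self, position):
        graph = self.preset_graphs[position]  # step S42: pick by position
        self.send_graph(graph)                # step S43: single interaction

presets = {
    "head":   ((0, 100), ["kernel1", "kernel2", "kernel3"]),
    "middle": ((0, 100), ["kernel2", "kernel1", "kernel4"]),
}
sched = GraphScheduler(presets)
for pos in ["head", "middle", "middle"]:
    sched.process_packet(pos)
# 3 packets -> 3 interactions, regardless of how many kernels each graph contains
```

Compared with the baseline, the interaction count now depends only on the number of packets, not on the number of kernels per packet.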
Before receiving a target streaming data packet and processing the streaming data packet, the CPU may pre-construct preset graphics computation instructions corresponding to different positions according to the lengths of the streaming data packets at different positions and the execution sequence of the cores that process the corresponding streaming data packets.
In some embodiments, the CPU may determine the size of the space required for processing the streaming data packets at each position according to their lengths; for each position, allocate a space of the required size and read, from the inference model, the kernel execution order for processing the streaming data packets at that position; and generate, from the address of the allocated space and the kernel execution order read, the graph computation instruction corresponding to that position as the preset graph computation instruction for that position.
For example, the CPU determines, according to the length of the head packet, that the size of the space required for processing it is k1, and allocates space 1 of size k1 with address range d1-d2; according to the length of the middle packet, it determines that the size of the space required is k2, and allocates space 2 of size k2 with address range d3-d4; according to the length of the tail packet, it determines that the size of the space required is k3, and allocates space 3 of size k3 with address range d5-d6.
The CPU reads the preset inference model to obtain computation mode 1 for the head packet; if the execution order of the kernels that process the head packet is kernel 1 → kernel 2 → kernel 3, it generates graph computation instruction 1 corresponding to the head packet from the address range d1-d2 of space 1 allocated to the head packet and computation mode 1.
Similarly, the CPU reads the preset inference model to obtain computation mode 2 for the middle packet; if the execution order of the kernels that process the middle packet is kernel 2 → kernel 1 → kernel 4, it generates graph computation instruction 2 corresponding to the middle packet from the address range d3-d4 of space 2 allocated to the middle packet and computation mode 2.
The CPU reads the preset inference model to obtain computation mode 3 for the tail packet; if the execution order of the kernels that process the tail packet is kernel 1 → kernel 4 → kernel 3, it generates graph computation instruction 3 corresponding to the tail packet from the address range d5-d6 of space 3 allocated to the tail packet and computation mode 3.
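The construction above might be tabulated as follows (a hypothetical sketch; the sizes k1-k3, address ranges d1-d6, and kernel orders are the example values from the text, kept as symbolic strings):

```python
def build_preset_instructions(specs):
    """specs: position -> (space size, address range, kernel execution order).
    Returns position -> graph computation instruction, modeled here as a
    (address range, kernel order) pair; the size is only used at allocation
    time and is not part of the instruction itself."""
    return {pos: (addr, order) for pos, (size, addr, order) in specs.items()}

specs = {
    "head":   ("k1", ("d1", "d2"), ["kernel1", "kernel2", "kernel3"]),
    "middle": ("k2", ("d3", "d4"), ["kernel2", "kernel1", "kernel4"]),
    "tail":   ("k3", ("d5", "d6"), ["kernel1", "kernel4", "kernel3"]),
}
presets = build_preset_instructions(specs)
```

Each entry pairs the space allocated for that position with its computation mode, which is exactly what step S42 later looks up.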
In other embodiments, to save space occupied by processing streaming data and improve efficiency of processing streaming data, embodiments of the present disclosure further provide a method for constructing a graph computation instruction, as shown in fig. 5, which may include steps S51-S53.
At step S51, the size of the space required to process the streaming packets at different locations in the data stream is determined.
The streaming data is processed by a front-end device before being input to the CPU. The front-end device divides the streaming data into a head packet, middle packets, and a tail packet, where the lengths of the head packet and the middle packets are fixed; for example, the head packet is 50 data frames long and each middle packet is 100 data frames long. The length of the tail packet is variable but smaller than that of a middle packet; for example, if a middle packet is 100 data frames long, the tail packet is shorter than 100 data frames.
In this embodiment, the CPU may determine, based on the processing flow of the front-end device, the length of the head packet and the length of a middle packet, and then determine the space size matched to the head-packet length, i.e. the size of the space required for processing the head packet, and the space size matched to the middle-packet length, i.e. the size of the space required for processing a middle packet.
For the tail packet, the CPU may use the maximum possible tail-packet length as the basis for the space requirement, i.e. take the space size matched to the length of a middle packet as the space size required for processing the tail packet. Because the tail packet is shorter than a middle packet, sizing its space by the middle-packet length avoids any subsequent shortfall in the allocated space.
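The per-position sizing rule can be sketched as (hypothetical Python; the frame counts and bytes-per-frame figure are illustrative, not specified by the disclosure):

```python
def required_sizes(head_len, middle_len, bytes_per_frame=4):
    """Size the per-position workspaces from the fixed head/middle lengths.
    The variable-length tail packet is budgeted at the middle-packet
    maximum, so a later allocation can never come up short."""
    return {
        "head":   head_len * bytes_per_frame,
        "middle": middle_len * bytes_per_frame,
        "tail":   middle_len * bytes_per_frame,  # upper bound on tail length
    }

sizes = required_sizes(head_len=50, middle_len=100)
```

Budgeting the tail at the middle-packet size trades a little slack space for the guarantee that the preallocated workspace always fits.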
In the embodiment of the present disclosure, the CPU may also determine, through other manners, the size of the space required for processing the streaming data packets at different positions, which is not limited herein.
Step S52, according to the determined maximum space size, allocating a first space required for processing streaming packets included in the data stream, where the streaming packets at different positions correspond to spaces of corresponding space sizes in the first space.
In the embodiment of the present disclosure, the space required for processing the streaming data packet includes an intermediate IO space, a maximum temporary space required for kernel operation of each layer, a state space required for kernel of each layer, and the like. Each space is processed in the same manner, and in the embodiment of the present disclosure, the various spaces required for processing the streaming data packet are collectively referred to as the spaces required for processing the streaming data packet.
For example, the CPU determines that the size of the space required to process the head packet is k1, the size required for a middle packet is k2, and the size required for the tail packet is k3, where k2 is greater than k1 and k2 is greater than k3; k2 is therefore the determined maximum space size. The CPU allocates a space X1 required for processing streaming data packets, of size k2 and with address range dx1-dx2. The sub-range dx1-dx3 of space X1 corresponds to the head packet, while the whole of space X1 corresponds to a middle packet and likewise to the tail packet.
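Step S52's single allocation might look like this (a hypothetical Python sketch; the numeric sizes stand in for k1, k2, and k3, and `memoryview` models the sub-ranges of the first space used by each position):

```python
def allocate_first_space(sizes):
    """Allocate a single first space sized to the maximum requirement;
    every packet position reuses a prefix of it."""
    first_space = bytearray(max(sizes.values()))  # one allocation in total
    # position -> the sub-range of the first space that position uses
    views = {pos: memoryview(first_space)[:size] for pos, size in sizes.items()}
    return first_space, views

sizes = {"head": 200, "middle": 400, "tail": 300}  # k1, k2, k3
first_space, views = allocate_first_space(sizes)
```

Because only prefixes of one buffer are handed out, multiple per-packet allocations collapse into one, and no position can receive less space than it needs.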
Step S53, generating preset graph computation instructions corresponding to the different positions, based on the first execution order of the kernels that process streaming data packets at different positions and the address of the first space.
In the embodiment of the present disclosure, the CPU may read the inference model to obtain the execution order of the kernels for streaming data packets at different positions, that is, the first execution order of the kernels, and then generate the graph computation instruction corresponding to each position based on that first execution order and the address of the first space, thereby obtaining the preset graph computation instructions.
Subsequently, the CPU executes the above-described steps S41-S43 based on the obtained preset graph computation instructions to complete the processing of the streaming data.
In the technical solution provided by the embodiment of the present disclosure, the CPU determines the maximum required space size from the space sizes needed to process streaming data packets at different positions, allocates space once according to that maximum, and generates the graph computation instructions. This reduces multiple space allocations to a single allocation: streaming data packets at different positions multiplex the allocated space, saving the space occupied by streaming data processing. In addition, because the CPU allocates space according to the maximum required size, insufficient space when processing streaming data packets at any position is avoided.
In addition, because the CPU pre-builds the graph computation instructions, when processing a streaming data packet it only needs to send the corresponding graph computation instruction to the GPU once for the GPU to complete inference on the packet, which improves the processing efficiency of the streaming data.
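The pre-build-then-replay pattern of steps S53 and S41-S43 can be sketched as below. All names (`GraphInstruction`, the toy kernels, the address value) are assumed for illustration; the idea is that a kernel sequence bound to the shared first space is recorded once per packet position, so that at runtime a single dispatch replays the whole sequence instead of issuing one command per kernel.

```python
# Sketch of a preset graph computation instruction: the first execution order
# of the kernels plus the address of the first space, replayed in one dispatch.

class GraphInstruction:
    def __init__(self, kernels, space_addr):
        self.kernels = kernels        # first execution order of the kernels
        self.space_addr = space_addr  # address of the shared first space

    def launch(self, packet):
        # a real GPU would replay this sequence internally after one dispatch
        result = packet
        for kernel in self.kernels:
            result = kernel(result, self.space_addr)
        return result

preset = {
    "head": GraphInstruction([lambda x, s: x + 1], space_addr=0x1000),
    "middle": GraphInstruction([lambda x, s: x * 2, lambda x, s: x - 3], 0x1000),
}

# processing a middle packet is a single dispatch, not one per kernel
assert preset["middle"].launch(10) == 17
```

This mirrors the single CPU-GPU interaction the scheme describes: the per-kernel loop runs inside the replayed instruction, not across the CPU-GPU boundary.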
In the embodiment of the disclosure, the lengths of the head packet and the middle packet are fixed, while the length of the tail packet is variable. To further improve the processing efficiency of the streaming data, the CPU may construct preset graph computation instructions for the fixed-length head packet and middle packet in the manner shown in fig. 5, and complete the processing of the variable-length tail packet in other manners.
In this case, the preset graph computation instructions include a graph computation instruction corresponding to the head packet and a graph computation instruction corresponding to the middle packet. Based on this, the disclosed embodiments provide a streaming data processing method, as shown in fig. 6, which may include steps S61-S64.
In step S61, the target streaming packet is received. See the related description of step S41.
Step S62, determine whether the position of the target streaming data packet in the data stream is the head packet or the middle packet.
If the determination result obtained in step S62 indicates that the target streaming data packet is the head packet or the middle packet, the CPU executes step S63.
Step S63, determining, from preset graph computation instructions, a first graph computation instruction corresponding to the position of the target streaming data packet in the data stream, where the graph computation instruction includes the space address required by the streaming data packet and the execution order of the kernels that process the streaming data packet. See the related description of step S42.
Step S64, sending the first graph computation instruction to the graphics processor, so that the graphics processor infers the target streaming data packet based on the space address and the kernel execution order included in the first graph computation instruction, to obtain a first processing result of the target streaming data packet. See the related description of step S43.
If the CPU also processed the tail packet based on a preset graph computation instruction, it would have to process the packet according to the space indicated by that instruction. Because the length of the tail packet is variable and its space is allocated according to the maximum length, some tail packets are necessarily shorter than the space indicated by the graph computation instruction. In that case, the CPU would need extra operations, such as adding a mask, to pad the packet up to the indicated space size, which introduces additional computation overhead.
In the technical solution provided by the embodiment of the present disclosure, the CPU allocates space in advance for the fixed-length head packet and middle packet, constructs the corresponding graph computation instructions, and processes the head packet and the middle packet according to those pre-built instructions. The CPU does not use a pre-built graph computation instruction to process the tail packet, which avoids introducing the extra computation overhead described above and improves the processing efficiency of the streaming data.
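The position-based dispatch of steps S61-S64 can be sketched as follows. The function and handler names are assumed for illustration: fixed-length head and middle packets go through their pre-built graph computation instruction, while the variable-length tail packet falls through to a separate path, so no mask or padding work is needed.

```python
# Sketch of the fig. 6 dispatch: pre-built instructions for fixed-length
# positions, a fall-back handler for the variable-length tail packet.

def process_packet(position, packet, preset_instructions, tail_handler):
    if position in ("head", "middle"):        # fixed length: one dispatch
        return preset_instructions[position](packet)
    return tail_handler(packet)               # variable length: no padding

preset = {"head": lambda p: p.upper(), "middle": lambda p: p.lower()}
assert process_packet("head", "abc", preset, tail_handler=len) == "ABC"
assert process_packet("tail", "abcd", preset, tail_handler=len) == 4
```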
In some embodiments, a streaming data processing method is provided, as shown in FIG. 7, which may include steps S71-S76. Steps S71-S74 are the same as steps S61-S64. If the determination result obtained in step S72 indicates that the target streaming data packet is the tail packet, step S75 is performed.
Step S75, acquiring a second space required for processing the target streaming data packet, and reading a second execution sequence of the kernel for processing the target streaming data packet.
When the CPU receives the tail packet, it may obtain the length of the tail packet and, from it, the space required for processing the tail packet, that is, the second space required for processing the target streaming data packet.
In the embodiment of the present disclosure, the second space may be a space newly allocated by the CPU. That is, step S75 may be: allocating a second space required for processing the target streaming data packet from the space indicated by the space address included in the preset graph computation instruction.
Alternatively, the second space may be part or all of the already-allocated first space. That is, step S75 may be: determining the second space required for processing the target streaming data packet from the space indicated by the space address included in the preset graph computation instruction. Reusing the first space reduces the occupied space and the number of space allocations, further improving the processing efficiency of the streaming data.
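The two options for obtaining the second space in step S75 can be sketched as below, with assumed names: either allocate fresh storage sized to the actual tail length, or reuse a prefix of the already-allocated first space and avoid a second allocation.

```python
# Sketch of step S75: the second space as a new allocation, or as a
# zero-copy view into the first space (the tail never exceeds it).

def second_space(tail_len, first_space, reuse=True):
    if reuse:
        # part of the first space: tail length <= middle-packet space size
        assert tail_len <= len(first_space)
        return memoryview(first_space)[:tail_len]  # no new allocation
    return bytearray(tail_len)                     # freshly allocated space

first = bytearray(4096)
assert len(second_space(1000, first, reuse=True)) == 1000
assert len(second_space(1000, first, reuse=False)) == 1000
```

The `reuse=True` branch is what saves space and allocation count, since the returned view shares the first space's storage.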
Step S76, sending the execution commands of the kernels to the graphics processor according to the second execution order, so that the graphics processor, based on the execution commands of the kernels, infers the target streaming data packet using the address of the second space to obtain a second processing result of the target streaming data packet.
In the embodiment of the present disclosure, the CPU may read the inference model to obtain the execution order of the kernels that process the tail packet, that is, the second execution order of the kernels, and send the execution commands of the kernels to the GPU in that order. Based on each execution command, the GPU performs the kernel operation using the second space, that is, it infers the target streaming data packet to obtain a second processing result of the target streaming data packet.
In the technical solution provided by the embodiment of the present disclosure, the CPU processes the tail packet in the computation mode of the related art: it sequentially sends the execution commands of the kernels to the GPU in the second execution order, interacting with the GPU multiple times to complete the processing of the streaming data. Because the tail packet then no longer needs to be padded to a fixed size, the CPU introduces no extra computation overhead, and the processing efficiency of the streaming data is improved.
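The fall-back path of steps S75-S76 can be sketched as follows, with assumed names: for the tail packet the CPU issues one execution command per kernel in the second execution order, rather than replaying a pre-built graph computation instruction.

```python
# Sketch of the related-art mode used for the tail packet: one CPU-GPU
# interaction per kernel, in the second execution order, using the second space.

def run_tail_eagerly(packet, kernels, space):
    result = packet
    for kernel in kernels:            # one execution command per kernel
        result = kernel(result, space)
    return result

kernels = [lambda x, s: x + 1, lambda x, s: x * 3]  # second execution order
assert run_tail_eagerly(2, kernels, space=None) == 9
```

The trade-off is more CPU-GPU interactions for the tail packet in exchange for no mask or padding computation.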
In some embodiments, a streaming data processing method is provided, as shown in FIG. 8, which may include steps S81-S87. Steps S81-S84 are the same as steps S61-S64. If the determination result obtained in step S82 indicates that the target streaming data packet is the tail packet, step S85 is performed.
Step S85, obtaining a second space required for processing the target streaming data packet, and reading a second execution order of the kernel for processing the target streaming data packet.
When the CPU receives the tail packet, it may obtain the length of the tail packet and, from it, the space required for processing the tail packet, that is, the second space required for processing the target streaming data packet. The second space may be a space newly allocated by the CPU, or part or all of the already-allocated first space; reusing the first space reduces the occupied space and the number of space allocations, further improving the processing efficiency of the streaming data.
In addition, the CPU may read the inference model to obtain an execution order of the cores that process the tail packet, i.e., a second execution order of the cores.
In step S86, a second graph computation instruction is generated based on the second space and the second execution order.
In the embodiment of the present disclosure, the CPU generates the graph computation instruction corresponding to the target streaming data packet (that is, the tail packet), namely the second graph computation instruction, based on the second space and the second execution order.
Step S87, sending the second graph computation instruction to the graphics processor, so that the graphics processor infers the target streaming data packet based on the second graph computation instruction to obtain a third processing result of the target streaming data packet. This step is similar to step S43; see the related description of step S43.
In the technical solution provided by the embodiment of the present disclosure, when processing the tail packet, the CPU generates on the fly a second graph computation instruction corresponding to the tail packet, and the space size indicated by that instruction matches the length of the tail packet. The CPU can therefore process the tail packet based on the second graph computation instruction without introducing extra computation overhead, improving the processing efficiency of the streaming data.
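The on-the-fly construction of steps S85-S87 can be sketched as below, with assumed names: when the tail packet arrives, a second graph computation instruction is built whose space is sized exactly to the tail length, so the replayed kernel sequence needs no mask or padding work.

```python
# Sketch of steps S85-S87: build the second graph computation instruction
# from the second space (sized to the tail) and the second execution order.

def build_tail_graph(tail_len, kernels):
    space = bytearray(tail_len)           # second space matches the tail length
    def launch(packet):
        result = packet
        for kernel in kernels:            # second execution order of the kernels
            result = kernel(result, space)
        return result
    return launch                         # one dispatch replays the sequence

graph = build_tail_graph(512, [lambda x, s: x + len(s)])
assert graph(1) == 513
```

Compared with the fig. 7 path, this keeps the single-dispatch property of graph computation instructions at the cost of building the instruction when the tail packet arrives.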
In the embodiment of the present disclosure, after obtaining a processing result from the GPU (such as the first, second, or third processing result), the CPU may perform subsequent processing as required, for example, feeding the result back to the user, or evaluating the inference effect of the inference model from the result and updating the inference policy accordingly. This is not limited here.
Corresponding to the streaming data processing method, an embodiment of the present disclosure further provides a streaming data processing apparatus, as shown in fig. 9, including:
a receiving module 91, configured to receive a target streaming data packet;
the first determining module 92 is configured to determine, from preset graph computation instructions, a first graph computation instruction corresponding to the position of the target streaming data packet in the data stream, where the graph computation instruction includes the space address required by the streaming data packet and the execution order of the kernels that process the streaming data packet; and
the first inference module 93 is configured to send the first graph computation instruction to the graphics processor, so that the graphics processor infers the target streaming data packet based on the space address and the kernel execution order included in the first graph computation instruction, to obtain a first processing result of the target streaming data packet.
In some embodiments, the streaming data processing apparatus may further include:
the second determining module is used for determining the space size required for processing streaming data packets at different positions in the data stream;
a first allocation module, configured to allocate a first space required for processing streaming data packets included in the data stream according to the determined maximum space size, where streaming data packets at different positions correspond to spaces of corresponding space sizes in the first space;
the first generating module is used for generating preset graph calculation instructions corresponding to different positions based on a first execution sequence of a kernel for processing streaming data packets at different positions and an address of a first space.
In some embodiments, the position of the streaming data packet in the data stream includes a first packet and a middle packet with fixed lengths, and the preset graph calculation instruction includes a graph calculation instruction corresponding to the first packet and a graph calculation instruction corresponding to the middle packet; the streaming data processing apparatus described above may further include:
the judging module is used for judging whether the position of the target streaming data packet in the data stream is a first packet or a middle packet to obtain a judging result;
the first determining module is specifically configured to determine, in response to the determination result indicating that the position of the target streaming data packet in the data stream is the first packet or the middle packet, a first graph calculation instruction corresponding to the position of the target streaming data packet in the data stream from preset graph calculation instructions.
In some embodiments, the position of the streaming data packet in the data stream further comprises a variable length trailer packet; the streaming data processing apparatus may further include:
the second distribution module is used for responding to the judgment result and indicating that the position of the target streaming data packet in the data stream is a tail packet, acquiring a second space required by processing the target streaming data packet, and reading a second execution sequence of a kernel for processing the target streaming data packet;
and the second reasoning module is used for sending the execution command of the kernel to the graphics processor according to the second execution sequence, so that the graphics processor performs reasoning on the target streaming data packet by using the second space based on the execution command of the kernel to obtain a second processing result of the target streaming data packet.
In some embodiments, the position of the streaming data packet in the data stream further comprises a variable length trailer; the streaming data processing apparatus may further include:
the second distribution module is used for responding to the judgment result and indicating that the position of the target streaming data packet in the data stream is a tail packet, acquiring a second space required by processing the target streaming data packet, and reading a second execution sequence of a kernel for processing the target streaming data packet;
the second generating module is used for generating a second graph calculating instruction based on the second space and the second execution sequence;
and the third reasoning module is used for sending the second graph calculation instruction to the graph processor so that the graph processor can reason the target streaming data packet based on the second graph calculation instruction to obtain a third processing result of the target streaming data packet.
In the technical solution provided by the embodiment of the present disclosure, the CPU presets graph computation instructions, where each graph computation instruction includes the space address required by a streaming data packet and the execution order of the kernels that process the streaming data packet. Therefore, when processing a streaming data packet, the CPU only needs to obtain the corresponding graph computation instruction and send it to the GPU; the GPU can then complete inference on the streaming data packet based on that instruction. In this way, the CPU completes inference on a streaming data packet with a single interaction with the GPU, which reduces the number of CPU-GPU interactions and improves the processing efficiency of the streaming data.
In the technical solutions of the embodiments of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order and good morals.
According to an embodiment of the present disclosure, an electronic device, a readable storage medium and a computer program product are also provided.
Fig. 10 shows a schematic block diagram of an electronic device 100 that may be used to implement the streaming data processing method of an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the apparatus 100 includes a computing unit 101 that can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 102 or a computer program loaded from a storage unit 108 into a Random Access Memory (RAM) 103. In the RAM 103, various programs and data necessary for the operation of the device 100 can also be stored. The computing unit 101, the ROM 102, and the RAM 103 are connected to each other via a bus 104. An input/output (I/O) interface 105 is also connected to bus 104.
A number of components in the device 100 are connected to the I/O interface 105, including: an input unit 106 such as a keyboard, a mouse, and the like; an output unit 107 such as various types of displays, speakers, and the like; a storage unit 108, such as a magnetic disk, optical disk, or the like; and a communication unit 109 such as a network card, modem, wireless communication transceiver, etc. The communication unit 109 allows the device 100 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 101 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 101 executes the respective methods and processes described above, such as the streaming data processing method. For example, in some embodiments, the streaming data processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 100 via the ROM 102 and/or the communication unit 109. When the computer program is loaded into the RAM 103 and executed by the computing unit 101, one or more steps of the streaming data processing method described above may be performed. Alternatively, in other embodiments, the computing unit 101 may be configured to perform the streaming data processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 11, including:
at least one processor 111; and
a memory 112 communicatively coupled to the at least one processor 111; wherein
the memory 112 stores instructions executable by the at least one processor 111, the instructions being executable by the at least one processor 111 to enable the at least one processor 111 to perform any of the streaming data processing methods described above.
The disclosed embodiments also provide a non-transitory computer readable storage medium storing computer instructions for causing the computer to execute the streaming data processing method according to any one of the above.
An embodiment of the present disclosure also provides a computer program product, including a computer program, which, when executed by a processor, implements the streaming data processing method according to any one of the above.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, which is not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. A streaming data processing method is applied to a central processing unit and comprises the following steps:
receiving a target streaming data packet;
determining a first graph calculation instruction corresponding to the position of the target streaming data packet in a data stream from preset graph calculation instructions, wherein the graph calculation instruction comprises a space address required by the streaming data packet and an execution sequence of a kernel for processing the streaming data packet;
and sending the first graph calculation instruction to a graph processor, so that the graph processor infers the target streaming data packet based on a space address and an execution sequence of a kernel included in the first graph calculation instruction, and obtains a first processing result of the target streaming data packet.
2. The method of claim 1, prior to receiving the target streaming data packet, the method further comprising:
determining the size of space required for processing streaming data packets at different positions in the data stream;
according to the determined maximum space size, allocating a first space required for processing streaming data packets included in the data stream, wherein the streaming data packets at different positions correspond to spaces with corresponding space sizes in the first space;
and generating preset graph calculation instructions corresponding to different positions based on a first execution sequence of a kernel for processing the streaming data packets at different positions and the address of the first space.
3. The method according to claim 1, wherein the position of the streaming data packet in the data stream includes a header packet and a middle packet with fixed lengths, and the preset graphics computation instruction includes a graphics computation instruction corresponding to the header packet and a graphics computation instruction corresponding to the middle packet; the method further comprises the following steps:
judging whether the position of the target streaming data packet in the data stream is a first packet or a middle packet to obtain a judgment result;
and in response to the judgment result indicating that the position of the target streaming data packet in the data stream is a head packet or a middle packet, executing a step of determining a first graph calculation instruction corresponding to the position of the target streaming data packet in the data stream from preset graph calculation instructions.
4. The method of claim 3, wherein the position of the streaming data packet in the data stream further comprises a variable length trailer packet; the method further comprises the following steps:
responding to the judgment result to indicate that the position of the target streaming data packet in the data stream is a tail packet, acquiring a second space required by processing the target streaming data packet, and reading a second execution sequence of a kernel for processing the target streaming data packet;
and sending the execution command of the kernel to the graphics processor according to the second execution sequence, so that the graphics processor infers the target streaming data packet by using the second space based on the execution command of the kernel to obtain a second processing result of the target streaming data packet.
5. The method of claim 3, wherein the position of the streaming data packet in the data stream further comprises a variable length trailer packet; the method further comprises the following steps:
responding to the judgment result to indicate that the position of the target streaming data packet in the data stream is a tail packet, acquiring a second space required for processing the target streaming data packet, and reading a second execution sequence of a kernel for processing the target streaming data packet;
generating a second graphical computation instruction based on the second space and the second execution order;
and sending the second graph calculation instruction to the graph processor, so that the graph processor infers the target streaming data packet based on the second graph calculation instruction, and obtains a third processing result of the target streaming data packet.
6. The method of claim 4 or 5, wherein the step of obtaining the second space required for processing the target streaming data packet comprises:
determining a second space required for processing the target streaming data packet from the space indicated by the space address included in the preset graph calculation instruction; or
allocating a second space required for processing the target streaming data packet from the space indicated by the space address included in the preset graph calculation instruction.
7. An apparatus for streaming data processing, comprising:
a receiving module, configured to receive a target streaming data packet;
a first determining module, configured to determine, from preset graph computation instructions, a first graph computation instruction corresponding to the position of the target streaming data packet in a data stream, wherein the graph computation instruction comprises a space address required by the streaming data packet and an execution order of a kernel for processing the streaming data packet;
and a first inference module, configured to send the first graph computation instruction to a graphics processor, so that the graphics processor performs inference on the target streaming data packet based on the space address and the kernel execution order included in the first graph computation instruction, to obtain a first processing result of the target streaming data packet.
8. The apparatus of claim 7, further comprising:
a second determining module, configured to determine the size of the space required for processing streaming data packets at different positions in the data stream;
a first allocating module, configured to allocate, according to the maximum determined space size, a first space required for processing the streaming data packets included in the data stream, wherein the streaming data packets at the different positions correspond to spaces of corresponding sizes within the first space;
and a first generating module, configured to generate preset graph computation instructions corresponding to the different positions based on a first execution order of a kernel for processing the streaming data packets at the different positions and the address of the first space.
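The allocation scheme in claim 8 can be sketched as follows; this is a minimal, assumed illustration (the position names and byte counts are made up), showing the idea that the space needed at each fixed-length position is measured first, a single first space is allocated at the maximum of those sizes, and each position then works inside its own appropriately sized view of that one buffer.

```python
# Illustrative sketch of "allocate once at the maximum size": one shared
# first space, sized to the largest per-position requirement, with each
# position reusing a prefix view of it. Numbers are hypothetical.

POSITION_SPACE = {"head": 640, "middle": 512}  # bytes needed per position

def allocate_first_space(requirements):
    """Allocate one buffer large enough for any fixed-length position."""
    return bytearray(max(requirements.values()))

def view_for(position, first_space):
    """Each position uses a view of the shared first space of its own size."""
    return memoryview(first_space)[:POSITION_SPACE[position]]
```

Allocating a single maximum-size region up front avoids per-packet allocation on the fixed-length fast path, which is what lets the graph computation instructions bake in a stable space address.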
9. The apparatus of claim 7, wherein the position of the streaming data packet in the data stream includes a head packet and a middle packet, each of fixed length, and the preset graph computation instructions include a graph computation instruction corresponding to the head packet and a graph computation instruction corresponding to the middle packet; the apparatus further comprising:
a judging module, configured to judge whether the position of the target streaming data packet in the data stream is a head packet or a middle packet, to obtain a judgment result;
wherein the first determining module is specifically configured to determine, in response to the judgment result indicating that the position of the target streaming data packet in the data stream is a head packet or a middle packet, the first graph computation instruction corresponding to that position from the preset graph computation instructions.
10. The apparatus of claim 9, wherein the position of the streaming data packet in the data stream further comprises a variable-length tail packet; the apparatus further comprising:
a second allocation module, configured to, in response to the judgment result indicating that the position of the target streaming data packet in the data stream is a tail packet, acquire a second space required for processing the target streaming data packet and read a second execution order of a kernel for processing the target streaming data packet;
and a second inference module, configured to send an execution command of the kernel to the graphics processor according to the second execution order, so that the graphics processor performs inference on the target streaming data packet by using the second space based on the execution command of the kernel, to obtain a second processing result of the target streaming data packet.
11. The apparatus of claim 9, wherein the position of the streaming data packet in the data stream further comprises a variable-length tail packet; the apparatus further comprising:
a second allocation module, configured to, in response to the judgment result indicating that the position of the target streaming data packet in the data stream is a tail packet, acquire a second space required for processing the target streaming data packet and read a second execution order of a kernel for processing the target streaming data packet;
a second generating module, configured to generate a second graph computation instruction based on the second space and the second execution order;
and a third inference module, configured to send the second graph computation instruction to the graphics processor, so that the graphics processor performs inference on the target streaming data packet based on the second graph computation instruction, to obtain a third processing result of the target streaming data packet.
12. The apparatus according to claim 10 or 11, wherein the second allocation module is specifically configured to:
determine the second space required for processing the target streaming data packet within the space indicated by the space address included in the preset graph computation instruction; or
allocate the second space required for processing the target streaming data packet from the space indicated by the space address included in the preset graph computation instruction.
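Claims 6 and 12 each give two alternatives for obtaining the second space of a variable-length tail packet: determine it within the region the preset graph computation instruction already points at, or allocate it from that region. A minimal, assumed sketch of the two paths (function names and sizes are illustrative, not from the patent):

```python
# Two hypothetical ways to obtain the "second space" for a tail packet.

def second_space_within(first_space, needed):
    # alternative 1: determine the second space inside the existing space,
    # reusing a slice of it when the tail packet fits
    if needed <= len(first_space):
        return memoryview(first_space)[:needed]
    return None  # caller must fall back to a fresh allocation

def second_space_allocated(needed):
    # alternative 2: allocate a new second space of exactly the needed size
    return bytearray(needed)
```

Reuse avoids an allocation on the tail path when the pre-sized first space is already large enough; a fresh allocation handles the case where the tail packet's actual size exceeds it.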
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
CN202210713851.5A 2022-06-22 2022-06-22 Streaming data processing method, device, equipment and storage medium Pending CN115129488A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210713851.5A CN115129488A (en) 2022-06-22 2022-06-22 Streaming data processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115129488A true CN115129488A (en) 2022-09-30

Family

ID=83379227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210713851.5A Pending CN115129488A (en) 2022-06-22 2022-06-22 Streaming data processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115129488A (en)

Similar Documents

Publication Publication Date Title
CN113342345A (en) Operator fusion method and device of deep learning framework
CN112948079B (en) Task scheduling method, device, equipment and computer storage medium
US11249811B2 (en) Method, apparatus, and computer program product for processing computing task
CN113849312A (en) Data processing task allocation method and device, electronic equipment and storage medium
CN112508768B (en) Single-operator multi-model pipeline reasoning method, system, electronic equipment and medium
CN112560996A (en) User portrait recognition model training method, device, readable storage medium and product
EP4060496A2 (en) Method, apparatus, device and storage medium for running inference service platform
CN115880132A (en) Graphics processor, matrix multiplication task processing method, device and storage medium
CN114911598A (en) Task scheduling method, device, equipment and storage medium
CN112529202A (en) Quantum entanglement state distribution method, device, equipment, storage medium and product
CN114819084A (en) Model reasoning method, device, equipment and storage medium
CN113961289A (en) Data processing method, device, equipment and storage medium
CN114386577A (en) Method, apparatus, and storage medium for executing deep learning model
CN114937478B (en) Method for training a model, method and apparatus for generating molecules
CN115129488A (en) Streaming data processing method, device, equipment and storage medium
CN113657408B (en) Method and device for determining image characteristics, electronic equipment and storage medium
CN113408304B (en) Text translation method and device, electronic equipment and storage medium
CN113568706A (en) Container adjusting method and device for service, electronic equipment and storage medium
CN113778645A (en) Task scheduling method, device and equipment based on edge calculation and storage medium
CN114222073B (en) Video output method, video output device, electronic equipment and storage medium
CN115495312B (en) Service request processing method and device
CN116560847B (en) Task processing method, device, electronic equipment and storage medium
CN115860077B (en) Method, device, equipment and storage medium for processing state data
CN117472574A (en) Data operation method, device, storage medium and acceleration card
CN115600687A (en) Model training method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination