CN111614758A - Code stream forwarding method and device, readable storage medium and computing device

Code stream forwarding method and device, readable storage medium and computing device

Info

Publication number
CN111614758A
Authority
CN
China
Prior art keywords
task
code stream
thread
sending
receiving
Prior art date
Legal status
Granted
Application number
CN202010431862.5A
Other languages
Chinese (zh)
Other versions
CN111614758B (en)
Inventor
Li Dabo (李大波)
Current Assignee
Haoyun Technologies Co Ltd
Original Assignee
Haoyun Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Haoyun Technologies Co Ltd
Priority to CN202010431862.5A
Publication of CN111614758A
Application granted
Publication of CN111614758B
Status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the disclosure provide a code stream forwarding method, a code stream forwarding apparatus, a readable storage medium and a computing device, which are used to improve code stream forwarding efficiency and increase the load a server can bear. The method comprises the following steps: a task scheduler receives a receiving task of any code stream; the task scheduler selects a first thread from the idle threads of a thread pool; the first thread executes the receiving task of the code stream; the task scheduler receives a sending task of the code stream; the task scheduler selects a second thread from the idle threads of the thread pool; the second thread executes the sending task of the code stream. The tasks and the threads are in a many-to-many relationship.

Description

Code stream forwarding method and device, readable storage medium and computing device
Technical Field
The present disclosure relates to the field of network communication technologies, and in particular, to a method and an apparatus for forwarding a code stream, a readable storage medium, and a computing device.
Background
With the development of audio and video technologies, applications related to audio and video are flourishing, such as live video streaming, security monitoring and video conferencing. These applications share a common problem: as the number of users grows, more and more hardware server resources are required, and the number of users a single hardware server can bear becomes a serious performance bottleneck. How to increase the number of concurrent users a single server can support has become a difficult problem for the industry.
For example, in current industry practice a single server with 1 Gbit/s of bandwidth can sustain only about 500 Mbit/s of stable concurrent code streams, so the hardware resources of the server are not fully utilized.
Disclosure of Invention
To this end, the present disclosure provides a code stream forwarding method, apparatus, readable storage medium and computing device, in an effort to solve or at least alleviate at least one of the problems presented above.
According to an aspect of the embodiments of the present disclosure, a method for forwarding a code stream is provided, including:
a task scheduler receives a receiving task of any code stream;
the task scheduler selects a first thread from idle threads of a thread pool;
the first thread executes a receiving task of any code stream;
the task scheduler receives a sending task of any code stream;
the task scheduler selects a second thread from idle threads of the thread pool;
the second thread executes a sending task of any code stream;
wherein each task and each thread are in a many-to-many relationship (an illustrative sketch of this relationship is given below).
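As an aid to understanding (not part of the original patent text), the following minimal sketch shows one way the many-to-many relationship between tasks and threads can be realized. It assumes Java's standard ExecutorService as the thread pool; the class name CodeStreamScheduler and its methods are invented for illustration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: any task (receive, send or copy) may run on any idle pool thread.
public class CodeStreamScheduler {
    // One shared pool; tasks and threads are therefore in a many-to-many relationship.
    private final ExecutorService threadPool;

    public CodeStreamScheduler(int poolSize) {
        this.threadPool = Executors.newFixedThreadPool(poolSize);
    }

    // The scheduler does not bind a code stream to a thread; it hands the task
    // to whichever pool thread becomes idle first.
    public void submit(Runnable streamTask) {
        threadPool.submit(streamTask);
    }

    public void shutdown() {
        threadPool.shutdown();
    }
}
```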
Optionally, after the task scheduler receives the sending task of any code stream, the method further comprises:
the task scheduler determines the receiving task of the code stream corresponding to the sending task according to the parameter information of the sending task;
and associating the sending task and the receiving task of the code stream.
Optionally, the first thread executing the receiving task of any code stream comprises:
the first thread receives the code stream and writes it into a first cache region;
the second thread executing the sending task of any code stream comprises:
the second thread sends the code stream of a second cache region;
after the sending task and the receiving task of any code stream are associated, the method further comprises:
the task scheduler receives a copy task of the code stream;
the task scheduler selects a third thread from the idle threads of the thread pool;
the third thread writes the data of the first cache region into the second cache region according to the association information of the sending task and the receiving task of the code stream (an illustrative sketch of this association and copy step is given below).
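The following sketch is illustrative only; the class name StreamAssociation, the stream-identifier parameter and the byte-array packet type are all invented. It suggests how a sending task could be associated with the receiving task of the same code stream, and how the resulting copy task moves data from the first cache region to the second cache region.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: associate a sending task with the receiving task of the same
// code stream and build the copy task that links the two cache regions.
public class StreamAssociation {
    // First cache regions indexed by a stream identifier taken from the task parameters.
    private final Map<String, BlockingQueue<byte[]>> firstCacheByStream = new ConcurrentHashMap<>();

    public void registerReceiveTask(String streamId, BlockingQueue<byte[]> firstCache) {
        firstCacheByStream.put(streamId, firstCache);
    }

    // Called when a sending task arrives: look up the receiving task of the same stream
    // by its parameter information and return a copy task linking the two cache regions.
    public Runnable associate(String streamId, BlockingQueue<byte[]> secondCache) {
        BlockingQueue<byte[]> firstCache = firstCacheByStream.get(streamId);
        if (firstCache == null) {
            return () -> { };                   // no matching receiving task yet
        }
        return () -> {
            byte[] packet = firstCache.poll();  // oldest packet of the first cache region
            if (packet != null) {
                secondCache.offer(packet);      // appended to the second cache region
            }
        };
    }
}
```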
Optionally, the length of the first cache region or the second cache region is predefined according to the parameters of the code stream.
Optionally, a thread schedules the first cache region or the second cache region in a first-in first-out manner (a buffer-sizing sketch is given below).
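One possible interpretation (not prescribed by the patent) is to size each cache region from the code stream parameters, for example the bit rate and the buffered duration, and to back it with a bounded FIFO queue. The helper below is a sketch under that assumption; the class and parameter names are invented.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch: a cache region as a bounded FIFO queue whose capacity is
// derived from the code stream parameters (bit rate and buffered duration).
public final class StreamBuffers {
    // Example: a 4 Mbit/s stream buffered for 2 seconds with 1400-byte packets.
    public static BlockingQueue<byte[]> newCacheRegion(int bitRateBitsPerSec,
                                                       int bufferSeconds,
                                                       int packetSizeBytes) {
        long totalBytes = (long) bitRateBitsPerSec / 8 * bufferSeconds;
        int capacityInPackets = (int) Math.max(1, totalBytes / packetSizeBytes);
        // ArrayBlockingQueue preserves insertion order, i.e. first-in first-out.
        return new ArrayBlockingQueue<>(capacityInPackets);
    }
}
```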
Optionally, the task scheduler allocates threads for the code stream sending task, the code stream receiving task or the code stream copying task in a first-in first-out mode.
Optionally, the method further comprises:
the task scheduler marks an action parameter and an action type for each code stream sending task, code stream receiving task or code stream copying task;
the action parameters comprise a transmission protocol and packet head and packet tail parameters;
the action types include receive, send and copy (an illustrative task-descriptor sketch is given below).
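For illustration only, the marks described above might be carried by a small task descriptor such as the one sketched below; the class and field names are invented, and the packet head and tail parameters are assumed to be raw byte arrays.

```java
// Illustrative sketch of the marks attached to each task: an action type
// (receive, send or copy) and action parameters such as the transport protocol
// and the packet head and packet tail descriptions.
public class StreamTaskDescriptor {
    public enum ActionType { RECEIVE, SEND, COPY }
    public enum Protocol { TCP, UDP }

    private final ActionType actionType;
    private final Protocol protocol;
    private final byte[] packetHead;   // assumed packet-head parameter
    private final byte[] packetTail;   // assumed packet-tail parameter

    public StreamTaskDescriptor(ActionType actionType, Protocol protocol,
                                byte[] packetHead, byte[] packetTail) {
        this.actionType = actionType;
        this.protocol = protocol;
        this.packetHead = packetHead;
        this.packetTail = packetTail;
    }

    public ActionType actionType() { return actionType; }
    public Protocol protocol()     { return protocol; }
    public byte[] packetHead()     { return packetHead; }
    public byte[] packetTail()     { return packetTail; }
}
```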
According to another aspect of the present disclosure, there is provided a code stream forwarding apparatus, including:
the task scheduler is used for receiving a receiving task of any code stream and selecting a first thread from idle threads of a thread pool, wherein the first thread executes the receiving task of the code stream; and for receiving a sending task of the code stream and selecting a second thread from idle threads of the thread pool, wherein the second thread executes the sending task of the code stream; and each task and each thread are in a many-to-many relationship;
a thread pool comprising a plurality of threads;
a receiving unit, configured to receive a code stream;
and the sending unit is used for sending the code stream.
According to still another aspect of the present disclosure, a readable storage medium is provided, which has executable instructions stored thereon; when the executable instructions are executed, a computer is caused to perform the operations comprised in the code stream forwarding method described above.
According to yet another aspect of the present disclosure, there is provided a computing device comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors to perform operations included in the codestream forwarding method described above.
According to the embodiments of the disclosure, a thread pool is used to allocate threads to the receiving and sending tasks of each code stream, and the tasks and the threads are in a many-to-many relationship, that is, any task can be executed by any idle thread. This greatly improves thread utilization, reduces the number of idle threads, and thereby increases the code stream forwarding bandwidth.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
FIG. 1 is a block diagram of an exemplary computing device;
fig. 2 is a flowchart of a code stream forwarding method according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a code stream forwarding method according to another embodiment of the present disclosure;
fig. 4 is a structural diagram of a codestream forwarding apparatus according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a block diagram of an example computing device 100 arranged to implement a codestream forwarding method according to the present disclosure. In a basic configuration 102, computing device 100 typically includes system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processor 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor. The processor 104 may include one or more levels of cache, such as a level-one cache 110 and a level-two cache 112, a processor core 114, and registers 116. The example processor core 114 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof.
Depending on the desired configuration, system memory 106 may be any type of memory, including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 106 may include an operating system 120, one or more programs 122, and program data 124. In some implementations, the program 122 may be configured to be executed by the one or more processors 104 on the operating system using the program data 124.
Computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to the basic configuration 102 via the bus/interface controller 130. The example output device 142 includes a graphics processing unit 148 and an audio processing unit 150. They may be configured to facilitate communication with various external devices, such as a display terminal or speakers, via one or more a/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 158. An example communication device 146 may include a network controller 160, which may be arranged to facilitate communications with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer-readable instructions, data structures or program modules in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" may be a signal in which one or more of its characteristics are set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired or dedicated-wire network, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) or other wireless media. The term computer-readable media as used herein may include both storage media and communication media.
Computing device 100 may be implemented as a personal computer or server including both desktop and notebook computer configurations.
Among other things, one or more programs 122 of computing device 100 include instructions for performing a codestream forwarding method according to the present disclosure.
Fig. 2 illustrates a flowchart of a code stream forwarding method 200 according to an embodiment of the present disclosure. The method starts at step S210.
S210, a task scheduler receives a receiving task of any code stream;
S220, the task scheduler selects a first thread from the idle threads of a thread pool;
S230, the first thread executes the receiving task of the code stream;
S240, the task scheduler receives a sending task of the same code stream;
S250, the task scheduler selects a second thread from the idle threads of the thread pool;
S260, the second thread executes the sending task of the code stream.
In the embodiments of the disclosure, each task and each thread are in a many-to-many relationship; that is, any idle thread can be used for code stream forwarding and a task does not have to wait for a particular thread, so the utilization of computing resources is higher than in schemes that bind a thread to a task (a short usage illustration follows).
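The short example below is illustrative only and is not taken from the patent; it maps steps S210 to S260 onto Java's standard ExecutorService, with the receive and send work reduced to print statements.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative walk-through of steps S210-S260: the receiving task and the sending
// task of the same code stream are handed to one shared pool, and whichever pool
// threads happen to be idle play the roles of the "first" and "second" threads.
public class Figure2Example {
    public static void main(String[] args) {
        ExecutorService threadPool = Executors.newFixedThreadPool(4);

        Runnable receiveTask = () ->
            System.out.println("receive task running on " + Thread.currentThread().getName());
        Runnable sendTask = () ->
            System.out.println("send task running on " + Thread.currentThread().getName());

        threadPool.submit(receiveTask);  // S210-S230: an idle (first) thread executes the receive
        threadPool.submit(sendTask);     // S240-S260: an idle (second) thread executes the send

        threadPool.shutdown();
    }
}
```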
Fig. 3 illustrates a flowchart of a code stream forwarding method 300 according to another embodiment of the present disclosure. The method starts at step S300.
S300, the task scheduler receives a receiving task of any code stream;
S310, the task scheduler selects a first thread from the idle threads of a thread pool;
S320, the first thread receives the code stream and writes it into a first cache region;
S330, the task scheduler receives a sending task of the same code stream;
S340, the task scheduler determines the receiving task corresponding to the sending task according to the parameter information of the sending task;
S350, the sending task and the receiving task of the code stream are associated;
S360, the task scheduler selects a second thread from the idle threads of the thread pool;
S370, the second thread writes the data of the first cache region into a second cache region according to the association information;
S380, the task scheduler selects a third thread from the idle threads of the thread pool;
S390, the third thread sends the code stream of the second cache region.
Optionally, a thread schedules the first cache region or the second cache region in a first-in first-out manner.
Optionally, the length of the first cache region or the second cache region is predefined according to the parameters of the code stream.
Optionally, the task scheduler allocates threads to the code stream processing tasks in a first-in first-out manner.
Optionally, the task scheduler marks the action parameters and the action type for each code stream processing task;
the action parameters comprise a transmission protocol and packet head and packet tail parameters; the transmission protocol may be the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP).
The action types include receive, send and copy (a fan-out copy sketch for this embodiment is given below).
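The sketch below is illustrative only; the class name FanOutCopier and its methods are invented. It suggests how the copy step of this embodiment could fan one input code stream out to every associated output code stream, each with its own FIFO buffer.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch of the copy step: one input code stream fans out to every
// output code stream associated with it, each output having its own buffer.
public class FanOutCopier {
    private final BlockingQueue<byte[]> inputBuffer;                 // first cache region
    private final List<BlockingQueue<byte[]>> outputBuffers =        // second cache regions
            new CopyOnWriteArrayList<>();

    public FanOutCopier(BlockingQueue<byte[]> inputBuffer) {
        this.inputBuffer = inputBuffer;
    }

    public void associateOutput(BlockingQueue<byte[]> outputBuffer) {
        outputBuffers.add(outputBuffer);
    }

    // One execution of the copy task: take the oldest packet from the input buffer
    // and append it to every associated output buffer (FIFO order is preserved).
    public void copyOnce() {
        byte[] packet = inputBuffer.poll();
        if (packet == null) {
            return;
        }
        for (BlockingQueue<byte[]> out : outputBuffers) {
            out.offer(packet);
        }
    }
}
```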
According to another embodiment of the present disclosure, a code stream forwarding method is provided, which comprises the following steps:
1. The stages a code stream passes through during forwarding comprise: receiving input, input buffering, task scheduling, output buffering, and sending output.
2. The thread model used in the process is that the thread pool is associated with the input modules in a many-to-many manner, and the output modules are associated with the thread pool in a many-to-many manner.
3. The task scheduler adopts a scheduling method that associates inputs with outputs:
3.1. For the receiving input module, each received input code stream is defined as a task, the action parameters and action type of the task are marked, and the task is put into the task scheduler.
3.2. For the sending output module, each sent output code stream is defined as a task, the action parameters and action type of the task are marked, and the task is put into the task scheduler.
3.3. When a user newly requests a code stream, the corresponding input stream is found according to the request parameters, an output stream task is established, and the output stream task is associated with the input stream task.
4. An input buffer model is designed: an input buffer of fixed time length is defined according to the code stream parameters, and data passing through the buffer is handled first-in first-out.
5. An output buffer model is designed: an output buffer of fixed time length is defined according to the code stream parameters, and data passing through the buffer is handled first-in first-out.
6. For each input code stream, all corresponding output code streams are found according to the association, and a task is established to copy the input code stream data into the buffers of those output code streams.
7. The task scheduler takes a task from the task queue; if it is a receiving task, one receive is performed; if it is a sending task, one send is performed; if it is a copy task, one copy is made (a sketch of this dispatch loop follows).
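A possible form of such a dispatch loop is sketched below; it is illustrative only, all names are invented, and it assumes each task has already been marked with one of the three action types.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch of step 7: tasks are taken from a FIFO queue and, according to
// the marked action type, one receive, one send, or one copy is performed.
public class DispatchLoop implements Runnable {

    public enum ActionType { RECEIVE, SEND, COPY }

    public interface StreamWorker {
        void receiveOnce();
        void sendOnce();
        void copyOnce();
    }

    public static final class MarkedTask {
        final ActionType type;
        final StreamWorker worker;
        public MarkedTask(ActionType type, StreamWorker worker) {
            this.type = type;
            this.worker = worker;
        }
    }

    private final BlockingQueue<MarkedTask> taskQueue = new LinkedBlockingQueue<>();

    public void put(MarkedTask task) {
        taskQueue.offer(task);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                MarkedTask task = taskQueue.take();           // FIFO: oldest task first
                switch (task.type) {
                    case RECEIVE: task.worker.receiveOnce(); break;
                    case SEND:    task.worker.sendOnce();    break;
                    case COPY:    task.worker.copyOnce();    break;
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();               // exit cleanly when interrupted
        }
    }
}
```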
This embodiment improves the efficiency of receiving input code streams and the efficiency of forwarding from received input to sent output, thereby greatly increasing the number of code streams that the software on a single server can serve to users concurrently.
Referring to fig. 4, the present disclosure provides a code stream forwarding apparatus, including:
the task scheduler 410 is configured to receive a receiving task of any code stream, and select a first thread from idle threads in a thread pool, where the first thread executes the receiving task of any code stream; receiving a sending task of any code stream, and selecting a second thread from idle threads of the thread pool, wherein the second thread executes the sending task of any code stream; wherein, each task and each thread are in a many-to-many relationship;
a thread pool 420 comprising a plurality of threads;
a sending unit 430, configured to send a code stream;
the receiving unit 440 is configured to receive a code stream.
For the specific limitations of the code stream forwarding apparatus, reference may be made to the limitations of the code stream forwarding method described above; details are not repeated here.
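The end-to-end sketch below wires the pieces of Fig. 4 together in a toy form. It is illustrative only: the class name ForwarderApparatusSketch is invented, the network read and write are replaced by in-memory stand-ins, and no ordering between the three submitted tasks is guaranteed (the FIFO buffers simply stay empty if a consumer happens to run first).

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative wiring of the apparatus: a receiving unit fills the first cache region,
// a copy task moves data to the second cache region, a sending unit drains it, and
// all three run as ordinary tasks on one shared thread pool.
public class ForwarderApparatusSketch {
    public static void main(String[] args) {
        ExecutorService threadPool = Executors.newFixedThreadPool(4);       // thread pool 420
        BlockingQueue<byte[]> firstCache  = new ArrayBlockingQueue<>(1024);
        BlockingQueue<byte[]> secondCache = new ArrayBlockingQueue<>(1024);

        Runnable receivingUnit = () -> firstCache.offer(new byte[] {1, 2, 3}); // stands in for a network read
        Runnable copyTask = () -> {
            byte[] p = firstCache.poll();
            if (p != null) secondCache.offer(p);
        };
        Runnable sendingUnit = () -> {
            byte[] p = secondCache.poll();
            if (p != null) System.out.println("sending " + p.length + " bytes"); // stands in for a network write
        };

        threadPool.submit(receivingUnit);
        threadPool.submit(copyTask);
        threadPool.submit(sendingUnit);
        threadPool.shutdown();
    }
}
```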
It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present disclosure, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the disclosure.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the various methods of the present disclosure according to instructions in the program code stored in the memory.
By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media store information such as computer-readable instructions, data structures, program modules or other data. Communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer-readable media.
It should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various disclosed aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, disclosed aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this disclosure.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Moreover, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the disclosure and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purposes of this disclosure.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the disclosure has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the disclosure as described herein. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the disclosed subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present disclosure is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.

Claims (10)

1. A method for forwarding a code stream is characterized by comprising the following steps:
a task scheduler receives a receiving task of any code stream;
the task scheduler selects a first thread from idle threads of a thread pool;
the first thread executes a receiving task of any code stream;
the task scheduler receives a sending task of any code stream;
the task scheduler selects a second thread from idle threads of the thread pool;
the second thread executes a sending task of any code stream;
wherein each task and each thread are in a many-to-many relationship.
2. The method of claim 1, wherein after the task scheduler receives the sending task of any of the codestreams, the method further comprises:
the task scheduler determines a receiving task of any code stream corresponding to the sending task of any code stream according to the parameter information of the sending task of any code stream;
and associating the sending task and the receiving task of any code stream.
3. The method of claim 2, wherein the first thread executing the receiving task of any code stream comprises:
the first thread receives any code stream and writes the code stream into a first cache region;
the second thread executing the sending task of any code stream comprises:
the second thread sends a code stream of a second cache region;
after associating the sending task and the receiving task of any code stream, the method further comprises the following steps:
the task scheduler receives a copy task of any code stream;
the task scheduler selects a third thread from idle threads of the thread pool;
and writing, by the third thread, the data of the first cache region into the second cache region according to the association information of the sending task and the receiving task of any code stream.
4. The method of claim 3, wherein the length of the first cache region or the second cache region is predefined according to a parameter of the code stream.
5. The method of claim 3, wherein a thread schedules the first cache region or the second cache region in a first-in-first-out manner.
6. The method of claim 3, wherein the task scheduler allocates threads for the code stream sending task, the code stream receiving task, or the code stream copying task in a first-in first-out mode.
7. The method of claim 3, further comprising:
the task scheduler marks an action parameter and an action type for a code stream sending task, or a code stream receiving task, or a code stream copying task;
the action parameters comprise a transmission protocol, a packet head and a packet tail parameter;
the action types include receive, send, and copy.
8. A code stream forwarding apparatus, comprising:
the task scheduler is used for receiving a receiving task of any code stream and selecting a first thread from idle threads of a thread pool, wherein the first thread executes the receiving task of any code stream; receiving a sending task of any code stream, and selecting a second thread from idle threads of the thread pool, wherein the second thread executes the sending task of any code stream; wherein each task and each thread are in a many-to-many relationship;
a thread pool comprising a plurality of threads;
a receiving unit, configured to receive a code stream;
and the sending unit is used for sending the code stream.
9. A readable storage medium having executable instructions stored thereon that, when executed, cause a computer to perform the operations comprised in the code stream forwarding method of any one of claims 1 to 7.
10. A computing device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors to perform the operations comprised in the code stream forwarding method of any one of claims 1 to 7.
CN202010431862.5A (filed 2020-05-20, priority 2020-05-20): Code stream forwarding method and device, readable storage medium and computing equipment. Status: Active. Granted as CN111614758B.

Priority Applications (1)

CN202010431862.5A, priority date 2020-05-20, filing date 2020-05-20: Code stream forwarding method and device, readable storage medium and computing equipment


Publications (2)

CN111614758A, published 2020-09-01
CN111614758B, published 2023-05-02

Family

ID=72202229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010431862.5A Active CN111614758B (en) 2020-05-20 2020-05-20 Code stream forwarding method and device, readable storage medium and computing equipment

Country Status (1)

Country Link
CN (1) CN111614758B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1677952A (en) * 2004-03-30 2005-10-05 武汉烽火网络有限责任公司 Method and apparatus for wire speed parallel forwarding of packets
CN1952898A (en) * 2005-03-14 2007-04-25 Qnx软件操作系统公司 Adaptive partitioning process scheduler
EP3191973A1 (en) * 2014-09-09 2017-07-19 Intel Corporation Technologies for proxy-based multi-threaded message passing communication
CN104536827A (en) * 2015-01-27 2015-04-22 浪潮(北京)电子信息产业有限公司 Data dispatching method and device
CN110018892A (en) * 2019-03-12 2019-07-16 平安普惠企业管理有限公司 Task processing method and relevant apparatus based on thread resources

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113905273A (en) * 2021-09-29 2022-01-07 上海阵量智能科技有限公司 Task execution method and device
CN113905273B (en) * 2021-09-29 2024-05-17 上海阵量智能科技有限公司 Task execution method and device

Also Published As

Publication number Publication date
CN111614758B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
US11561830B2 (en) System and method for low latency node local scheduling in distributed resource management
US9720739B2 (en) Method and system for dedicating processors for desired tasks
WO2013082809A1 (en) Acceleration method, device and system for co-processing
CN111343288B (en) Job scheduling method and system and computing device
KR102594657B1 (en) Method and apparatus for implementing out-of-order resource allocation
WO2021082969A1 (en) Inter-core data processing method and system, system on chip and electronic device
US9286125B2 (en) Processing engine implementing job arbitration with ordering status
JP2009054003A (en) Image processing unit and program
CN114579285B (en) Task running system and method and computing device
WO2023201987A1 (en) Request processing method and apparatus, and device and medium
CN111614758B (en) Code stream forwarding method and device, readable storage medium and computing equipment
WO2021057759A1 (en) Memory migration method, device, and computing apparatus
US20230342086A1 (en) Data processing apparatus and method, and related device
US9298652B2 (en) Moderated completion signaling
WO2023165318A1 (en) Resource processing system and method
JP6869360B2 (en) Image processing equipment, image processing method, and image processing program
US20190196889A1 (en) Efficient communication overlap by runtimes collaboration
US11388050B2 (en) Accelerating machine learning and profiling over a network
US10284501B2 (en) Technologies for multi-core wireless network data transmission
CN110764710A (en) Data access method and storage system of low-delay and high-IOPS
CN110647383A (en) Application management method based on docker container and computing device
CN110837482A (en) Distributed block storage low-delay control method, system and equipment
WO2021179218A1 (en) Direct memory access unit, processor, device, processing method, and storage medium
CN117215802B (en) GPU management and calling method for virtualized network function
CN114338390B (en) Server configuration method, computing device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant