CN111711688A - Data transmission method, device and equipment based on transmitter and storage medium
- Publication number: CN111711688A
- Application number: CN202010546956.7A
- Authority
- CN
- China
- Prior art keywords
- concurrency
- threshold
- transmitter
- task
- transmitted
- Prior art date
- Legal status: Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1074—Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
- H04L67/1078—Resource delivery mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/06—Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Telephonic Communication Services (AREA)
Abstract
The application discloses a transmitter-based data transmission method, apparatus, device and storage medium, relates to the field of big data, and can be used for transmitting massive data in automatic driving. The specific implementation scheme is as follows: acquiring a total concurrency threshold of a current transmitter and the actual concurrency of each task in the current transmitter, wherein the tasks have priorities and the total concurrency threshold is an upper limit on the sum of the actual concurrencies; determining expected concurrencies of the tasks to be transmitted with different priorities according to the total concurrency threshold and the actual concurrency of each task, wherein an expected concurrency represents the number of resources allocated to a task to be transmitted, and the expected concurrencies of tasks to be transmitted with different priorities are different; and starting, for each task to be transmitted, resources in the number indicated by its expected concurrency, and transmitting each task to be transmitted respectively. Different numbers of resources are thus allocated to different tasks; each task can be transmitted successfully and quickly; and resources are kept from standing idle, so no resources are wasted.
Description
Technical Field
The embodiments of the application relate to the field of big data in computer technology, and in particular to a transmitter-based data transmission method, apparatus, device and storage medium, which can be used for massive data transmission in automatic driving.
Background
With the development of computer technology, the amount of data generated and processed is also increasing. Massive data can be transmitted and processed through the transmitter. For example, in the unmanned domain, after data on the unmanned vehicle is acquired, the data may be written to disk; the disk is connected with the transmitter, and the transmitter transmits and processes data.
In the prior art, a transmitter needs to process a large number of tasks (i.e., data transmission tasks) in order to transmit and process data, and the transmitter allocates the same amount of resources to every task, each task then being transmitted with the resources allocated to it.
However, because the transmitter allocates the same amount of resources to every task, some tasks have insufficient resources, which delays their data transmission; meanwhile other resources sit idle, which wastes resources.
Disclosure of Invention
The application provides a transmitter-based data transmission method, device, equipment and storage medium, which are used for allocating different resources to different tasks, improving task transmission efficiency and avoiding the waste of resources.
According to a first aspect, there is provided a transmitter-based data transmission method, the method being applied to a transmitter, the method comprising:
acquiring a total threshold of the concurrency degree of a current transmitter and the actual concurrency degree of each task in the current transmitter, wherein the tasks have priorities, the total threshold of the concurrency degree is an upper limit value of the total actual concurrency degree, and the total actual concurrency degree is the sum of the actual concurrency degree of each task;
determining expected concurrency of the tasks to be transmitted with different priorities according to the total concurrency threshold and the actual concurrency of each task, wherein the expected concurrency is characterized by the number of resources allocated to the tasks to be transmitted, and the expected concurrency of the tasks to be transmitted with different priorities is different;
and starting, for each task to be transmitted, resources corresponding to the number of resources indicated by its expected concurrency, and respectively transmitting each task to be transmitted.
According to a second aspect, there is provided a transmitter-based data transmission apparatus, the apparatus being applied to a transmitter, the apparatus comprising:
a first obtaining unit, configured to obtain a total threshold of concurrency of a current transmitter and an actual concurrency of each task in the current transmitter, where the tasks have priorities, the total threshold of concurrency is an upper limit value of a total actual concurrency, and the total actual concurrency is a sum of actual concurrency of each task;
a first determining unit, configured to determine expected concurrency degrees of the tasks to be transmitted with different priorities according to the total concurrency degree threshold and the actual concurrency degree of each task, where the expected concurrency degrees are represented by the number of resources allocated to the tasks to be transmitted, and the expected concurrency degrees of the tasks to be transmitted with different priorities are different;
and a starting unit, configured to start, for each task to be transmitted, resources corresponding to the number of resources indicated by its expected concurrency, and to respectively transmit each task to be transmitted.
According to a third aspect, there is provided a transmitter-based data transmission method, the method being applied to a transmitter, the method comprising:
determining expected concurrency of tasks to be transmitted with different priorities according to a total concurrency threshold of a current transmitter and actual concurrency of each task in the current transmitter, wherein the tasks have priorities, the total concurrency threshold is an upper limit value of the total actual concurrency, and the total actual concurrency is the sum of the actual concurrency of each task; the expected concurrency is characterized by the number of resources allocated to the tasks to be transmitted, and the expected concurrency of the tasks to be transmitted with different priorities is different;
and starting, for each task to be transmitted, resources corresponding to the number of resources indicated by its expected concurrency, and respectively transmitting each task to be transmitted.
According to a fourth aspect, there is provided an electronic device comprising: a processor and a memory; the memory stores executable instructions of the processor; wherein the processor is configured to perform the transmitter-based data transmission method according to any one of the first aspect or the transmitter-based data transmission method according to the third aspect via execution of the executable instructions.
According to a fifth aspect, there is provided a non-transitory computer readable storage medium storing computer instructions which, when executed by a processor, implement the transmitter-based data transmission method of any one of the first aspects or perform the transmitter-based data transmission method of the third aspect.
According to a sixth aspect, there is provided a program product comprising: a computer program stored in a readable storage medium from which at least one processor of a server can read, the at least one processor executing the computer program to cause the server to perform the transmitter-based data transmission method according to any one of the first aspect or to perform the transmitter-based data transmission method according to the third aspect.
According to the technical solution of the application, a total concurrency threshold of a current transmitter and the actual concurrency of each task in the current transmitter are obtained, wherein the tasks have priorities, the total concurrency threshold is an upper limit on the total actual concurrency, and the total actual concurrency is the sum of the actual concurrencies of the tasks; expected concurrencies of the tasks to be transmitted with different priorities are determined according to the total concurrency threshold and the actual concurrency of each task, wherein an expected concurrency represents the number of resources allocated to a task to be transmitted; and, for each task to be transmitted, resources in the number indicated by its expected concurrency are started and each task to be transmitted is transmitted respectively. The expected concurrencies of the tasks to be transmitted with different priorities are determined from the actual concurrency of the tasks being transmitted in the transmitter and the total concurrency threshold, i.e. the total concurrency the transmitter can allocate; to guarantee fast transmission of high-priority tasks to be transmitted, expected concurrencies are allocated according to the priorities of the tasks to be transmitted, so tasks to be transmitted with different priorities receive different expected concurrencies. Therefore, according to the concurrency of the transmitter and the actual concurrency of the tasks, different numbers of resources are allocated to different tasks, each task is given appropriate and sufficient resources, and the tasks can be transmitted successfully and quickly; and because resources are allocated to the tasks according to the concurrency and the priorities of the tasks, no resource stands idle and no resource is wasted.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic diagram of an application scenario of an embodiment of the present application;
FIG. 2 is a schematic diagram according to a first embodiment of the present application;
FIG. 3 is a schematic diagram according to a second embodiment of the present application;
FIG. 4 is a schematic diagram illustrating calculation of ideal concurrency of tasks according to an embodiment of the present disclosure;
FIG. 5 is a schematic illustration of step 204 in a second embodiment according to the present application;
FIG. 6 is a schematic diagram illustrating step 205 in a second embodiment of the present application;
FIG. 7 is a diagram illustrating comparison of thresholds provided in an embodiment of the present application;
fig. 8 is a schematic diagram of a queue to be transmitted and a transmission list provided in the present application;
FIG. 9 is a schematic diagram showing a specific comparison of thresholds provided in the present application;
FIG. 10 is a schematic illustration according to a third embodiment of the present application;
FIG. 11 is a schematic illustration according to a fourth embodiment of the present application;
FIG. 12 is a diagram illustrating another application scenario according to an embodiment of the present application;
FIG. 13 is a schematic diagram of another application scenario of an embodiment of the present application;
FIG. 14 is a schematic illustration according to a fifth embodiment of the present application;
FIG. 15 is a schematic illustration according to a sixth embodiment of the present application;
FIG. 16 is a schematic illustration of step 403 in a sixth embodiment according to the present application;
FIG. 17 is a schematic illustration of a seventh embodiment according to the present application;
FIG. 18 is a schematic illustration according to an eighth embodiment of the present application;
FIG. 19 is a schematic illustration of a ninth embodiment according to the present application;
FIG. 20 is a schematic illustration in accordance with a tenth embodiment of the present application;
FIG. 21 is a schematic illustration according to an eleventh embodiment of the present application;
FIG. 22 is a schematic illustration in accordance with a twelfth embodiment of the present application;
FIG. 23 is a schematic illustration in accordance with a thirteenth embodiment of the present application;
FIG. 24 is a schematic view of a fourteenth embodiment according to the present application;
FIG. 25 is a schematic illustration in accordance with a fifteenth embodiment of the present application;
fig. 26 is a schematic diagram according to a sixteenth embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
With the development of computer technology, the amount of data generated and processed is also increasing. Massive data can be transmitted and processed through the transmitter.
In one example, a plurality of machine rooms are provided, and a plurality of transmitters are provided in each machine room, where the transmitters need to process a large number of tasks (i.e., data transmission tasks) to transmit and process data, and the transmitters can allocate the same resource for each task to transmit the task by using the resource. In one example, in the unmanned domain, after data on the unmanned vehicle is acquired, the data may be written to disk; the disk is connected with the transmitter, and the transmitter transmits and processes data.
However, in the above manner, as shown in fig. 1, the transmitter allocates the same resource to each task in the current transmitter, but since the amount of resources required by different tasks is different, allocating the same number of resources to each task may result in insufficient resources of part of the tasks, and cause transmission delay of data transmission tasks; or, after some resources complete the transmission task, the resources are idle, thereby wasting the resources.
After creative effort, the inventor of the present application arrived at the inventive concept of the present application: allocate different numbers of resources to different tasks, so that each task receives appropriate and sufficient resources and can be transmitted successfully and quickly, while no resource stands idle and no resource is wasted.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic diagram of a first embodiment of the present application, and as shown in fig. 2, the data transmission method based on a transmitter provided in this embodiment includes:
101. and acquiring a total threshold of the concurrency of the current transmitter and the actual concurrency of each task in the current transmitter, wherein the tasks have priorities, the total threshold of the concurrency is an upper limit value of the total actual concurrency, and the total actual concurrency is the sum of the actual concurrency of each task.
Illustratively, the execution subject of this embodiment may be a transmitter, or a transmitter-based data transmission device or apparatus, or another device or apparatus that can execute the method of this embodiment. This embodiment is described with the transmitter as the execution subject.
The transmitter needs to transmit a large amount of data. In one example, the transmitter may obtain data transmitted by other devices, for example, data transmitted by an unmanned vehicle. Or, in another example, data may first be written to a disk, which is a large-capacity disk; the disk is then inserted into a transmitter, and the transmitter transmits the data to a data storage cluster. For example, when massive data of an unmanned vehicle is returned, the data on the unmanned device is written into a large-capacity disk, the disk is then inserted into the transmitter, and the transmitter collects the data into the data storage cluster.
The transmitter can allocate resources for the tasks when transmitting data. The task refers to a task of data transmission. In one example, each task includes data or data files that need to be transferred. The tasks comprise tasks to be transmitted and tasks being transmitted, the tasks to be transmitted comprise data or data files to be transmitted, and the tasks being transmitted comprise the data or data files being transmitted.
Each task is a folder organized in a certain way (e.g., compressed, encoded, encrypted); one copy of the data or data files to be transmitted is called a task. In the present application, a task is the minimum unit of transmission scheduling: for example, one task (one folder) includes a plurality of files, each containing data to be transmitted; one task serves as the minimum scheduling unit (transmission unit), and during actual transmission a resource (for example, a process or a thread) transmits the files in the task one by one.
In one example, each transmitter may have at least one disk inserted into it, with each disk storing a number of tasks.
In this embodiment, when allocating resources for tasks, the transmitter may allocate resources for tasks based on the concurrency degree, and no longer allocate resources of the same size for each task.
Firstly, the transmitter needs to acquire the total concurrency threshold of the current transmitter; in one example, the total concurrency threshold may be determined according to the network speed of the transmitter. Since the transmitter holds both tasks to be transmitted (i.e., tasks that need to be transmitted but have not yet been transmitted) and tasks being transmitted, the actual concurrency of each task in the transmitter can be obtained, where only tasks being transmitted have an actual concurrency; the actual concurrency characterizes the number of resources occupied by a task being transmitted. The sum of the actual concurrencies of all tasks is taken as the total actual concurrency; since a concurrency represents a number of resources, the total actual concurrency represents the total number of resources occupied by all tasks being transmitted in the transmitter, and the total concurrency threshold of the current transmitter represents the total number of resources the transmitter can allocate to all tasks.
In another example, the total concurrency threshold may be a predetermined value.
In the above process, the transmitter may obtain the actual concurrency of each task in the current transmitter, and at this time, the actual concurrency of the task being transmitted is obtained.
Also, each task has a priority, for example, the task is a high-priority task or a normal task.
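As a non-limiting illustration of the per-task state described above (a priority for every task, and an actual concurrency that only tasks being transmitted possess), the following Python sketch models that state; the class and field names are assumptions of this illustration and are not part of the embodiment.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str
    file_sizes: List[int]        # sizes of the data files contained in the task (the folder)
    priority: int                # e.g. 1 for a high-priority task, 0 for a normal task
    actual_concurrency: int = 0  # resources currently transmitting this task; 0 while waiting

def total_actual_concurrency(tasks: List[Task]) -> int:
    # sum of the actual concurrency of every task in the current transmitter
    return sum(t.actual_concurrency for t in tasks)
```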
102. And determining the expected concurrencies of the tasks to be transmitted with different priorities according to the total concurrency threshold and the actual concurrency of each task, wherein an expected concurrency represents the number of resources allocated to a task to be transmitted, and the expected concurrencies of tasks to be transmitted with different priorities are different.
Illustratively, the expected concurrency of each task to be transmitted is calculated according to the total concurrency threshold obtained in step 101 and the actual concurrency of each task, so as to obtain the expected concurrency of the tasks to be transmitted with different priorities. In addition, in order to ensure the quick transmission of the tasks to be transmitted with high priority, expected concurrency degrees are allocated to the tasks to be transmitted according to the priority of the tasks to be transmitted, and then the expected concurrency degrees of the tasks to be transmitted with different priorities are different.
In one example, the sum of the actual concurrency degrees of the tasks may be subtracted from the total threshold of the concurrency degrees to obtain a remaining concurrency degree; the remaining concurrency represents the number of resources that can be allocated to all tasks to be transmitted; according to the priority of the tasks to be transmitted, firstly allocating proper resource number to the tasks to be transmitted with high priority, and further obtaining the expected concurrency of the tasks to be transmitted with high priority; and then, allocating proper resource number for the tasks to be transmitted with low priority, and further obtaining the expected concurrency of the tasks to be transmitted with low priority.
For example, the current transmitter has 5 tasks, which are task 1, task 2, task 3, task 4 and task 5, where task 1 and task 2 are the tasks being transmitted and task 3, task 4 and task 5 are the tasks to be transmitted. The sum of the actual concurrency of task 1 and the actual concurrency of task 2 is taken as the total actual concurrency; the total concurrency threshold is determined according to the network speed of the transmitter; the total actual concurrency is then subtracted from the total concurrency threshold to obtain the remaining concurrency. The priorities of task 3, task 4 and task 5 run from high to low; a concurrency of appropriate size is selected from the remaining concurrency and allocated to task 3 to obtain the expected concurrency of task 3, which represents the number of resources task 3 can use, i.e. the number of resources allocated to task 3; a concurrency of appropriate size is then selected from the remaining concurrency and allocated to task 4 to obtain the expected concurrency of task 4, which represents the number of resources task 4 can use; and a concurrency of appropriate size is selected from the remaining concurrency and allocated to task 5 to obtain the expected concurrency of task 5, which represents the number of resources task 5 can use.
In another example, the total actual concurrency of the tasks may be subtracted from the total concurrency threshold to obtain the remaining concurrency, which represents the number of resources that can be allocated to all tasks to be transmitted; the remaining concurrency is divided by the number of tasks to be transmitted to obtain a value; a number of resources larger than this value is then allocated to the tasks to be transmitted with high priority to obtain their expected concurrency, and a number of resources smaller than this value is allocated to the tasks to be transmitted with low priority to obtain their expected concurrency.
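As a non-limiting sketch of step 102, the following Python function subtracts the total actual concurrency from the total concurrency threshold and hands out the remaining concurrency to the waiting tasks in priority order, mirroring the five-task example above; the sizing rule (a base share plus a priority bonus) is only an illustrative assumption, not the claimed calculation.

```python
def allocate_expected_concurrency(total_threshold, actual_concurrencies, waiting_tasks):
    # waiting_tasks is a list of (name, priority) pairs; a larger priority value is more urgent
    remaining = total_threshold - sum(actual_concurrencies)      # remaining concurrency
    base_share = max(1, remaining // max(1, len(waiting_tasks)))
    expected = {}
    for name, priority in sorted(waiting_tasks, key=lambda t: t[1], reverse=True):
        if remaining <= 0:
            expected[name] = 0            # nothing left; the task waits for the next round
            continue
        # illustrative rule: higher priority adds to the base share, so expected
        # concurrencies of tasks with different priorities come out different
        grant = min(remaining, base_share + priority)
        expected[name] = grant
        remaining -= grant
    return expected

# e.g. total threshold 10, tasks 1 and 2 already occupy 3 + 3 resources, tasks 3-5 are waiting
print(allocate_expected_concurrency(10, [3, 3], [("task3", 2), ("task4", 1), ("task5", 0)]))
# {'task3': 3, 'task4': 1, 'task5': 0}
```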
103. And starting, for each task to be transmitted, resources in the number indicated by its expected concurrency, and transmitting each task to be transmitted respectively.
For example, since the expected concurrency indicates the number of resources allocated to the task to be transmitted, the current transmitter may start resources in the number represented by the expected concurrency of the task to be transmitted, so as to transmit the task to be transmitted. These resources are, for example, threads and/or processes.
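For illustration only, a thread-based sketch of step 103 follows: a pool with as many workers as the expected concurrency indicates is started for one task, and the task's data files are transmitted one whole file per worker (a file is never split). The helper send_file is a placeholder assumed by this sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def transmit_task(data_files, expected_concurrency, send_file):
    # start resources (here: threads) in the number indicated by the expected concurrency
    with ThreadPoolExecutor(max_workers=expected_concurrency) as pool:
        list(pool.map(send_file, data_files))  # each data file is transmitted completely by one thread
    # the task's resources are released together only after the whole task has been transmitted
```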
In this embodiment, a total concurrency threshold of a current transmitter and the actual concurrency of each task in the current transmitter are obtained, wherein the tasks have priorities, the total concurrency threshold is an upper limit on the total actual concurrency, and the total actual concurrency is the sum of the actual concurrencies of the tasks; expected concurrencies of the tasks to be transmitted with different priorities are determined according to the total concurrency threshold and the actual concurrency of each task, wherein an expected concurrency represents the number of resources allocated to a task to be transmitted; and, for each task to be transmitted, resources in the number indicated by its expected concurrency are started and each task to be transmitted is transmitted respectively. The expected concurrencies of the tasks to be transmitted with different priorities are determined from the actual concurrency of the tasks being transmitted in the transmitter and the total concurrency threshold of all the concurrency the transmitter can allocate; to guarantee fast transmission of high-priority tasks to be transmitted, expected concurrencies are allocated according to the priorities of the tasks to be transmitted, so the expected concurrencies of tasks to be transmitted with different priorities are different. Therefore, according to the concurrency of the transmitter and the actual concurrency of the tasks, different numbers of resources are allocated to different tasks, each task is given appropriate and sufficient resources, and the tasks can be transmitted successfully and quickly; and because resources are allocated to the tasks according to the concurrency and the priorities of the tasks, no resource stands idle and no resource is wasted.
Fig. 3 is a schematic diagram of a second embodiment of the present application, and as shown in fig. 3, the data transmission method based on a transmitter provided in this embodiment includes:
201. and acquiring the file size of each data file of each task in the current transmitter.
Illustratively, the execution subject of this embodiment may be a transmitter, or a transmitter-based data transmission device or apparatus, or another device or apparatus that can execute the method of this embodiment. This embodiment is described with the transmitter as the execution subject.
The transmitter needs to transmit a large amount of data. In one example, the transmitter may obtain data transmitted by other devices, for example, data transmitted by an unmanned vehicle. Or, in another example, data may first be written to a disk, which is a large-capacity disk; the disk is then inserted into a transmitter, and the transmitter transmits the data to a data storage cluster. For example, when massive data of an unmanned vehicle is returned, the data on the unmanned device is written into a large-capacity disk, the disk is then inserted into the transmitter, and the transmitter collects the data into the data storage cluster.
The transmitter can allocate resources for the tasks when transmitting data. The task refers to a task of data transmission. In one example, each task includes data or data files that need to be transferred. The tasks comprise tasks to be transmitted and tasks being transmitted, the tasks to be transmitted comprise data or data files to be transmitted, and the tasks being transmitted comprise the data or data files being transmitted.
The task introduction may refer to step 101 shown in fig. 2, and is not described again.
In one example, one task to be transmitted in a transmitter includes a plurality of data files of varying sizes. Multiple concurrency resources (e.g., multiple threads) may be allocated to the task to be transmitted, but often only a few of these resources end up transmitting the large data files, so the transmission takes longer; meanwhile, the remaining resources transmit the small data files, which takes only a short time, after which those resources sit idle. The idle concurrency resources cannot immediately be allocated to other data to be transmitted; only after the whole task has been transmitted are the concurrency resources allocated to it released together. Therefore the overall process takes longer, concurrency resources sit idle and are wasted, and the overall concurrency of the transmitter is lower; the utilization of the transmitter's network card bandwidth is then lower, so the throughput of a machine room with multiple transmitters is low.
In this embodiment, in order to allocate appropriate resources to tasks, the ideal concurrency of each task needs to be obtained. The ideal concurrency characterizes the minimum number of resources a task can use; it therefore ensures that the task is transmitted, and with the number of resources indicated by the ideal concurrency the transmission time of the task is minimal. The ideal concurrency characterizes the resources (e.g., the number of concurrent threads or concurrent processes) needed to transmit a task; because the ideal concurrency is positively correlated with the transmission speed of the task, the transmission speed of the task can be controlled by adjusting the ideal concurrency. Setting the initial ideal concurrency correctly is very important because the subsequent steps of this embodiment allocate resources under a "non-preemptive scheduling" principle: concurrency is allocated to a task once, before the task is transmitted; once resources at that concurrency have been allocated to a task and its transmission has started, resources are neither forcibly added nor removed and the transmission is not stopped; instead, the scheduler waits until the task finishes naturally and its resources are actively released. Therefore, to keep the transmission time of a task approximately minimal while using approximately the fewest concurrency resources, the concurrency resources are determined through the ideal concurrency.
When the ideal concurrency of each task in the transmitter is determined, the ideal concurrency of the tasks can be determined according to the data size of the tasks; the data size of the task is determined by the file size of the data file in the task.
The embodiment provides an equivalent-header-file algorithm to calculate the ideal concurrency of each task. A task comprises a plurality of data files, each holding data; data files cannot be transmitted in slices, and a data file can only be transmitted completely by one resource (thread or process). The transmission time of a data file is proportional to its file size; in one example, if the transmission speed of a thread is constant, the transmission time of a data file depends only on its file size and is proportional to it. Thus, the file size of each data file of each task needs to be acquired. The transmitter can directly read the tasks and every data file in each task, so it can directly read the file size of each data file of each task.
202. And determining the ideal concurrency of each task according to the file size of each data file of each task, wherein the ideal concurrency characterizes the minimum number of resources which can be used by the task, and the transmission time of the task under the ideal concurrency is minimum.
In one example, the ideal concurrency is the sum of the numbers of resources that the data files in a task may occupy; step 202 specifically includes the following steps:
In the first step of step 202, the data files of the task are divided into header files and non-header files (the division is described below).
In the second step of step 202, it is determined that each header file occupies one resource, and the number of resources occupied by the non-header files is determined according to the sum of the file sizes of the non-header files.
In one example, the second step of step 202 specifically includes: determining the ratio of the sum of the file sizes of the non-header files to the second parameter as the number of resources occupied by the non-header files.
Illustratively, the transmission time of a data file is proportional to its file size, the number of resources indicated by the ideal concurrency is related to the transmission time of the task, and each task comprises at least one data file; the ideal concurrency of a task can therefore be determined directly from the file sizes of its data files, so that reasonable resources can subsequently be allocated to the task. For example, the number of resources that can be allocated to a task is determined according to the sum of the file sizes of the data files in the task, which then gives the ideal concurrency of the task.
In one example, since file sizes of different data files in a task are different, that is, the file sizes in the data files are large and small, if each data file in the task is allocated with a resource (thread or process), at this time, since the transmission time of the task depends on the largest data file in the task, the transmission time of the task is the largest at this time; however, the ideal degree of concurrency of tasks reaches a maximum. In order to reduce the transmission time of the task and the ideal concurrency of the task, the data files in the task can be divided into two groups, wherein each data file in one group is called a header file, and each data file in the other group is not a header file; and determining the ideal concurrency of the tasks according to the file size of each header file and the file size of each non-header file.
At this time, for one task, a few data files having the largest file size are referred to as "header files", and the remaining data files are referred to as non-header files. That is, the file size of each header file is greater than or equal to a preset threshold, and the file size of each non-header file is smaller than the preset threshold; the preset threshold value can be an empirical value or a preset value; for example, the preset threshold is a product of a first parameter and a second parameter, the first parameter is a fixed value, and the second parameter is a file size of a data file with the highest file size in the task.
The header files are of similar size and each header file may be assigned a resource, e.g., a concurrency level. At this time, the number of resources per header file in the task is obtained, that is, the number of resources per header file is 1. Then, according to the sum of the file sizes of the non-header files, the resource number of the resources occupied by the non-header files is determined. In this case, a plurality of resources may be allocated to each non-header file according to the sum of the file sizes of the non-header files, that is, the number of resources of each non-header file is a certain number. Further, the resource number of resources which can be occupied by each data file in the task is obtained, and then the sum of the resource numbers of the resources which can be occupied by the data files in the task is obtained; since the ideal concurrency is the sum of the number of resources of the resources that can be occupied by the data files in the task, the ideal concurrency of the task can be obtained. At this time, instead of allocating a resource to each data file in the task, the resource is allocated to each data file according to the size of the data file, so that the number of resources allocated to the task can be reduced, the ideal concurrency of the task is reduced, and the resource in the transmitter is not wasted.
In one example, the non-header files are reduced to equivalent header files according to the proportion between the sum of the file sizes of the non-header files and the data file with the largest file size in the task; at this time, the file size of the "data file with the largest file size in the task" is used as a second parameter, and the number of the "equivalent header files" is obtained by taking the ratio of the sum of the file sizes of the non-header files to the second parameter as the resource number of the resources occupied by the non-header files. Then taking the sum of the number of the header files and the number of the equivalent header files as the ideal concurrency degree of the task; in this case, the transmission time of the task is approximately minimized, and the ideal concurrency of the task is approximately minimized.
It can be seen that the ideal concurrency of each task is: number of header files + ceil((total size of all files - total size of the header files) / size of the largest header file), where ceil is the rounding-up function. The ideal concurrency of each task is the number of concurrent threads or processes needed to transmit the task. The above algorithm thus configures an ideal number of threads or processes for each task.
For example, a task includes N data files, where N is a positive integer greater than or equal to 1. The N data files in the task are sorted from large to small by file size and denoted data file 1, data file 2, ..., data file N. The file size of any data file i is S(i); the total size S(1..j) of the first j data files can be calculated, as can the total size S(1..N) of all the data files.
Then, because the data files are sorted from large to small by file size, the data file with the largest file size is the 1st data file and its file size is S(1); the file size of each of the first n data files is greater than or equal to the preset threshold, and each of the first n data files is a header file, where n is a positive integer greater than or equal to 1.
The number n of header files is calculated by comparing, starting from data file 2 and proceeding in descending order of size, the file size of each data file with that of data file 1, where data file 1 is the data file with the largest file size in the task. If the file size S(i) of the current data file i is less than or equal to 50% of the file size of data file 1, that is, S(i) ≤ 0.5 × S(1), then data file i does not belong to the header files, and neither does any data file after it; the number of header files is then n = i - 1. If even the size S(N) of the smallest data file N is greater than 0.5 × S(1), the number of header files is determined to be n = N. That is, the first parameter is 0.5 and the second parameter is S(1).
Then, the total size S(1..n) of the n header files, the total size S of all N data files in the task, and the size S(1) of the largest file are obtained, and the ideal concurrency of the task is obtained as concurrency_num = n + ceil((S - S(1..n)) / S(1)), where ceil is the rounding-up function.
For example, fig. 4 is a schematic diagram illustrating the calculation of the ideal concurrency of a task provided in the embodiment of the present application. As shown in fig. 4, the task includes 11 data files arranged from large to small by file size. Following the above manner, the file size of the 4th data file is 2, which does not exceed 50% of the file size 10 of the 1st data file, so only the first 3 data files are header files and the number of header files is n = 3. The sum of the file sizes of the header files is 10 + 8 + 6 = 24; the sum of the file sizes of all data files of the task is S = 10 + 8 + 6 + 2 + 2 + 2 + 2 + 1 + 1 + 1 + 1 = 36; and the file size of the largest data file is S(1) = 10. Thus, the ideal concurrency of the task is concurrency_num = 3 + ceil((36 - 24) / 10) = 5. It can be seen that in the above process the task yields 2 equivalent header files: the 4th data file (size 2), 5th data file (size 2), 6th data file (size 2), 7th data file (size 2), 8th data file (size 1) and 9th data file (size 1) are combined into one equivalent header file, and the 10th data file (size 1) and 11th data file (size 1) are combined into another equivalent header file.
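The equivalent-header-file calculation worked through above can be written compactly; the following Python sketch is illustrative only (function and parameter names are assumptions) and reproduces the Fig. 4 result.

```python
import math

def ideal_concurrency(file_sizes, first_parameter=0.5):
    sizes = sorted(file_sizes, reverse=True)
    largest = sizes[0]                                             # S(1), the largest file
    headers = [s for s in sizes if s > first_parameter * largest]  # header files
    # non-header files are folded into ceil(remaining size / largest size) equivalent header files
    return len(headers) + math.ceil((sum(sizes) - sum(headers)) / largest)

# Fig. 4 example: 3 header files plus ceil(12 / 10) = 2 equivalent header files
print(ideal_concurrency([10, 8, 6, 2, 2, 2, 2, 1, 1, 1, 1]))  # prints 5
```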
203. And acquiring a total threshold of the concurrency of the current transmitter and the actual concurrency of each task in the current transmitter, wherein the tasks have priorities, the total threshold of the concurrency is an upper limit value of the total actual concurrency, and the total actual concurrency is the sum of the actual concurrency of each task.
For example, this step may refer to step 101 shown in fig. 2, and is not described again.
204. Determining, according to the total concurrency threshold and the actual concurrency of each task, that the resources in the current transmitter can support transmission of tasks to be transmitted with a first priority level, and, when it is determined that the current transmitter has tasks to be transmitted with the first priority level, determining the expected concurrency of the tasks to be transmitted with the first priority level according to the actual concurrency of each task, wherein the first priority level is greater than or equal to a preset priority threshold; the expected concurrency is represented by the number of resources allocated to the tasks to be transmitted, and the expected concurrencies of tasks to be transmitted with different priorities are different.
Exemplarily, a total threshold value of the concurrency of the tasks is obtained; the total concurrency threshold of the current transmitter is characterized by the sum of the number of resources which can be allocated by the transmitter for all tasks. The actual concurrency of each task in the transmitter can be obtained, wherein only the task being transmitted has the actual concurrency; actual concurrency, the number of resources occupied by the characterizing task being transmitted; taking the sum of the actual concurrency degrees of all the tasks as the sum of the actual concurrency degrees; since the degree of concurrency characterizes the number of resources, the sum of the actual degrees of concurrency characterizes the sum of the numbers of resources occupied by all transmitting tasks in the transmitter.
In this embodiment, the task has a priority, where the priority includes a first priority and a second priority, where the first priority is greater than or equal to a preset priority threshold, and the second priority is less than the preset priority threshold; it is noted that the first priority level is higher than the second priority level.
In this embodiment, an expected concurrency may be first allocated to the to-be-transmitted task at the first priority level, where the expected concurrency is represented by the number of resources (the number of threads or the number of processes) allocated to the to-be-transmitted task. Then, firstly, according to the total concurrency limit and the actual concurrency of each task, judging that the resources in the current transmitter can support the transmission of the tasks to be transmitted with the first priority level; if the total concurrency threshold and the actual concurrency of each task meet preset conditions, the resources in the current transmitter can support transmission of the tasks to be transmitted at the first priority level, for example, if the total actual concurrency is less than or equal to the total concurrency threshold, it is determined that the resources in the current transmitter can support transmission of the tasks to be transmitted at the first priority level, or if the total actual concurrency is less than or equal to a numerical value (the numerical value is less than the total concurrency threshold), it is determined that the resources in the current transmitter can support transmission of the tasks to be transmitted at the first priority level.
And if the fact that the resources in the current transmitter can support the transmission of the tasks to be transmitted with the first priority level is determined, judging that the current transmitter has the tasks to be transmitted with the first priority level with higher priority. If the current transmitter has the tasks to be transmitted with the first priority level, determining that the tasks to be transmitted with the first priority level need to be transmitted first, and distributing expected concurrency; and determining how many concurrency resources remain in the transmitter according to the actual concurrency of each task, and selecting the concurrency with proper size from the remaining concurrency resources as the expected concurrency of the tasks to be transmitted with the first priority level. And then preferentially distributing resources for the tasks to be transmitted with high levels, and ensuring the timely transmission of the tasks to be transmitted with high levels.
In an example, fig. 5 is a schematic diagram of step 204 in the second embodiment of the present application, and as shown in fig. 5, step 204 specifically includes the following steps:
2041. and determining the actual concurrency sum of the current transmitter according to the actual concurrency of each task, wherein the actual concurrency sum is the actual concurrency sum of each task.
2042. And when the actual concurrency sum is determined to be smaller than a preset first concurrency threshold, determining that the resources in the current transmitter can support the transmission of the tasks to be transmitted with the first priority level, wherein the first concurrency threshold is a difference value between a second concurrency threshold and a first preset threshold difference, the first concurrency threshold is smaller than the second concurrency threshold, and the second concurrency threshold is equal to the total concurrency threshold.
2043. And when determining that the current transmitter has the tasks to be transmitted with the first priority level, determining the expected concurrency of the tasks to be transmitted with the first priority level according to the second concurrency threshold and the sum of the actual concurrency.
2044. And when it is determined that the actual concurrency sum is greater than or equal to the first concurrency threshold, determining that the resources in the current transmitter cannot support transmission of the tasks to be transmitted with the first priority level, and determining to compare the actual concurrency sum with the first concurrency threshold again after a preset time.
In one example, step 2043 specifically includes:
the task scheduling method comprises the steps of firstly, obtaining ideal concurrency of tasks to be transmitted with a first priority level, wherein the ideal concurrency represents the minimum number of resources which can be used by the tasks, and the transmission time of the tasks under the ideal concurrency is minimum.
Determining the expected concurrency of the tasks to be transmitted with the first priority level according to a second concurrency threshold, the actual concurrency sum, the ideal concurrency of the tasks to be transmitted with the first priority level and a third concurrency threshold; wherein the third concurrency threshold is the maximum expected concurrency of a single task; the expected concurrency of the tasks to be transmitted with the first priority level is less than or equal to the ideal concurrency of the tasks to be transmitted with the first priority level, and the expected concurrency of the tasks to be transmitted with the first priority level is less than or equal to a third concurrency threshold.
Wherein the second step specifically comprises: determining a first value according to the second concurrency threshold and the actual concurrency sum, where the first value is the remaining concurrency of the current transmitter (the second concurrency threshold minus the actual concurrency sum); determining a second value as the minimum of the ideal concurrency of the task to be transmitted at the first priority level and the first value; determining a third value as the maximum of a first preset value and the second value, where the first preset value is an integer greater than 0; and determining the minimum of the third concurrency threshold and the third value as the expected concurrency of the task to be transmitted at the first priority level.
Illustratively, when step 204 is executed, a queue to be transmitted is set, and scheduling information of all tasks to be transmitted is stored in the queue to be transmitted; the scheduling information of the tasks to be transmitted comprises ideal concurrency and priority (first priority level and second priority level) of the tasks. Where the ideal degree of concurrency for the task is calculated in step 202, the ideal degree of concurrency for the task is the most suitable degree of concurrency from the perspective of the task itself. The priority can be a first priority level and a second priority level, wherein the first priority level is higher than the second priority level; for example, the priority is "high-priority task" or "general task", and the priority indicates the importance or urgency of the task.
The tasks to be transmitted in the queue to be transmitted are sorted according to the priorities of the tasks, namely, the tasks with the first priority level in the queue to be transmitted are arranged in front of the tasks with the second priority level; for example, all high-priority tasks are ranked ahead of all normal tasks. Therefore, when the transmitter receives or acquires a new task to be transmitted again, the transmitter inserts the task with the first priority level in front of all the tasks with the second priority level. When the transmitter transmits the tasks to be transmitted in the queue to be transmitted, the first task to be transmitted at the head of the queue to be transmitted is taken out firstly, and then the next task to be transmitted in the queue to be transmitted is sent.
A transmission list can be provided, and scheduling information of all the tasks being transmitted is stored in the transmission list and comprises ideal concurrency of the tasks, expected concurrency of the tasks, actual concurrency of the tasks and completion marks of the tasks.
Wherein, at step 202, an ideal degree of concurrency has been assigned to each task; and, the value of the ideal degree of concurrency is fixed; thus, when a task is taken out of the queue to be transmitted by the transmitter, the task is configured with an ideal concurrency degree; the value of the ideal degree of concurrency of the tasks does not change.
The expected concurrency of the task is calculated in step 204, that is, in real time: after the transmitter takes a task out of the queue to be transmitted and before the task is transmitted, the transmitter allocates an expected concurrency for the task according to the real-time usage of the overall concurrency resources and the ideal concurrency of the task; once the expected concurrency has been assigned to the task, its value does not change. With the expected concurrency of the task known, the number of concurrent threads or processes started when the task is transmitted is determined; the expected concurrency of the task is the result of a tradeoff between the ideal concurrency of the task and the overall concurrency resources.
The actual concurrency of the task is acquired during transmission; the initial actual concurrency of a task may be set to its expected concurrency. The transmitter can therefore adjust the actual concurrency of the task in real time while transmitting it. Over the whole transmission, the number of transmission resources (threads or processes) is at its maximum, i.e. the expected concurrency, when transmission starts, and then gradually decreases as parts of the transmission complete; the actual concurrency of a task therefore only decreases monotonically and never increases.
A task completion flag for indicating whether the task has been transmitted; the completion flag may be configured as "complete" or "incomplete". When the task is not transmitted, setting a completion flag of the task as 'incomplete'; then, the transmitter starts a corresponding amount of resources to transmit the task according to the expected concurrency of the task; after the task is transferred, the completion flag for the task will be set to "complete".
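The scheduling information described above can be pictured with the following Python sketch; the field names are illustrative assumptions rather than the patent's data layout:

```python
from dataclasses import dataclass

@dataclass
class PendingTask:              # entry in the queue to be transmitted
    ideal_concurrency: int      # fixed when assigned in step 202, never changes
    high_priority: bool         # True for the first priority level, False for the second

@dataclass
class TransmittingTask:         # entry in the transmission list
    ideal_concurrency: int      # copied from the pending entry
    expected_concurrency: int   # fixed once allocated in step 204/205
    actual_concurrency: int     # starts at the expected concurrency and only decreases
    complete: bool = False      # set to True once the task has been transmitted
```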
Based on the queue to be transmitted and the transmission list, the resource allocation procedure of step 204 is performed. First, step a is executed: the transmitter checks the completion flags of all tasks in the transmission list; a task whose completion flag is "complete" is deleted from the transmission list; a task whose completion flag is "incomplete" still needs to be allocated an expected concurrency and transmitted according to its priority.
The actual concurrency of the tasks in transmission can be obtained, so that the actual concurrency of each task can be summed to obtain the sum of the actual concurrency of each task; and taking the sum of the actual concurrency of each task as the sum of the actual concurrency of the current transmitter.
After step a, step b is performed, comparing the actual concurrency sum with a first concurrency threshold. Wherein, the first concurrency threshold refers to a high-priority concurrency lower limit; the first concurrency threshold is smaller than the second concurrency threshold, and the second concurrency threshold is equal to the total concurrency threshold, namely the first concurrency threshold is smaller than the total concurrency threshold; and, the first concurrency threshold is a difference between the second concurrency threshold and a first preset threshold difference, and the first preset threshold difference may be an empirical value or a preset value.
The total concurrency threshold refers to an upper limit on the sum of the actual concurrency of all tasks on the transmitter; at any moment the actual concurrency sum cannot exceed the total concurrency threshold, i.e. the total concurrency threshold acts as the overall concurrency control. The total concurrency threshold is related to the maximum bandwidth of the transmitter and can be used to limit that bandwidth, which in turn enables bandwidth allocation among transmitters; it can also be changed in real time to adjust bandwidth allocation and scheduling between different transmitters (described in the embodiments below). Because the total concurrency threshold can change in real time, it needs an initial value, the initial total concurrency threshold; this initial value may be a relatively large concurrency value, larger than the high-priority concurrency upper limit. The initial total concurrency threshold is, for example, "preset concurrency + high-priority concurrency threshold difference + common concurrency threshold difference", where the preset concurrency is the number of concurrencies at which the transmitter just saturates the network card; for example, by empirical values the single-concurrency transmission rate is 1 MB/s to 40 MB/s, for example 10 MB/s to 30 MB/s, and if the single-concurrency transmission rate is 15 MB/s, the number of concurrencies that just saturates the network card is 83. Because the initial total concurrency threshold may be a relatively large value, the transmitter can make full use of the bandwidth while the total concurrency threshold is not yet imposing a limit.
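As a back-of-the-envelope illustration of the initial total concurrency threshold, the sketch below assumes a roughly 10 Gbit/s network card (about 1250 MB/s), the 15 MB/s single-concurrency rate mentioned above, and illustrative threshold differences; all of these numbers are assumptions, not prescribed values:

```python
card_bandwidth_mb_s = 1250          # assumption: ~10 Gbit/s network card
single_concurrency_mb_s = 15        # empirical single-concurrency rate from the text
preset_concurrency = card_bandwidth_mb_s // single_concurrency_mb_s   # 83: just saturates the card

high_priority_threshold_diff = 20   # assumed value
common_threshold_diff = 15          # assumed value
initial_total_threshold = (preset_concurrency
                           + high_priority_threshold_diff
                           + common_threshold_diff)                    # 83 + 20 + 15 = 118
```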
The first preset threshold difference refers to the high-priority concurrency threshold difference, i.e. the difference between the high-priority concurrency upper limit and the high-priority concurrency lower limit. The high-priority concurrency upper limit is equal to the total concurrency threshold; the high-priority concurrency lower limit is equal to the high-priority concurrency upper limit minus the high-priority concurrency threshold difference. The "first concurrency threshold" is the high-priority concurrency lower limit, the "second concurrency threshold" is the high-priority concurrency upper limit, and the "second concurrency threshold" is also the total concurrency threshold. The actual concurrency sum must always be less than or equal to the second concurrency threshold (i.e. actual concurrency sum ≤ high-priority concurrency upper limit).
After step b, performing step c: when the actual concurrency sum is smaller than a preset first concurrency threshold (namely, the actual concurrency sum is smaller than a high-priority concurrency lower limit), determining that idle resources of the current transmitter can support transmission of the tasks to be transmitted at the first priority level, namely, the concurrent resources allocated to the high-priority tasks are sufficient, and further allowing allocation of expected concurrency to new tasks to be transmitted at the first priority level (namely, the high-priority tasks).
After step c, performing step d: at this time, whether the head of the queue to be transmitted has a task to be transmitted with a first priority level needs to be judged first; if the current transmitter has a task to be transmitted with a first priority level, taking out a task to be transmitted with the first priority level from the head of the queue to be transmitted; and adding the scheduling information of the tasks to be transmitted with the first priority level in the transmission list. Setting a completion flag of the tasks to be transmitted with the first priority level as 'incomplete'; the "ideal concurrency degree" of the tasks to be transmitted at the first priority level is the "ideal concurrency degree" when the tasks to be transmitted at the first priority level are taken out from the queue to be transmitted. And then, according to the second concurrency threshold and the sum of the actual concurrency, determining the expected concurrency of the tasks to be transmitted with the first priority level. Further, when determining that the resources are sufficient to allocate the resources for the high-priority task, allocating the resources for the high-priority task first; ensuring that the high priority task can be preferentially allocated the desired degree of concurrency, i.e., ensuring that the high priority task can be preferentially allocated resources.
In step d, when calculating the expected concurrency of the to-be-transmitted task with the first priority level, firstly, the ideal concurrency of the current to-be-transmitted task with the first priority level is obtained in the previous step. A third concurrency threshold (i.e., an individual concurrency threshold) is provided, the third concurrency threshold referring to a maximum expected concurrency for a single task; the third concurrency threshold is a general limit to the expected concurrency assigned to any task, and the expected concurrency of any task cannot exceed the third concurrency threshold (i.e., the expected concurrency of any task cannot exceed the individual concurrency threshold). Furthermore, a second concurrency threshold (i.e., a high-priority concurrency upper limit), an actual concurrency sum, an ideal concurrency of the to-be-transmitted tasks at the first priority level, and a third concurrency threshold (i.e., an individual concurrency threshold) may be adopted to constrain the expected concurrency of the to-be-transmitted tasks, so that the expected concurrency of the to-be-transmitted tasks at the first priority level is less than or equal to the ideal concurrency of the to-be-transmitted tasks at the first priority level, and the expected concurrency of the to-be-transmitted tasks at the first priority level is less than or equal to the third concurrency threshold.
Because an individual concurrency threshold is adopted to limit the expected concurrency of a single task, one task can be prevented from monopolizing all or most of the concurrency. Moreover, because some disk Input/Output (IO) interfaces are slower than network IO interfaces, if the expected concurrency of a single task were not limited, the bandwidth of the network card could not be fully utilized; by limiting the expected concurrency of a single task, multiple tasks can be transmitted simultaneously and the network card bandwidth is shared and fully used.
In one example, when calculating the expected concurrency of the task to be transmitted at the first priority level: first, the actual concurrency sum is subtracted from the second concurrency threshold (i.e. the high-priority concurrency upper limit) to obtain the remaining concurrency of the current transmitter, denoted the first value; then, the minimum of the ideal concurrency of the task to be transmitted at the first priority level and the first value is taken as the second value; the maximum of the first preset value and the second value is taken as the third value, where the first preset value is an integer greater than 0; and finally, the minimum of the third concurrency threshold and the third value is taken as the expected concurrency of the task to be transmitted at the first priority level. The above process can be summarized as formula 1: "expected concurrency of the task to be transmitted at the first priority level" = min(third concurrency threshold, max(first preset value, min(ideal concurrency, second concurrency threshold - actual concurrency sum))). That is, "expected concurrency of the task to be transmitted at the first priority level" = min(individual concurrency threshold, max(first preset value, min(ideal concurrency, high-priority concurrency upper limit - actual concurrency sum))). For example, if the first preset value is 1, formula 1 is "expected concurrency of the task to be transmitted at the first priority level" = min(individual concurrency threshold, max(1, min(ideal concurrency, high-priority concurrency upper limit - actual concurrency sum))).
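Formula 1 can be transcribed directly into the following Python sketch (hypothetical parameter names; a transcription of the formula above, not the patent's code):

```python
def expected_concurrency_high_priority(ideal, actual_sum, high_upper_limit,
                                       individual_threshold, first_preset=1):
    """Formula 1: expected concurrency of a task at the first priority level."""
    remaining = high_upper_limit - actual_sum   # first value: concurrency still available
    bounded = min(ideal, remaining)             # second value: capped by the task's ideal concurrency
    floored = max(first_preset, bounded)        # third value: at least the first preset value
    return min(individual_threshold, floored)   # never exceed the individual concurrency threshold
```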
Through the calculation of formula 1, the expected concurrency of a task is obtained by balancing the ideal concurrency of the task against the concurrency that can actually be allocated at the moment. The expected concurrency of a task is also kept from being too large or too small: it is forced into the interval [first preset value, individual concurrency threshold], so that the expected concurrency will not fail to meet the transmission needs of the task. When transmission is performed in step 206, the "expected concurrency" number of resources of the task is started (for example, asynchronous transmission threads are started) to transmit the task to be transmitted at the first priority level, and these transmission resources update the actual concurrency of the task in real time. After step d is executed, the flow jumps back to step a and executes it again, so that the next high-priority task can again be allocated an expected concurrency.
Also, it should be noted that after the task to be transmitted at the first priority level is put into transmission, "actual concurrency sum ≤ high-priority concurrency upper limit", i.e. "actual concurrency sum ≤ total concurrency threshold", must still be satisfied. Limiting the actual concurrency sum of tasks in the transmitter so that it can never exceed the total concurrency threshold also limits the bandwidth of the transmitter, reserving bandwidth for other transmitters.
After step b, step e is performed: when the actual concurrency sum is greater than or equal to the preset first concurrency threshold (i.e. greater than or equal to the high-priority concurrency lower limit), it is determined that the idle resources of the current transmitter cannot support transmission of the tasks to be transmitted at the first priority level, i.e. the concurrency resources that can be allocated to high-priority tasks are insufficient. After waiting a preset time (e.g. 5 seconds), the actual concurrency sum is compared with the first concurrency threshold again, i.e. the flow jumps back to step a. In this case it is determined that resources are insufficient to allocate to a high-priority task, so the flow waits for a period of time for resources to be released and then decides again whether resources can be allocated to a high-priority task; this ensures that high-priority tasks are preferentially allocated the expected concurrency, i.e. preferentially allocated resources.
205. And when the current transmitter is determined not to have the tasks to be transmitted with the first priority level, determining the expected concurrency of the tasks to be transmitted with the second priority level according to the actual concurrency of each task, wherein the second priority level is smaller than a preset priority threshold value.
For example, if it is determined that the resources in the current transmitter can support transmission of the tasks to be transmitted at the first priority level, it is then determined whether the current transmitter has a task to be transmitted at the first (higher) priority level. If the current transmitter has no task to be transmitted at the first priority level, no first-priority task needs to be transmitted first, and it is determined that an expected concurrency should instead be allocated to a task to be transmitted at the second priority level: the amount of concurrency resources remaining in the transmitter is determined from the actual concurrency of each task, and a suitably sized portion of the remaining concurrency is selected as the expected concurrency of the task to be transmitted at the second priority level. Because this happens only after step 204, resources are allocated to lower-priority tasks only after higher-priority tasks, ensuring timely transmission of the high-priority tasks to be transmitted.
In an example, fig. 6 is a schematic diagram of step 205 in the second embodiment of the present application, and as shown in fig. 6, step 205 specifically includes the following steps:
2051. and when the actual concurrency sum is determined to be smaller than a first concurrency threshold, determining that the resources in the current transmitter can support the transmission of the tasks to be transmitted with the first priority level.
2052. When the current transmitter is determined not to have the task to be transmitted with the first priority level, determining whether the actual concurrency sum is greater than or equal to a preset fourth concurrency threshold, wherein the fourth concurrency threshold is a difference value between a fifth concurrency threshold and a second preset threshold difference, the fourth concurrency threshold is smaller than the fifth concurrency threshold, and the fifth concurrency threshold is equal to the first concurrency threshold.
2053. Determining that the resource in the current transmitter can support the transmission of the task to be transmitted with the second priority level when the actual concurrency sum is smaller than a fourth concurrency threshold; and when determining that the current transmitter has the task to be transmitted with the second priority level, determining the expected concurrency of the task to be transmitted with the second priority level according to the fifth concurrency threshold and the actual concurrency sum.
Wherein, the step 2053 of determining the expected concurrency of the to-be-transmitted task at the second priority level according to the fifth concurrency threshold and the actual concurrency sum specifically includes:
A first step of step 2053: acquiring the ideal concurrency of the task to be transmitted at the second priority level, where the ideal concurrency represents the minimum number of resources with which the task's transmission time is minimized, i.e. the transmission time of the task at the ideal concurrency is the minimum.
A second step of step 2053, determining an expected concurrency of the to-be-transmitted task at the second priority level according to the fifth concurrency threshold, the actual concurrency sum, the ideal concurrency of the to-be-transmitted task at the second priority level, and the third concurrency threshold; wherein the third concurrency threshold is the maximum expected concurrency of a single task; the expected concurrency of the tasks to be transmitted with the second priority level is less than or equal to the ideal concurrency of the tasks with the second priority level, and the expected concurrency of the tasks to be transmitted with the second priority level is less than or equal to a third concurrency threshold.
In one example, the second step of step 2053 specifically includes: determining a fourth value according to the fifth concurrency threshold, the actual concurrency sum and a second preset value, where the fourth value is the remaining concurrency of the current transmitter (the fifth concurrency threshold minus the second preset value minus the actual concurrency sum) and the second preset value is an integer greater than or equal to 1; determining a fifth value as the minimum of the ideal concurrency of the task to be transmitted at the second priority level and the fourth value; determining a sixth value as the maximum of a third preset value and the fifth value, where the third preset value is an integer greater than 0; and determining the minimum of the third concurrency threshold and the sixth value as the expected concurrency of the task to be transmitted at the second priority level.
2054. And when the actual concurrency sum is determined to be greater than or equal to the fourth concurrency threshold, determining that the resources in the current transmitter cannot support the transmission of the tasks to be transmitted with the second priority level, and determining to compare the actual concurrency sum with the first concurrency threshold again after a preset time.
2055. Determining that the resource in the current transmitter can support the transmission of the task to be transmitted with the second priority level when the actual concurrency sum is smaller than a fourth concurrency threshold; and when the current transmitter is determined not to have the task to be transmitted with the second priority level, determining that the current transmitter does not have the task to be transmitted, and determining that the actual concurrency sum and the first concurrency threshold are compared again after the preset time.
Illustratively, when step 205 is executed, the pending transmission queue and the transmission list in step 204 are also set. And, a first concurrency threshold (i.e., a lower high-priority concurrency limit) and a second concurrency threshold (i.e., an upper high-priority concurrency limit and a total concurrency threshold) are configured.
And a fourth concurrency threshold, a fifth concurrency threshold, and a second preset threshold difference may also be configured, where the second preset threshold difference is a difference between the fifth concurrency threshold and the fourth concurrency threshold, and the fourth concurrency threshold is smaller than the fifth concurrency threshold. Wherein, the fourth concurrency threshold refers to a lower limit of common concurrency; a fifth concurrency threshold, which refers to an upper limit of the common concurrency; the second preset threshold difference refers to a difference between an upper limit of the normal concurrency and a lower limit of the normal concurrency, that is, refers to a threshold difference of the normal concurrency. Meanwhile, a fifth concurrency threshold is set equal to the first concurrency threshold (i.e., a high-priority concurrency lower limit).
It can be known that the second preset threshold difference is the fifth concurrency threshold-the fourth concurrency threshold, that is, the lower common concurrency limit is the upper common concurrency limit-the common concurrency threshold difference.
Fig. 7 is a comparison diagram of the thresholds provided in an embodiment of the present application. As shown in fig. 7, it shows the relationship between the first concurrency threshold (i.e. the high-priority concurrency lower limit), the second concurrency threshold (i.e. the high-priority concurrency upper limit, which is also the total concurrency threshold), the first preset threshold difference (i.e. the high-priority concurrency threshold difference), the third concurrency threshold (i.e. the individual concurrency threshold), the fourth concurrency threshold (i.e. the common concurrency lower limit), the fifth concurrency threshold (i.e. the common concurrency upper limit) and the second preset threshold difference (i.e. the common concurrency threshold difference). The high-priority concurrency upper limit is set equal to the total concurrency threshold, and the total concurrency threshold is initially set to the initial total concurrency threshold; the high-priority concurrency upper limit is greater than the high-priority concurrency lower limit; the high-priority concurrency lower limit is equal to the common concurrency upper limit, and the common concurrency upper limit is greater than the common concurrency lower limit; an individual concurrency threshold is configured for any task, and the expected concurrency of any task cannot exceed the individual concurrency threshold. In one example, the "high-priority concurrency threshold difference" is set to be greater than the "common concurrency threshold difference"; because of this, tasks of the first priority level (i.e. high-priority tasks) can occupy more resources than tasks of the second priority level (i.e. common tasks).
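The relationships shown in fig. 7 can be summarized by the sketch below (hypothetical names; only the relationships stated above are assumed):

```python
def derive_thresholds(total_threshold, high_priority_diff, common_diff, individual_threshold):
    high_upper = total_threshold                  # second concurrency threshold
    high_lower = high_upper - high_priority_diff  # first concurrency threshold
    common_upper = high_lower                     # fifth concurrency threshold
    common_lower = common_upper - common_diff     # fourth concurrency threshold
    # In one example high_priority_diff > common_diff, so high-priority tasks
    # can occupy more resources than common tasks.
    return high_upper, high_lower, common_upper, common_lower, individual_threshold
```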
When it is determined that the resources in the current transmitter can support transmission of the tasks to be transmitted at the first priority level but the current transmitter has no task to be transmitted at the first priority level, it is determined that an expected concurrency should be allocated to a task to be transmitted at the second priority level. Following steps a-c in step 204, after step c, step f is performed: since the current transmitter has no task to be transmitted at the first priority level, it is checked whether a task to be transmitted at the second priority level exists, and it is judged whether the actual concurrency sum is greater than or equal to the fourth concurrency threshold (i.e. the common concurrency lower limit).
After step f, step g is performed: if the actual concurrency sum is smaller than the fourth concurrency threshold (namely, the lower limit of the common concurrency), determining that the resources in the current transmitter can support the transmission of the tasks to be transmitted with the second priority level, namely, determining that the resources are enough to allocate the expected concurrency for the common tasks, and further allowing the new common tasks to be allocated with the expected concurrency.
After step g, step h is performed: at this time, whether the head of the queue to be transmitted has a task to be transmitted with a second priority level needs to be judged first; if the current transmitter has a task to be transmitted with a second priority level, taking out a task to be transmitted with the second priority level from the head of the queue to be transmitted; and adding the scheduling information of the tasks to be transmitted with the second priority level in the transmission list. Setting a completion flag of the task to be transmitted with the second priority level as 'incomplete'; the "ideal concurrency" of the task to be transmitted at the second priority level is the "ideal concurrency" when the task to be transmitted at the second priority level is taken out from the queue to be transmitted. And then, according to the fifth concurrency threshold and the sum of the actual concurrency, determining the expected concurrency of the tasks to be transmitted with the second priority level. Furthermore, when determining that resources do not need to be allocated to the high-priority task, if determining that the resources are enough to allocate the resources to the common task, allocating the resources to the common task; ensuring that generic tasks can also be assigned a desired degree of concurrency ensures that generic tasks can be transmitted.
In step h, when calculating the expected concurrency of the task to be transmitted at the second priority level, the ideal concurrency of the current task to be transmitted at the second priority level has first been obtained in the previous step. A third concurrency threshold (i.e. an individual concurrency threshold) is provided, the third concurrency threshold referring to the maximum expected concurrency of a single task; the third concurrency threshold is a general limit on the expected concurrency assigned to any task, and the expected concurrency of any task cannot exceed the third concurrency threshold (i.e. cannot exceed the individual concurrency threshold). Furthermore, the fifth concurrency threshold (i.e. the common concurrency upper limit), the actual concurrency sum, the ideal concurrency of the task to be transmitted at the second priority level and the third concurrency threshold (i.e. the individual concurrency threshold) may be adopted to constrain the expected concurrency of the task to be transmitted, so that the expected concurrency of the task to be transmitted at the second priority level is less than or equal to its ideal concurrency and less than or equal to the third concurrency threshold (i.e. the individual concurrency threshold).
In one example, when calculating the expected concurrency of the task to be transmitted at the second priority level: first, the second preset value and the actual concurrency sum are subtracted from the fifth concurrency threshold (i.e. the common concurrency upper limit) to obtain the remaining concurrency of the current transmitter, denoted the fourth value; the second preset value is an integer greater than or equal to 1, for example 1. Then, the minimum of the ideal concurrency of the task to be transmitted at the second priority level and the fourth value is taken as the fifth value; the maximum of the third preset value and the fifth value is taken as the sixth value, where the third preset value is an integer greater than 0, for example 1; and finally, the minimum of the third concurrency threshold and the sixth value is taken as the expected concurrency of the task to be transmitted at the second priority level.
The above process can be summarized as formula 2: "expected concurrency of the task to be transmitted at the second priority level" = min(third concurrency threshold, max(third preset value, min(ideal concurrency, fifth concurrency threshold - second preset value - actual concurrency sum))). That is, "expected concurrency of the task to be transmitted at the second priority level" = min(individual concurrency threshold, max(third preset value, min(ideal concurrency, common concurrency upper limit - second preset value - actual concurrency sum))). For example, if the second preset value is 1 and the third preset value is 1, formula 2 is "expected concurrency of the task to be transmitted at the second priority level" = min(individual concurrency threshold, max(1, min(ideal concurrency, common concurrency upper limit - 1 - actual concurrency sum))).
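Formula 2 differs from formula 1 only in the upper limit used and in the reserved second preset value; a matching Python sketch (hypothetical names) is:

```python
def expected_concurrency_common(ideal, actual_sum, common_upper_limit,
                                individual_threshold, second_preset=1, third_preset=1):
    """Formula 2: expected concurrency of a task at the second priority level."""
    remaining = common_upper_limit - second_preset - actual_sum  # fourth value: room left after the reserve
    bounded = min(ideal, remaining)                              # fifth value
    floored = max(third_preset, bounded)                         # sixth value
    return min(individual_threshold, floored)                    # never exceed the individual concurrency threshold
```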
Through the calculation of formula 2, the expected concurrency of the task is likewise obtained by balancing the ideal concurrency of the task against the concurrency that can actually be allocated at the moment. Different from formula 1, the second preset value is additionally subtracted, so that some resources are reserved; when the next high-priority task arrives, sufficient resources can still be allocated to the new high-priority task. In other words, the high-priority concurrency lower limit is never fully reached, and a certain amount of resources is kept for high-priority tasks. The second preset value may be 1; if the second preset value is an integer greater than 1 (for example 2 or 3), more idle resources are left for new high-priority tasks. When transmission is performed in step 206, the "expected concurrency" number of resources of the task is started (for example, asynchronous transmission threads are started) to transmit the task to be transmitted at the second priority level, and these transmission resources update the actual concurrency of the task in real time. After step h is executed, the flow jumps back to step a and executes it again, so that the next high-priority task can again be allocated an expected concurrency first.
After step g, step i is performed: at this time, if the current transmitter has no task to be transmitted at the second priority level, it is determined that the queue to be transmitted is empty and there is no task in the transmitter that needs to be transmitted. Then, after waiting a preset time (e.g. 5 seconds), the actual concurrency sum is compared with the first concurrency threshold again, i.e. the flow jumps back to step a. When it is determined that there is no task to be transmitted, the flow waits for a period of time for resources to be released and then decides again whether resources can be allocated to a high-priority task; this ensures that the expected concurrency is always allocated to high-priority tasks first, and thus that high-priority tasks are transmitted first.
After step f, step j is performed: if the actual concurrency sum is determined to be greater than or equal to the fourth concurrency threshold (i.e. the common concurrency lower limit), it is determined that the resources in the current transmitter cannot support transmission of tasks to be transmitted at the second priority level, i.e. resources are insufficient to allocate an expected concurrency to a common task and the idle concurrency resources of the transmitter are insufficient. Then, after waiting a preset time (e.g. 5 seconds), the actual concurrency sum is compared with the first concurrency threshold again, i.e. the flow jumps back to step a. Since it is determined that resources are insufficient to allocate to a common task, the flow waits for a period of time for resources to be released and then decides again whether resources can be allocated to a high-priority task; this ensures that the expected concurrency is always allocated to high-priority tasks first, and thus that high-priority tasks are transmitted first.
Also, it should be noted that after the task to be transmitted of the second priority level is transmitted, it is necessary to satisfy "the sum of actual concurrency degrees < the upper limit of ordinary concurrency degrees".
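Putting steps a-j together, one scheduling round could look like the sketch below; it reuses the PendingTask/TransmittingTask structures and the two formula sketches given earlier, and is a simplified assumption about the flow rather than the patent's implementation (the return value is how long to wait before running step a again):

```python
def scheduling_round(pending_queue, transmission_list, thresholds, wait_seconds=5):
    high_upper, high_lower, common_upper, common_lower, individual = thresholds
    # step a: drop completed tasks, then sum the actual concurrency of the rest
    transmission_list[:] = [t for t in transmission_list if not t.complete]
    actual_sum = sum(t.actual_concurrency for t in transmission_list)
    # steps b-e: tasks at the first priority level are considered first
    if actual_sum >= high_lower:
        return wait_seconds                          # step e: wait, then run step a again
    if pending_queue and pending_queue[0].high_priority:
        task = pending_queue.pop(0)                  # step d: take the high-priority task at the head
        expected = expected_concurrency_high_priority(
            task.ideal_concurrency, actual_sum, high_upper, individual)
        transmission_list.append(TransmittingTask(
            task.ideal_concurrency, expected, expected))  # initial actual = expected (step 206)
        return 0
    # steps f-j: only then are tasks at the second priority level considered
    if not pending_queue or actual_sum >= common_lower:
        return wait_seconds                          # steps i/j: wait, then run step a again
    task = pending_queue.pop(0)                      # step h: take the common task at the head
    expected = expected_concurrency_common(
        task.ideal_concurrency, actual_sum, common_upper, individual)
    transmission_list.append(TransmittingTask(
        task.ideal_concurrency, expected, expected))
    return 0
```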
According to the scheme provided by this embodiment, because resources (i.e. concurrency) are allocated to the tasks in the queue to be transmitted one by one, waiting for resources to be released when necessary, the situation where the remaining allocatable total resources (i.e. the total remaining concurrency) are very small, for example only 2 concurrencies left, does not occur: a task of the second priority level (i.e. a common task) can in any case be allocated an expected concurrency of at least the number represented by the common concurrency threshold difference, and a task of the first priority level (i.e. a high-priority task) can be allocated an expected concurrency of at least the number represented by the high-priority concurrency threshold difference; only when the ideal concurrency of a task is itself very small is the expected concurrency allocated to it small, because the task itself requires only a small expected concurrency. Since all tasks of the second priority level (common tasks) together can only occupy at most "total concurrency threshold - high-priority concurrency threshold difference - second preset value" concurrencies, a task of the first priority level (a high-priority task) can always be processed immediately, and the processing of high-priority tasks is never blocked by tasks of the second priority level (common tasks).
206. And starting, for each task to be transmitted, resources corresponding to the number represented by its expected concurrency, and transmitting each task to be transmitted respectively.
Illustratively, after step 204, this step 206 is executed to start the "expected concurrency" number of resources of the task to be transmitted at the first priority level (for example, start an asynchronous transmission thread, or start a parallel process) to transmit the task to be transmitted at the first priority level.
After step 205, this step 206 is executed, and the "expected concurrency" number of resources of the task to be transmitted at the second priority level is started (for example, an asynchronous transmission thread is started, or a parallel process is started), so as to transmit the task to be transmitted at the second priority level.
For example, fig. 8 is a schematic diagram of a queue to be transmitted and a transmission list provided in the present application, and as shown in fig. 8, there are 4 tasks in the queue to be transmitted, which are task t1, task t2, task t3, and task t4 from head to tail, respectively. The tasks are sorted from high to low according to priority. Among them, task t1 is a high-priority task (task of the first priority level), and the ideal concurrency is 30; task t2, which is a high-priority task (task of the first priority level), the ideal concurrency is 15; task t3, which is a normal task (task of second priority level), the ideal degree of concurrency is 10; task t4, which is a normal task (task of the second priority level), has an ideal concurrency of 20. When the queue to be transmitted is processed, the task at the head is processed.
The transmission list has records of 3 tasks, one task is transmitted and completed, the ideal concurrency is 30, the expected concurrency is 15, and the actual concurrency is 0; two tasks are not transmitted, the ideal concurrency degree of one task is 25, the expected concurrency degree is 25, and the actual concurrency degree is 3; the ideal concurrency 50 for another task, the desired concurrency 40, and the actual concurrency 7.
Fig. 9 is a schematic diagram showing a specific comparison of thresholds provided in the present application. As shown in fig. 9, the following thresholds are set: the initial total concurrency threshold is 120, the total concurrency threshold is 50, the high-priority concurrency threshold difference is 20, the common concurrency threshold difference is 15, and the individual concurrency threshold is 40; it follows that the high-priority concurrency upper limit is 50, the high-priority concurrency lower limit (i.e. the common concurrency upper limit) is 30, and the common concurrency lower limit is 15.
In the examples shown in fig. 8 and fig. 9, based on the current state and using the scheme provided by this embodiment, all tasks in the transmission list are first checked; one "complete" task is found and deleted. It can then be determined that there are two "incomplete" tasks whose actual concurrency is 3 and 7 respectively, giving an actual concurrency sum of 10. The actual concurrency sum 10 is compared with the high-priority concurrency lower limit 30; since 10 is less than 30, it is further judged whether the head of the queue to be transmitted has a high-priority task. The head of the queue to be transmitted has the high-priority task t1, so task t1 is taken out of the queue, its scheduling information is added to the transmission list, the "completion flag" is set to "incomplete", the "ideal concurrency" is set to 30, and the expected concurrency of task t1 is calculated as min(individual concurrency threshold, max(1, min(ideal concurrency, high-priority concurrency upper limit - actual concurrency sum))) = min(40, max(1, min(30, 50 - 10))) = 30; at this time, the actual concurrency of task t1 may also be determined to be 30. Then, 30 resources (e.g. 30 asynchronous transmission threads) are started to transmit task t1. Finally, the flow jumps back to the step of "checking all the tasks in the transmission list first" and starts a new round of resource allocation.
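Using the formula-1 sketch from above together with the numbers of fig. 8 and fig. 9 (actual concurrency sum 3 + 7 = 10), the allocation for task t1 can be checked as follows:

```python
expected_t1 = expected_concurrency_high_priority(
    ideal=30, actual_sum=10, high_upper_limit=50, individual_threshold=40)
# min(40, max(1, min(30, 50 - 10))) = min(40, 30) = 30
assert expected_t1 == 30
```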
In this embodiment, when it is determined, according to the total concurrency threshold and the actual concurrency of each task, that the resources in the current transmitter can support transmission of tasks to be transmitted at the first priority level: if the current transmitter has a task to be transmitted at the first priority level, the expected concurrency of that task is determined according to the actual concurrency of each task; if the current transmitter has no task to be transmitted at the first priority level, the expected concurrency of a task to be transmitted at the second priority level is determined according to the actual concurrency of each task, where the first priority level is higher than the second priority level. Concurrency is therefore allocated preferentially to high-priority tasks. When the expected concurrency is allocated to the tasks at the first and second priority levels respectively, several thresholds are configured to limit the allocated expected concurrency: the actual concurrency sum of all tasks being transmitted on the current transmitter is checked periodically, and when it is lower than a certain threshold, one task to be transmitted is taken out of the queue to be transmitted, allocated an expected concurrency and transmitted. For example, if the actual concurrency sum is lower than the first concurrency threshold (i.e. the high-priority concurrency lower limit), it is determined that the resources can support transmission of a task at the first priority level and one task at the first priority level (a high-priority task) is taken out of the queue to be transmitted; if the actual concurrency sum is lower than the fourth concurrency threshold (i.e. the common concurrency lower limit), it is determined that the resources can support transmission of a task at the second priority level and one task at the second priority level (a common task) is taken out of the queue to be transmitted; otherwise, no new task to be transmitted is taken for transmission for the time being, and the transmitter waits for resources to become available. At all times the actual concurrency sum does not exceed the total concurrency threshold, and the situation of having no reserved concurrency resources is avoided. With a total concurrency threshold set on the transmitter, the method provided by this embodiment configures an expected concurrency for each task so that each task can be transmitted as close to its ideal concurrency as possible; concurrency resources are allocated across all tasks while it is guaranteed that the transmitter's actual concurrency sum never exceeds the "total concurrency threshold" at any moment, and under this constraint each task is still allocated an approximately ideal concurrency and is transmitted effectively and quickly.
In this embodiment, concurrency resources can be reasonably allocated to the tasks to be transmitted (i.e. the expected concurrency is allocated), so that each task can be transmitted as close to its ideal concurrency as possible; concurrency resources can thus be used reasonably and released as soon as possible, avoiding idle and wasted concurrency resources. This improves the overall concurrency of the transmitter and the utilization of the transmitter's network card bandwidth, and further improves the throughput of a machine room containing multiple transmitters.
Fig. 10 is a schematic diagram of a third embodiment of the present application, and as shown in fig. 10, the present embodiment provides a data transmission apparatus based on a transmitter, which can be applied to the transmitter; the data transmission device based on the transmitter provided by the embodiment comprises:
the first obtaining unit 31 is configured to obtain a total threshold of concurrency of the current transmitter and an actual concurrency of each task in the current transmitter, where the tasks have priorities, the total threshold of concurrency is an upper limit value of a total actual concurrency, and the total actual concurrency is a total actual concurrency of each task.
The first determining unit 32 is configured to determine expected concurrency degrees of the tasks to be transmitted with different priorities according to the total concurrency degree threshold and the actual concurrency degree of each task, where the expected concurrency degree is represented by the number of resources allocated to the tasks to be transmitted, and the expected concurrency degrees of the tasks to be transmitted with different priorities are different.
The starting unit 33 is configured to start, for each task to be transmitted, resources corresponding to the number represented by its expected concurrency, and to transmit each task to be transmitted respectively.
The data transmission device based on the transmitter in this embodiment may execute the technical solutions in the embodiments of fig. 2 to fig. 3, and the specific implementation process and the technical principle are the same, and are not described herein again.
Fig. 11 is a schematic diagram of a fourth embodiment of the present application, and as shown in fig. 11, based on the embodiment shown in fig. 10, the data transmission apparatus based on a transmitter provided by the present embodiment includes:
the first determining subunit 321 is configured to determine, according to the total concurrency threshold and the actual concurrency of each task, that resources in the current transmitter can support transmission of the tasks to be transmitted at the first priority level, and when it is determined that the current transmitter has the tasks to be transmitted at the first priority level, determine, according to the actual concurrency of each task, an expected concurrency of the tasks to be transmitted at the first priority level, where the first priority level is greater than or equal to a preset priority threshold.
A second determining subunit 322, configured to determine, according to the total concurrency threshold and the actual concurrency of each task, that resources in the current transmitter can support transmission of the tasks to be transmitted at the first priority level, and when it is determined that the current transmitter does not have the tasks to be transmitted at the first priority level, determine, according to the actual concurrency of each task, an expected concurrency of the tasks to be transmitted at the second priority level, where the second priority level is smaller than the preset priority threshold.
In one example, the first determining subunit 321 includes:
the first determining module 3211 is configured to determine a total actual concurrency of the current transmitter according to the actual concurrency of each task, where the total actual concurrency is a sum of actual concurrency of each task.
A second determining module 3212, configured to determine that resources in the current transmitter can support transmission of the task to be transmitted at the first priority level when it is determined that the actual concurrency sum is smaller than a preset first concurrency threshold, where the first concurrency threshold is a difference between a second concurrency threshold and a first preset threshold difference, the first concurrency threshold is smaller than the second concurrency threshold, and the second concurrency threshold is equal to the total concurrency threshold.
The third determining module 3213 is configured to, when it is determined that the current transmitter has the to-be-transmitted task with the first priority level, determine the expected concurrency of the to-be-transmitted task with the first priority level according to the second concurrency threshold and the sum of the actual concurrency.
In one example, the third determining module 3213 includes:
the first obtaining submodule 32131 is configured to obtain an ideal concurrency degree of the task to be transmitted at the first priority level, where the ideal concurrency degree represents a minimum number of resources that can be used by the task, and a transmission time of the task at the ideal concurrency degree is minimum.
The first determining submodule 32132 is configured to determine an expected concurrency of the to-be-transmitted task at the first priority level according to the second concurrency threshold, the actual sum of the concurrencies, the ideal concurrency of the to-be-transmitted task at the first priority level, and the third concurrency threshold; wherein the third concurrency threshold is the maximum expected concurrency of a single task; the expected concurrency of the tasks to be transmitted with the first priority level is less than or equal to the ideal concurrency of the tasks to be transmitted with the first priority level, and the expected concurrency of the tasks to be transmitted with the first priority level is less than or equal to a third concurrency threshold.
In one example, the first determining submodule 32132 is specifically configured to:
determining a first value according to the second concurrency threshold and the actual concurrency sum, where the first value is the remaining concurrency of the current transmitter; determining a second value as the minimum of the ideal concurrency of the task to be transmitted at the first priority level and the first value; determining a third value as the maximum of a first preset value and the second value, where the first preset value is an integer greater than 0; and determining the minimum of the third concurrency threshold and the third value as the expected concurrency of the task to be transmitted at the first priority level.
In an example, the first determining subunit 321 further includes:
a fourth determining module 3214, configured to determine that the resource in the current transmitter cannot support transmission of the task to be transmitted at the first priority level when it is determined that the actual total concurrency is greater than or equal to the first concurrency threshold, and determine to compare the actual total concurrency with the first concurrency threshold again after a preset time.
In one example, the second determining subunit 322 includes:
a fifth determining module 3221 is configured to determine that the resources in the current transmitter may support transmission of the task to be transmitted at the first priority level when it is determined that the actual total concurrency is smaller than the first concurrency threshold.
A sixth determining module 3222 is configured to, when it is determined that the current transmitter does not have the task to be transmitted at the first priority level, determine whether an actual concurrency sum is greater than or equal to a preset fourth concurrency threshold, where the fourth concurrency threshold is a difference between a fifth concurrency threshold and a second preset threshold, the fourth concurrency threshold is smaller than the fifth concurrency threshold, and the fifth concurrency threshold is equal to the first concurrency threshold.
A seventh determining module 3223, configured to determine that resources in the current transmitter may support transmission of the task to be transmitted at the second priority level when it is determined that the actual total concurrency is smaller than the fourth concurrency threshold; and when determining that the current transmitter has the task to be transmitted with the second priority level, determining the expected concurrency of the task to be transmitted with the second priority level according to the fifth concurrency threshold and the actual concurrency sum.
In one example, the seventh determining module 3223 includes:
the second obtaining sub-module 32231 is configured to obtain an ideal concurrency degree of the tasks to be transmitted at the second priority level, where the ideal concurrency degree represents a minimum number of resources that can be used by the tasks, and a transmission time of the tasks at the ideal concurrency degree is minimum.
The second determining sub-module 32232 is configured to determine, according to the fifth concurrency threshold, the actual concurrency sum, the ideal concurrency of the to-be-transmitted task at the second priority level, and the third concurrency threshold, an expected concurrency of the to-be-transmitted task at the second priority level; wherein the third concurrency threshold is the maximum expected concurrency of a single task; the expected concurrency of the tasks to be transmitted with the second priority level is less than or equal to the ideal concurrency of the tasks with the second priority level, and the expected concurrency of the tasks to be transmitted with the second priority level is less than or equal to a third concurrency threshold.
In one example, the second determining sub-module 32232 is specifically configured to:
determining a fourth intermediate value according to the fifth concurrency threshold, the actual concurrency sum and a second preset value, wherein the fourth intermediate value is the remaining available concurrency of the current transmitter, and the second preset value is an integer greater than or equal to 1; determining the minimum value between the ideal concurrency of the task to be transmitted at the second priority level and the fourth intermediate value as a fifth intermediate value; determining the maximum value between a third preset value and the fifth intermediate value as a sixth intermediate value, wherein the third preset value is an integer greater than 0; and determining the minimum value between the third concurrency threshold and the sixth intermediate value as the expected concurrency of the task to be transmitted at the second priority level.
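The second-priority case follows the same pattern as the sketch above; the only difference, under the assumption (not stated explicitly in the patent) that the remaining concurrency additionally subtracts the second preset value, is the first step. Names and defaults are again illustrative.

```python
def expected_concurrency_second_priority(fifth_threshold, actual_sum,
                                          ideal_concurrency, third_threshold,
                                          second_preset_value=1,
                                          third_preset_value=1):
    """Sketch of the computation described for submodule 32232 (names are illustrative)."""
    remaining = fifth_threshold - actual_sum - second_preset_value  # fourth intermediate value (assumed form)
    capped = min(ideal_concurrency, remaining)                      # fifth intermediate value
    floored = max(third_preset_value, capped)                       # sixth intermediate value
    return min(third_threshold, floored)                            # expected concurrency
```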
In an example, the second determining subunit 322 further includes:
an eighth determining module 3224 is configured to, when it is determined that the actual total concurrency is greater than or equal to the fourth concurrency threshold, determine that the resources in the current transmitter may not support transmission of the task to be transmitted at the second priority level, and determine to compare the actual total concurrency with the first concurrency threshold again after a preset time.
In an example, the second determining subunit 322 further includes:
a ninth determining module 3225, configured to determine that resources in the current transmitter may support transmission of the task to be transmitted at the second priority level when it is determined that the actual total concurrency is smaller than the fourth concurrency threshold; and when the current transmitter is determined not to have the task to be transmitted with the second priority level, determining that the current transmitter does not have the task to be transmitted, and determining that the actual concurrency sum and the first concurrency threshold are compared again after the preset time.
In an example, the data transmission apparatus based on a transmitter provided in this embodiment further includes:
a second obtaining unit 41, configured to obtain a file size of each data file of each task in the current transmitter.
And a second determining unit 42, configured to determine an ideal concurrency degree of each task according to the file size of each data file of each task, where the ideal concurrency degree represents a minimum number of resources that can be used by the task, and a transmission time of the task at the ideal concurrency degree is minimum.
In one example, the ideal concurrency is the total number of resources that the data files in a task may occupy; the second determining unit 42 includes:
The third determining subunit 421 is configured to determine, according to the file size of each data file of each task, the header files and the non-header files in each task, where the file size of each header file is greater than or equal to a preset threshold, the file size of each non-header file is smaller than the preset threshold, the preset threshold is the product of a preset first parameter and a preset second parameter, and the second parameter is the file size of the largest data file in the task.
The fourth determining subunit 422 is configured to determine that each header file occupies one resource, and determine the number of resources occupied by each non-header file according to the sum of the file sizes of the non-header files.
In an example, when the fourth determining subunit 422 determines, according to the sum of the file sizes of the non-header files, the number of resources of the resources occupied by the non-header files, it is specifically configured to:
and determining the ratio of the sum of the file sizes of the non-header files and the second parameter, which is the resource number of the resources occupied by the non-header files.
The data transmission device based on the transmitter in this embodiment may execute the technical solutions in the embodiments of fig. 2 to fig. 3, and the specific implementation process and the technical principle are the same, and are not described herein again.
Fig. 12 is a schematic view of another application scenario of the embodiment of the present application; in the scenario shown in fig. 12, a plurality of transmitters are disposed in one machine room. Fig. 13 is a schematic view of another application scenario of the embodiment of the present application; as shown in fig. 13, there are different machine rooms (physically separated), and a plurality of transmitters are provided in each machine room.
The overall concurrency resources within one machine room are limited. The same amount of concurrency resources could be allocated to every transmitter so that each transmitter completes its own transmission tasks, which guarantees the concurrency resources allocated to each transmitter. However, because the task volume and task conditions differ between transmitters, distributing concurrency resources equally can lead to unreasonable resource allocation: some transmitters cannot use their concurrency resources, so concurrency is wasted, while other transmitters have insufficient concurrency resources and cannot transmit data in time.
After inventive work, the inventors of the present application arrived at the concepts of the present application for a method, apparatus, device and storage medium for multi-transmitter-based concurrency allocation: the transmitter holding a task to be transmitted at the first priority level (a high-priority task) is guaranteed more bandwidth resources; more bandwidth resources are allocated to the high-priority transmitter, so that tasks in the high-priority transmitter are transmitted preferentially and quickly; and bandwidth resources are still allocated to the normal-state transmitters, so that the tasks in the normal-state transmitters are guaranteed to be transmitted.
Fig. 14 is a schematic diagram according to a fifth embodiment of the present application, and as shown in fig. 14, the method for allocating concurrency based on multiple transmitters according to the present embodiment includes:
301. acquiring attribute information of each transmitter in a machine room to which the current transmitter belongs, wherein the attribute information comprises a state identifier and the current actual network speed of the transmitter; the state identifier is used for representing that the transmitter is in an idle state, a high-priority state or a common state, the transmitter in the idle state is not transmitting a task, the transmitter in the high-priority state is transmitting a task with a priority higher than a preset priority threshold, and the transmitter in the common state is transmitting a task with a priority lower than the preset priority threshold.
Illustratively, the execution subject of this embodiment may be a transmitter, or a multi-transmitter-based concurrency allocation apparatus or device, or another apparatus or device that can execute the method of this embodiment. This embodiment is described with a transmitter as the execution subject.
On the basis of the above-mentioned embodiments, a plurality of transmitters are provided in one machine room. In one example, each transmitter in the same machine room freely competes for the outlet bandwidth of the machine room; for example, with a static, average policy, a fixed set of 5 transmission channels is maintained, each channel transmitting tasks with a fixed concurrency of 20 (i.e., 20 threads or processes); further, the tasks to be transmitted form a queue and enter any one of the transmission channels in a certain order to complete transmission. In this example, the transmitters in the same machine room have the same concurrency (i.e., the same number of threads or processes), and each transmitter is given the same bandwidth resources.
However, in the above example, the outlet bandwidth of the machine room is limited, and sharing the concurrency resources equally may cause unreasonable resource allocation: some transmitters cannot use their concurrency resources, so concurrency is wasted, while other transmitters have insufficient concurrency resources and cannot transmit data in time. Moreover, when multiple transmitters in the machine room transmit data simultaneously and the transmission load is heavy, the transmitters contend for the outlet bandwidth of the machine room.
Each task may be configured with a priority, e.g., the task is a high-priority task or a normal task. Tasks to be transmitted at the first priority level (high-priority tasks) may exist only on some of the transmitters, so unified bandwidth allocation needs to be performed across the transmitters in the same machine room, and the transmitters holding tasks to be transmitted at the first priority level (high-priority tasks) can be allocated more bandwidth.
In order to perform unified bandwidth allocation across all the transmitters in the same machine room, so that the transmitters holding tasks to be transmitted at the first priority level (high-priority tasks) can be allocated more bandwidth, the total concurrency threshold of each transmitter can be dynamically adjusted in real time. This embodiment provides a bandwidth allocation strategy between transmitters in the same machine room, which dynamically adjusts the total concurrency threshold of each transmitter in real time, raises the total concurrency threshold of transmitters that hold tasks to be transmitted at the first priority level (high-priority tasks), and reduces the total concurrency threshold of transmitters that do not hold such tasks; this ensures that the transmitter holding a task to be transmitted at the first priority level (high-priority task) obtains more bandwidth resources, that is, the task to be transmitted at the first priority level (high-priority task) is guaranteed more bandwidth resources and is transmitted quickly.
Each time the total concurrency threshold of the transmitter is adjusted in real time, the current transmitter first acquires the attribute information of each transmitter in the machine room to which the current transmitter belongs; the attribute information includes a state identifier and the current actual network speed of the transmitter.
The state identifier is used to indicate that the transmitter is in one of an idle state, a high-priority state or a normal state. If the transmitter is in the idle state, the transmitter is not transmitting any task; if the transmitter is in the high-priority state, the transmitter is transmitting tasks and the priority of the tasks being transmitted is higher than a preset priority threshold (that is, the tasks being transmitted are mostly tasks at the first priority level); if the transmitter is in the normal state, the transmitter is transmitting tasks and the priority of the tasks being transmitted is lower than the preset priority threshold (that is, the tasks being transmitted are mostly tasks at the second priority level, where the first priority level is higher than the second priority level).
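As an illustration only, a transmitter might derive its state identifier from the priorities of the tasks it is currently transmitting as in the sketch below; the enum, function name and the majority rule used to decide "mostly high-priority" are assumptions, not specified by the patent.

```python
from enum import Enum

class TransmitterState(Enum):
    IDLE = "idle"
    HIGH_PRIORITY = "high_priority"
    NORMAL = "normal"

def classify_transmitter(transmitting_task_priorities, priority_threshold):
    """Derive the state identifier from the priorities of the tasks being transmitted (illustrative rule)."""
    if not transmitting_task_priorities:
        return TransmitterState.IDLE
    # Count tasks whose priority exceeds the preset priority threshold.
    high = sum(1 for p in transmitting_task_priorities if p > priority_threshold)
    # Assumed rule: "mostly high-priority" means at least half of the tasks are high-priority.
    if high >= len(transmitting_task_priorities) - high:
        return TransmitterState.HIGH_PRIORITY
    return TransmitterState.NORMAL
```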
302. And determining the network speed limit of the current transmitter according to the state identifier of each transmitter.
For example, since each transmitter in the same machine room has its own state, that is, each transmitter has its own state identifier, different network speed limits are configured for transmitters in different states. The network speed limit characterizes the maximum network speed that the transmitter is allowed to reach.
For example, the network speed limit of a transmitter in the idle state is smaller than that of a transmitter in the normal state, and the network speed limit of a transmitter in the normal state is smaller than that of a transmitter in the high-priority state.
303. And determining the total concurrency threshold of the current transmitter according to the current actual network speed and the limited network speed of the current transmitter, wherein the total concurrency threshold is used for data transmission of the current transmitter.
In one example, the total threshold of concurrency is an upper bound of the actual sum of concurrency, which is the sum of the actual concurrency of each task in the transmitter.
Illustratively, since each transmitter is running, each transmitter has an actual network speed, so the current transmitter can read its own current actual network speed. The network speed limit of the current transmitter represents the maximum network speed the transmitter may reach, while the current actual network speed represents the speed the transmitter actually has at the moment; the total concurrency threshold of the current transmitter can therefore be calculated from the current actual network speed and the network speed limit of the current transmitter. In one example, some intermediate value between the current actual network speed and the limited network speed may be used to determine the total concurrency threshold of the current transmitter.
Then, since the total threshold of the concurrency degree of the current transmitter indicates the sum of all the available concurrency degree resources of the current transmitter, the current transmitter may allocate the concurrency degree (i.e., the concurrency degree resources) to the task to be transmitted in the current transmitter according to the total threshold of the concurrency degree of the current transmitter. In one example, the current transmitter may allocate resources (i.e., concurrency resources) for each task to be transmitted according to the priority level of each task to be transmitted.
Then, the current transmitter transmits each task to be transmitted respectively according to the resources allocated to each task to be transmitted.
Through steps 301 to 303, different network speed limits are configured for transmitters in different states, so that different total concurrency thresholds are configured for transmitters in different states: the total concurrency threshold of transmitters holding tasks at the first priority level (high-priority tasks) is raised, and the total concurrency threshold of transmitters without such tasks (ordinary tasks) is reduced; the total concurrency threshold of each transmitter can thus be dynamically adjusted in real time. This ensures that the transmitter holding a task to be transmitted at the first priority level (high-priority task) obtains more bandwidth resources, that is, the task to be transmitted at the first priority level (high-priority task) is guaranteed more bandwidth resources and is transmitted quickly.
In one example, after step 303 is performed, the process of the embodiment of fig. 2 or fig. 3 may also be performed.
In this embodiment, the current transmitter determines its network speed limit according to the state identifier of each transmitter in the same machine room, and then determines its total concurrency threshold according to its current actual network speed and network speed limit. The total concurrency threshold of each transmitter can thus be allocated according to the task state and task conditions in that transmitter, so that more bandwidth resources are allocated to the high-priority transmitters and the tasks in them are transmitted preferentially and quickly, while bandwidth resources are still allocated to the normal-state transmitters so that their tasks are guaranteed to be transmitted. The total concurrency threshold of each transmitter can be dynamically adjusted in real time: the total concurrency threshold of transmitters holding tasks at the first priority level (high-priority tasks) is raised, and the total concurrency threshold of transmitters without such tasks (ordinary tasks) is reduced. This ensures that the transmitter holding a task to be transmitted at the first priority level (high-priority task) obtains more bandwidth resources, that is, such a task is guaranteed more bandwidth resources and is transmitted quickly; in other words, concurrency resources are guaranteed to be allocated to the task to be transmitted at the first priority level (high-priority task), so that the high-priority task occupies more bandwidth resources and is transmitted preferentially and rapidly. Bandwidth is allocated for each transmitter in the same machine room, all tasks in the machine room are guaranteed to maintain high-speed transmission, bandwidth resources are fully utilized, idle and wasted bandwidth is avoided, and the transmission throughput within the machine room is maximized.
Fig. 15 is a schematic diagram according to a sixth embodiment of the present application, and as shown in fig. 15, the method for allocating concurrency based on multiple transmitters according to the present embodiment includes:
401. acquiring attribute information of each transmitter in a machine room to which the current transmitter belongs, wherein the attribute information comprises a state identifier and the current actual network speed of the transmitter; the state identifier is used for representing that the transmitter is in an idle state, a high-priority state or a common state, the transmitter in the idle state is not transmitting a task, the transmitter in the high-priority state is transmitting a task with a priority higher than a preset priority threshold, and the transmitter in the common state is transmitting a task with a priority lower than the preset priority threshold. The attribute information also includes the network card bandwidth of the transmitter.
Illustratively, the execution subject of this embodiment may be a transmitter, or a transmitter-based data transmission apparatus or device, or another apparatus or device that can execute the method of this embodiment. This embodiment is described with a transmitter as the execution subject.
This step may refer to step 301 shown in fig. 14 and is not described again. Different from step 301, the attribute information of the transmitter further includes the network card bandwidth of the transmitter, where the network card bandwidth is a fixed value and represents the bandwidth that the transmitter's network card can support.
402. And when the current transmitter is determined to meet the free competition requirement according to the state identifier of each transmitter, determining that the current transmitter competes in a free competition mode to determine the network speed limit.
In one example, step 402 specifically includes: if the current transmitter is determined to meet the first preset condition, determining the network card bandwidth of the current transmitter as the limited network speed of the current transmitter; the first preset condition is that the current actual total bandwidth of the machine room to which the current transmitter belongs is smaller than a preset bandwidth threshold, or the state identifier of the current transmitter is in a high-priority state, or the number of the transmitters in the high-priority state in the machine room to which the current transmitter belongs is zero; the bandwidth threshold is the product of the total outlet bandwidth of the machine room to which the current transmitter belongs and a preset proportional value.
For example, when the network speed limit of the current transmitter is determined, the state identifier of each transmitter in the machine room to which the current transmitter belongs is analyzed, or the state identifier of the current transmitter is analyzed, to determine whether the current transmitter meets the free competition requirement. The free competition requirement is met, for example, when all the transmitters in the machine room to which the current transmitter belongs are in the high-priority state, or all the transmitters in the machine room to which the current transmitter belongs are in the normal state, or the current transmitter is in the high-priority state.
And if the current transmitter meets the free competition requirement, the current transmitter adopts a free competition mode to compete to determine the network speed limit. The "competing to determine the network speed limit by adopting a free competition mode" means that the transmitter is not limited and freely competes for the total bandwidth of the outlet of the machine room. Therefore, when all the transmitters in the machine room to which the current transmitter belongs are in the high-priority state, or when the current transmitter is in the high-priority state, the current transmitter preferentially competes for the total bandwidth of the outlet of the machine room, that is, the high-priority transmitter preferentially competes for the total bandwidth of the outlet of the machine room, and sufficient bandwidth can be guaranteed to be allocated to the high-priority transmitter. And when all the transmitters in the machine room to which the current transmitter belongs are in the common state, the common transmitter can also compete for the total bandwidth of the outlet of the machine room, so that the bandwidth of each common transmitter can be allocated.
In one example, when determining whether the current transmitter meets the free competition requirement according to the state identifier of each transmitter, the state identifiers or the bandwidths of the transmitters may be analyzed. If the current transmitter meets the first preset condition, the current transmitter determines its network speed limit by competing in the free competition mode; at this time, the network card bandwidth of the current transmitter is directly used as the network speed limit of the current transmitter.
For example, for the ith transmitter in the machine room, limit_speed(i) = bandwidth_machine(i), where bandwidth_machine(i) is the network card bandwidth of the ith transmitter and is a fixed value. Here, i is a positive integer greater than or equal to 1 and less than or equal to P, and P is the total number of all the transmitters in one machine room; "all the transmitters" includes high-priority transmitters, normal-state transmitters and idle transmitters.
The first preset condition includes the following implementation manners.
The first implementation mode comprises the following steps: if the current transmitter is a transmitter in the high-priority state, that is, the state identifier of the current transmitter is the high-priority state, it is determined that the current transmitter meets the first preset condition. At this time, the current transmitter determines its network speed limit by competing in the free competition mode, and the network card bandwidth of the current transmitter is directly used as its network speed limit. This further ensures that sufficient bandwidth is first allocated to the high-priority transmitter.
The second implementation mode comprises the following steps: the number of the transmitters in the high-priority state in the machine room to which the current transmitter belongs is zero, that is, the transmitters in the machine room to which the current transmitter belongs are all in the normal state or the idle state (there is no transmitter in the high-priority state in the machine room), and it is determined that the current transmitter meets the first preset condition. At this time, the current transmitter determines its network speed limit by competing in the free competition mode, and the network card bandwidth of the current transmitter is directly used as its network speed limit.
The third implementation mode comprises the following steps: if the current actual total bandwidth of the machine room to which the current transmitter belongs is smaller than a preset bandwidth threshold, where the bandwidth threshold is related to the total outlet bandwidth of the machine room, it is determined that the current transmitter meets the first preset condition. At this time, the current transmitter determines its network speed limit by competing in the free competition mode, and the network card bandwidth of the current transmitter is directly used as its network speed limit. In one example, the preset bandwidth threshold is the product of the total outlet bandwidth of the machine room to which the current transmitter belongs and a preset proportional value; the preset proportional value is, for example, 0.5.
The fourth implementation mode comprises the following steps: if the transmitters in the machine room to which the current transmitter belongs are all high-priority transmitters, it is determined that the current transmitter meets the first preset condition. At this time, the current transmitter determines its network speed limit by competing in the free competition mode, and the network card bandwidth of the current transmitter is directly used as its network speed limit. The bandwidth of each high-priority transmitter is thus not limited, and the high-priority transmitters freely compete for bandwidth.
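A sketch of the first preset condition check is given below. The function name, dictionary keys and the default proportional value of 0.5 are illustrative assumptions; the 0.5 follows the example value mentioned above.

```python
def meets_first_preset_condition(current, room, proportional_value=0.5):
    """current: info about the current transmitter; room: info about its machine room (names assumed)."""
    # Bandwidth threshold = total outlet bandwidth of the machine room * preset proportional value.
    bandwidth_threshold = room["outlet_total_bandwidth"] * proportional_value
    return (
        room["current_actual_total_bandwidth"] < bandwidth_threshold  # room still has spare bandwidth
        or current["state"] == "high_priority"                        # current transmitter is high-priority
        or room["num_high_priority_transmitters"] == 0                # no high-priority transmitter in the room
    )
```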
403. And when the current transmitter is determined not to meet the free competition requirement according to the state identifier of each transmitter, determining the limited network speed of the current transmitter according to the network card bandwidth of the current transmitter.
In one example, the specific implementation of step 403 includes: if the current transmitter is determined to meet the second preset condition, determining the network speed limit of the current transmitter according to the network card bandwidth of the current transmitter, the total outlet bandwidth of the machine room to which the current transmitter belongs, the current actual network speed of the high-priority transmitters in the machine room to which the current transmitter belongs, and the total number of the normal-state transmitters in the machine room to which the current transmitter belongs; the second preset condition is that the current actual total bandwidth of the machine room to which the current transmitter belongs is greater than or equal to a preset bandwidth threshold, or the state identifier of the current transmitter is not the high-priority state, or the number of the transmitters in the high-priority state in the machine room to which the current transmitter belongs is not zero; the bandwidth threshold is the product of the total outlet bandwidth of the machine room to which the current transmitter belongs and a preset proportional value.
Illustratively, if the current transmitter does not meet the free competition requirement, the current transmitter does not determine its network speed limit through free competition; instead, it determines its network speed limit according to its own network card bandwidth. The network speed limit of the current transmitter can be constrained (i.e. reduced) based on its network card bandwidth. Because the network speed of the transmitter is limited, the bandwidth of the normal-state transmitters can be limited, and the high-priority transmitters are preferentially guaranteed to occupy the outlet bandwidth of the machine room.
In one example, when determining whether the current transmitter meets the free competition requirement according to the state identifier of each transmitter, the state identifiers or the bandwidths of the transmitters may be analyzed. If the current transmitter meets the second preset condition, the current transmitter cannot determine its network speed limit through free competition; instead, it limits its network speed according to its own network card bandwidth. The current transmitter can read the current actual network speed of the high-priority transmitters in the machine room to which it belongs and the total number of normal-state transmitters in that machine room; the current transmitter then calculates its network speed limit according to its network card bandwidth, the total outlet bandwidth of the machine room to which it belongs, the current actual network speed of the high-priority transmitters in the machine room, and the total number of the normal-state transmitters in the machine room.
The second preset condition includes the following implementation manners.
The first implementation mode comprises the following steps: if the current transmitter is not a transmitter in the high-priority state, that is, the state identifier of the current transmitter is not the high-priority state, it is determined that the current transmitter meets the second preset condition. At this time, the network speed of the current transmitter is limited, so that the bandwidth resources allocated to the current transmitter are limited. This further ensures that sufficient bandwidth is allocated to the high-priority transmitters.
The second implementation mode comprises the following steps: if the number of the transmitters in the high-priority state in the machine room to which the current transmitter belongs is not zero, that is, there are other transmitters in the high-priority state in the machine room, it is determined that the current transmitter meets the second preset condition. At this time, the network speed of the current transmitter is limited, so that the bandwidth resources allocated to the current transmitter are limited. This further ensures that the other high-priority transmitters are allocated sufficient bandwidth.
The third implementation mode comprises the following steps: if the current actual total bandwidth of the machine room to which the current transmitter belongs is greater than or equal to a preset bandwidth threshold, where the bandwidth threshold is related to the total outlet bandwidth of the machine room, it is determined that the current transmitter meets the second preset condition. At this time, the current transmitter needs to limit its network speed. In one example, the preset bandwidth threshold is the product of the total outlet bandwidth of the machine room to which the current transmitter belongs and a preset proportional value; the preset proportional value is, for example, 0.5.
In an example, fig. 16 is a schematic diagram of step 403 in the sixth embodiment of the present application; as shown in fig. 16, the specific implementation process of step 403 includes the following steps 4031 to 4034:
4031. and if the current transmitter is determined to meet the second preset condition, determining a first bandwidth parameter according to the current actual network speed and the network card bandwidth of each high-priority transmitter in the machine room to which the current transmitter belongs, wherein the first bandwidth parameter is the sum of the minimum bandwidths reserved for all the high-priority transmitters in the machine room to which the current transmitter belongs.
In one example, step 4031 specifically includes: determining a network speed ratio value of each high-priority transmitter according to the current actual network speed of each high-priority transmitter in the machine room to which the current transmitter belongs, where the network speed ratio value is the product of the current actual network speed and a fifth preset value, and the fifth preset value is a positive number greater than 1; determining the minimum value between the network speed ratio value of each high-priority transmitter and its network card bandwidth as the limited bandwidth of that high-priority transmitter; and determining the sum of the limited bandwidths of the high-priority transmitters as the first bandwidth parameter.
For example, if the current transmitter determines that it meets the second preset condition, its limited bandwidth needs to be calculated. The current transmitter can read the information of each high-priority transmitter in the machine room to which it belongs, and thus read the current actual network speed and the network card bandwidth of each high-priority transmitter; the current actual network speed of each high-priority transmitter can be the actual average network speed (in MB/s) of that transmitter over the most recent 30 s, and the network card bandwidth of each high-priority transmitter is a fixed value.
The current transmitter calculates a first bandwidth parameter according to the current actual network speed and the network card bandwidth of each high-priority transmitter, wherein the first bandwidth parameter is the sum of minimum bandwidths reserved for all high-priority transmitters in a machine room to which the current transmitter belongs.
In one example, when the current transmitter calculates the first bandwidth parameter, for each high-priority transmitter in the machine room to which the current transmitter belongs, the current transmitter multiplies the current actual network speed of that high-priority transmitter by a fifth preset value, where the fifth preset value is a positive number greater than 1, to obtain the network speed ratio value of that high-priority transmitter. For example, the network speed ratio value of the jth high-priority transmitter is real_speed(j) × 1.5, where real_speed(j) is the current actual network speed of the jth high-priority transmitter and 1.5 is the fifth preset value. Here, j is a positive integer greater than or equal to 1 and less than or equal to Q, and Q is the total number of high-priority transmitters in one machine room.
Then, for each high-priority transmitter in the machine room to which the current transmitter belongs, the current transmitter takes the minimum value between the network speed ratio value of that high-priority transmitter and its network card bandwidth to obtain the limited bandwidth of that high-priority transmitter. In other words, for each high-priority transmitter, the limited bandwidth is the minimum of 1.5 times its actual network speed and its network card bandwidth.
For example, the limited bandwidth of the jth high-priority transmitter is min(bandwidth_machine(j), real_speed(j) × 1.5), where bandwidth_machine(j) is the network card bandwidth of the jth high-priority transmitter and real_speed(j) × 1.5 is its network speed ratio value.
Then, the current transmitter sums the limited bandwidths of all high-priority transmitters to obtain the first bandwidth parameter. Calculating the sum of the minimum bandwidths that can be reserved for all high-priority transmitters makes it convenient to determine the limited bandwidth of each high-priority transmitter subsequently, so that enough bandwidth is reserved for each high-priority transmitter. This "sum of minimum bandwidths" does not include bandwidth the high-priority transmitters obtain through contention; that is, the first bandwidth parameter characterizes the sum of the minimum bandwidths that can be reserved for all high-priority transmitters. For example, the first bandwidth parameter is sigma(j in machine_priority, min(bandwidth_machine(j), real_speed(j) × 1.5)), where sigma is a summation formula; machine_priority denotes the set of high-priority transmitters in the same machine room, and "j in machine_priority" denotes the jth high-priority transmitter among all high-priority transmitters in the same machine room.
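A sketch of the first bandwidth parameter (the sigma expression above) is shown below, assuming each high-priority transmitter is represented by a dictionary holding its current actual network speed and its network card bandwidth; the function name, keys and default fifth preset value of 1.5 are illustrative.

```python
def first_bandwidth_parameter(high_priority_transmitters, fifth_preset_value=1.5):
    """Sum of the minimum bandwidths reserved for all high-priority transmitters in the machine room."""
    return sum(
        min(t["bandwidth_machine"], t["real_speed"] * fifth_preset_value)
        for t in high_priority_transmitters
    )
```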
4032. And determining a second bandwidth parameter according to the first bandwidth parameter and the total bandwidth of the outlet of the machine room to which the current transmitter belongs, wherein the second bandwidth parameter is the sum of the maximum bandwidths reserved for all the common transmitters in the machine room to which the current transmitter belongs.
In one example, step 4032 specifically includes: subtracting the first bandwidth parameter from the total outlet bandwidth of the machine room to which the current transmitter belongs to obtain the second bandwidth parameter.
Illustratively, after step 4031, the current transmitter has obtained the first bandwidth parameter, which characterizes the sum of the minimum bandwidths that can be reserved for all high-priority transmitters. In addition, the current transmitter can acquire the total outlet bandwidth of the machine room to which it belongs, where the total outlet bandwidth is a fixed value. The current transmitter subtracts the first bandwidth parameter from the total outlet bandwidth to obtain the sum of the maximum bandwidths reserved for all normal-state transmitters in the machine room, that is, the second bandwidth parameter. The second bandwidth parameter is the bandwidth cap of all normal-state transmitters. Subtracting the first bandwidth parameter from the total outlet bandwidth removes the bandwidth reserved for all high-priority transmitters, so that enough bandwidth remains reserved for them and the bandwidth resources of the high-priority transmitters are guaranteed.
For example, the second bandwidth parameter is bandwidth_idc − sigma(j in machine_priority, min(bandwidth_machine(j), real_speed(j) × 1.5)), where bandwidth_idc is the total outlet bandwidth of the machine room to which the current transmitter belongs, and sigma(j in machine_priority, min(bandwidth_machine(j), real_speed(j) × 1.5)) is the first bandwidth parameter.
4033. And determining a third bandwidth parameter according to the second bandwidth parameter and the total number of the common-state transmitters in the machine room to which the current transmitter belongs, wherein the third bandwidth parameter is the maximum bandwidth reserved for each common-state transmitter in the machine room to which the current transmitter belongs.
In one example, step 4033 specifically includes: dividing the second bandwidth parameter by a sixth preset value to obtain the third bandwidth parameter, where the sixth preset value is the sum of the total number of the normal-state transmitters in the machine room to which the current transmitter belongs and a seventh preset value, and the seventh preset value is a positive number greater than 0.
Illustratively, after step 4032, the current transmitter obtains a second bandwidth parameter. The second bandwidth parameter is the bandwidth upper limit of all the ordinary transmitters; and the current transmitter calculates a third bandwidth parameter according to the second bandwidth parameter and the total number of the ordinary transmitters in the machine room to which the current transmitter belongs. For example, the second bandwidth parameter is divided by "the total number of ordinary transmitters in the machine room to which the current transmitter belongs" to obtain a third bandwidth parameter.
The third bandwidth parameter represents a maximum bandwidth reserved for each common-state transmitter in the machine room to which the current transmitter belongs. And reserving bandwidth for the ordinary transmitter, and ensuring that the ordinary transmitter is allocated with bandwidth resources.
In one example, when the third bandwidth parameter is calculated, the current transmitter sums the total number of the transmitters in the common state in the machine room to which the current transmitter belongs and a seventh preset value to obtain a sixth preset value; wherein the seventh preset value is a positive number greater than 0. And then, the current transmitter divides the second bandwidth parameter by a sixth preset value to obtain a third bandwidth parameter.
For example, the sixth preset value is (0.1 + |machine_ordinary|), where |machine_ordinary| is the total number of normal-state transmitters in the machine room to which the current transmitter belongs and 0.1 is the seventh preset value. The third bandwidth parameter is (bandwidth_idc − sigma(j in machine_priority, min(bandwidth_machine(j), real_speed(j) × 1.5))) / (0.1 + |machine_ordinary|), and it is the bandwidth upper limit of each normal-state transmitter; here, bandwidth_idc − sigma(j in machine_priority, min(bandwidth_machine(j), real_speed(j) × 1.5)) is the second bandwidth parameter.
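Continuing the sketch with the same illustrative names, the second and third bandwidth parameters of steps 4032 and 4033 follow directly; the default seventh preset value of 0.1 matches the example above.

```python
def third_bandwidth_parameter(outlet_total_bandwidth, first_bw_param,
                              num_normal_transmitters, seventh_preset_value=0.1):
    # Second bandwidth parameter: bandwidth left over for all normal-state transmitters.
    second_bw_param = outlet_total_bandwidth - first_bw_param
    # Sixth preset value: number of normal-state transmitters plus a small constant,
    # which also avoids division by zero when there are no normal-state transmitters.
    sixth_preset_value = seventh_preset_value + num_normal_transmitters
    # Third bandwidth parameter: bandwidth cap for each individual normal-state transmitter.
    return second_bw_param / sixth_preset_value
```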
4034. And determining the network speed limit of the current transmitter according to a fourth preset value, the network card bandwidth of the current transmitter and the third bandwidth parameter, wherein the network speed limit of the current transmitter is greater than or equal to the fourth preset value, the network speed limit of the current transmitter is less than or equal to the network card bandwidth of the current transmitter, and the fourth preset value is a positive number greater than zero.
In one example, step 4034 specifically includes: determining the minimum value between the network card bandwidth of the current transmitter and the third bandwidth parameter as a fourth bandwidth parameter; and determining the maximum value between the fourth preset value and the fourth bandwidth parameter as the limited network speed of the current transmitter.
Illustratively, after the step 4033, the current transmitter obtains a third bandwidth parameter; and the third bandwidth parameter is characterized by the maximum bandwidth reserved for each common-state transmitter in the machine room to which the current transmitter belongs. And the current transmitter can directly acquire the network card bandwidth of the current transmitter, and the network card bandwidth of the current transmitter is a fixed value.
Then, the current transmitter determines its network speed limit according to the fourth preset value, the network card bandwidth of the current transmitter and the third bandwidth parameter. The fourth preset value is a positive number greater than zero. The obtained network speed limit is greater than or equal to the fourth preset value and less than or equal to the network card bandwidth of the current transmitter.
In one example, the current transmitter takes the minimum value between its network card bandwidth and the third bandwidth parameter as the fourth bandwidth parameter; taking this minimum ensures that the network speed limit of the current transmitter cannot be too large. Then, the current transmitter takes the maximum value between the fourth preset value and the fourth bandwidth parameter as its network speed limit; taking this maximum ensures that the network speed limit of the current transmitter cannot be too small.
For example, the network speed limit is max(1, min(bandwidth_machine(i), third bandwidth parameter)), where bandwidth_machine(i) is the network card bandwidth of the ith transmitter (i.e. the current transmitter) and 1 is the fourth preset value. min(bandwidth_machine(i), third bandwidth parameter) ensures that the network speed limit of the current transmitter cannot be too large, and max ensures that it cannot be too small: even if min(bandwidth_machine(i), third bandwidth parameter) is a very small value or a negative number, the network speed limit of the current transmitter is still guaranteed to be greater than or equal to 1 (i.e., the fourth preset value).
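Putting steps 4031 to 4034 together, and reusing the helper functions sketched above under the same illustrative assumptions, the network speed limit of a transmitter that does not win free competition could be computed as follows.

```python
def limit_speed_normal_transmitter(bandwidth_machine_i, high_priority_transmitters,
                                   outlet_total_bandwidth, num_normal_transmitters,
                                   fourth_preset_value=1):
    """Sketch of steps 4031-4034 for the current (non-free-competition) transmitter."""
    first_bw = first_bandwidth_parameter(high_priority_transmitters)
    third_bw = third_bandwidth_parameter(outlet_total_bandwidth, first_bw,
                                         num_normal_transmitters)
    # min(...) keeps the limit from exceeding the network card bandwidth;
    # max(...) keeps it from collapsing to zero or a negative value.
    return max(fourth_preset_value, min(bandwidth_machine_i, third_bw))
```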
Through steps 4031 to 4034, the network speed limit of the current transmitter can be determined; the limited network speed is then used to generate the total concurrency threshold of the current transmitter. The network speed limit is computed from several parameters, which ensures that it is neither too large nor too small.
404. And determining a first concurrency threshold value of the current transmitter according to the current actual network speed and the limited network speed of the current transmitter, wherein the first concurrency threshold value represents the available concurrency of the current transmitter under the limited network speed.
In one example, step 404 specifically includes:
the first step of step 404 is to determine a network speed limit ratio of the current transmitter according to the limited network speed of the current transmitter and the current actual network speed of the current transmitter, wherein the network speed limit ratio is a ratio between the limited network speed and the current actual network speed.
And a second step of step 404, determining a first concurrency threshold value of the current transmitter according to the network speed limit proportion of the current transmitter and the actual concurrency sum of the current transmitter, wherein the actual concurrency sum is the actual concurrency sum of each task in the transmission state in the current transmitter.
Illustratively, since each transmitter is running, each transmitter has an actual network speed, so the current transmitter can read its own current actual network speed. The network speed limit of the current transmitter represents the maximum network speed the transmitter may reach, while the current actual network speed represents the speed the transmitter actually has at the moment; the total concurrency threshold of the current transmitter can therefore be calculated from the current actual network speed and the network speed limit of the current transmitter.
The current transmitter can limit the available concurrency of the current transmitter according to the current actual network speed and the limited network speed of the current transmitter, and calculate a first concurrency threshold of the current transmitter; that is, the first concurrency threshold characterizes the available concurrency for the current transmitter at the limited network speed.
In one example, the current transmitter divides its network speed limit by its current actual network speed to obtain its network speed limit ratio. Then, the current transmitter multiplies the network speed limit ratio by its actual concurrency sum to obtain the first concurrency threshold of the current transmitter; the first concurrency threshold is an integer. The available concurrency of the current transmitter is thus obtained, which is then used to determine the total concurrency threshold of the current transmitter.
For example, the limited network speed limit_speed(i) of the ith transmitter is divided by the current actual network speed real_speed(i) of the ith transmitter, the result is multiplied by the actual concurrency sum real_concurrent_num(i) of the ith transmitter, and the product is rounded down to obtain the first concurrency threshold of the ith transmitter: int(limit_speed(i) / real_speed(i) × real_concurrent_num(i)), where int is a round-down function.
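A sketch of this first concurrency threshold of step 404 is shown below; the function name and the guard for a zero actual network speed are illustrative assumptions.

```python
def first_concurrency_threshold(limit_speed_i, real_speed_i, real_concurrent_num_i):
    """Available concurrency of the transmitter at its limited network speed."""
    if real_speed_i <= 0:
        return 0  # assumed guard against division by zero when the transmitter is not transmitting
    # Scale the current actual concurrency sum by (limited speed / actual speed), rounded down.
    return int(limit_speed_i / real_speed_i * real_concurrent_num_i)
```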
405. And determining the total concurrency threshold of the current transmitter according to the first concurrency threshold of the current transmitter and a preset initial total concurrency threshold, where the total concurrency threshold of the current transmitter is less than or equal to the initial total concurrency threshold. The total concurrency threshold is an upper limit of the actual concurrency sum, which is the sum of the actual concurrency of each task.
In one example, step 405 specifically includes:
the first step of step 405 is to determine the minimum value between the first concurrency threshold and the initial total concurrency threshold, which is the second concurrency threshold.
And a second step of step 405, determining the maximum value between an eighth preset value and the second concurrency threshold as the total concurrency threshold of the current transmitter, where the eighth preset value is a number greater than or equal to zero.
Illustratively, after the first concurrency threshold of the current transmitter is obtained, a preset initial total concurrency threshold is used; the initial total concurrency threshold is a relatively large concurrency value. The current transmitter may use the initial total concurrency threshold and the first concurrency threshold to constrain its total concurrency threshold and obtain the final total concurrency threshold. During this constraint, the final total concurrency threshold needs to be controlled to be less than or equal to the initial total concurrency threshold.
In one example, the current transmitter takes the minimum value between its first concurrency threshold and the initial total concurrency threshold as its second concurrency threshold, which controls the total concurrency threshold of the current transmitter not to be too large. Then, the current transmitter takes the maximum value between the eighth preset value and the second concurrency threshold as its total concurrency threshold, which controls the total concurrency threshold of the current transmitter not to be too small. The eighth preset value is a number greater than or equal to zero.
For example, the second concurrency threshold of the ith transmitter is min(init_concurrency_threshold(i), int(limit_speed(i) / real_speed(i) × real_concurrent_num(i))), where int(limit_speed(i) / real_speed(i) × real_concurrent_num(i)) is the first concurrency threshold of the ith transmitter. The total concurrency threshold of the ith transmitter is total_threshold(i) = max(0, min(init_concurrency_threshold(i), int(limit_speed(i) / real_speed(i) × real_concurrent_num(i)))), where 0 is the eighth preset value.
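Building on the first-threshold sketch above, the total concurrency threshold of step 405 can be sketched as follows; the default eighth preset value of 0 follows the example formula, and the names remain illustrative.

```python
def total_concurrency_threshold(first_threshold, init_concurrency_threshold,
                                eighth_preset_value=0):
    # Second concurrency threshold: never exceed the preset initial total concurrency threshold.
    second_threshold = min(init_concurrency_threshold, first_threshold)
    # Total concurrency threshold: never fall below the eighth preset value.
    return max(eighth_preset_value, second_threshold)
```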
Then, since the total threshold of the concurrency degree of the current transmitter indicates the sum of all the available concurrency degree resources of the current transmitter, the current transmitter may allocate the concurrency degree (i.e., the concurrency degree resources) to the task to be transmitted in the current transmitter according to the total threshold of the concurrency degree of the current transmitter. In one example, the current transmitter may allocate resources (i.e., concurrency resources) for each task to be transmitted according to the priority level of each task to be transmitted.
Then, the current transmitter transmits each task to be transmitted respectively according to the resources allocated to each task to be transmitted.
Through steps 404 and 405, different network speed limits are configured for transmitters in different states, so that different total concurrency thresholds are configured for transmitters in different states: the total concurrency threshold of transmitters holding tasks at the first priority level (high-priority tasks) is raised, and the total concurrency threshold of transmitters without such tasks (ordinary tasks) is reduced; the total concurrency threshold of each transmitter can thus be dynamically adjusted in real time. This ensures that the transmitter holding a task to be transmitted at the first priority level (high-priority task) obtains more bandwidth resources, that is, the task to be transmitted at the first priority level (high-priority task) is guaranteed more bandwidth resources and is transmitted quickly.
Through steps 401 to 405, if the same machine room contains both high-priority-state transmitters and normal-state transmitters and the total bandwidth of the machine room is insufficient, the bandwidths of the normal-state and idle-state transmitters need to be limited (that is, the bandwidths of the non-high-priority transmitters are limited), and bandwidth is preferentially allocated to the high-priority-state transmitters, ensuring that they occupy the machine room outlet bandwidth. Through this combination of free competition and bandwidth limitation, sufficient bandwidth can be allocated to the high-priority-state transmitters and the remaining bandwidth can be allocated to the normal-state transmitters; in addition, the overall transmission speed of all transmitters in the machine room is reduced as little as possible.
In one example, after step 405 is performed, the processes of the embodiments of fig. 2 or fig. 3 may also be performed.
In this embodiment, on the basis of the above embodiment, the network speed limit of the current transmitter is determined by analyzing the state identifiers of all transmitters in the machine room to which the current transmitter belongs. Transmitters that meet the free competition requirement determine their limited network speed through free competition, and transmitters that do not meet the free competition requirement obtain their limited network speed through limitation; in the subsequent process, the high-priority-state transmitters are thereby guaranteed to compete preferentially for the total machine room outlet bandwidth, ensuring that sufficient bandwidth is allocated to them. Then, the total concurrency threshold of the current transmitter is allocated according to parameters such as its current actual network speed and limited network speed, and the total concurrency threshold is controlled so that it is neither too large nor too small. The high-priority-state transmitters are thus allocated sufficient bandwidth, while the normal-state transmitters are allocated the remaining bandwidth; in addition, the overall transmission speed of all transmitters in the machine room is reduced as little as possible. Furthermore, on the basis of the above embodiment, it is ensured that the high-priority-state transmitters and the high-priority tasks can occupy more bandwidth resources and that high-priority tasks are transmitted preferentially and quickly; meanwhile, all tasks in the machine room can maintain high-speed transmission, the transmitters can fully utilize bandwidth resources, no bandwidth resources are left idle, and the overall transmission throughput of the machine room is maximized.
Fig. 17 is a schematic diagram of a seventh embodiment of the present application, in which, as shown in fig. 17, the multi-transmitter-based concurrency allocation apparatus provided in the present embodiment is applicable to a transmitter; the concurrency allocation device based on multiple transmitters provided by the embodiment comprises:
a first obtaining unit 51, configured to obtain attribute information of each conveyor in a machine room to which a current conveyor belongs, where the attribute information includes a state identifier and a current actual network speed of the conveyor; the state identifier is used for representing that the transmitter is in an idle state, a high-priority state or a common state, the transmitter in the idle state is not transmitting a task, the transmitter in the high-priority state is transmitting a task with a priority higher than a preset priority threshold, and the transmitter in the common state is transmitting a task with a priority lower than the preset priority threshold.
The first determining unit 52 is configured to determine the network speed limit of the current transmitter according to the status identifier of each transmitter.
And the second determining unit 53 is configured to determine a total concurrency threshold of the current conveyor according to the current actual network speed and the limited network speed of the current conveyor.
In an example, the apparatus provided in this embodiment may further perform the technical solution of the embodiment shown in fig. 2 or fig. 3; the device provided by this embodiment may further include the device of the embodiment shown in fig. 10 or fig. 11.
The multi-transmitter-based concurrency allocation apparatus in this embodiment may execute the technical solutions in the embodiments of fig. 14 to fig. 15; the specific implementation process and the technical principle are the same and are not described here again.
Fig. 18 is a schematic diagram according to an eighth embodiment of the present application, and as shown in fig. 18, on the basis of the embodiment shown in fig. 17, in the concurrency allocation apparatus based on multiple transmitters provided in this embodiment, the attribute information further includes a network card bandwidth of the transmitter; the first determination unit 52 includes:
the first determining subunit 521 is configured to determine that the current transmitter contends in a free contention manner to determine the network speed limit when it is determined that the current transmitter meets the free contention requirement according to the state identifier of each transmitter.
And a second determining subunit 522, configured to determine, according to the status identifier of each transmitter, the network speed limit of the current transmitter according to the network card bandwidth of the current transmitter when it is determined that the current transmitter does not meet the free contention requirement.
In an example, the first determining subunit 521 is specifically configured to:
if the current transmitter is determined to meet the first preset condition, determining the network card bandwidth of the current transmitter as the limited network speed of the current transmitter; the first preset condition is that the current actual total bandwidth of the machine room to which the current transmitter belongs is smaller than a preset bandwidth threshold, or the state identifier of the current transmitter is in a high-priority state, or the number of the transmitters in the high-priority state in the machine room to which the current transmitter belongs is zero; the bandwidth threshold is the product of the total bandwidth of the outlet of the machine room to which the current conveyor belongs and a preset proportional value.
In an example, the second determining subunit 522 is specifically configured to:
if the current transmitter is determined to meet the second preset condition, determining the network speed limit of the current transmitter according to the network card bandwidth of the current transmitter, the total outlet bandwidth of the machine room to which the current transmitter belongs, the current actual network speed of the high-priority transmitter in the machine room to which the current transmitter belongs, and the total number of the common transmitters in the machine room to which the current transmitter belongs; the second preset condition is that the current actual total bandwidth of the machine room to which the current transmitter belongs is greater than or equal to a preset bandwidth threshold, or the state identifier of the current transmitter is not in a high-priority state, or the number of the transmitters in the high-priority state in the machine room to which the current transmitter belongs is not zero; the bandwidth threshold is the product of the total bandwidth of the outlet of the machine room to which the current conveyor belongs and a preset proportional value.
In one example, the second determining subunit 522 includes:
a first determining module 5221, configured to determine a first bandwidth parameter according to the current actual network speed and the network card bandwidth of each high-priority transmitter in the machine room to which the current transmitter belongs if it is determined that the current transmitter meets the second preset condition, where the first bandwidth parameter is a sum of minimum bandwidths reserved for all high-priority transmitters in the machine room to which the current transmitter belongs.
A second determining module 5222, configured to determine a second bandwidth parameter according to the first bandwidth parameter and the total outlet bandwidth of the machine room to which the current transmitter belongs, where the second bandwidth parameter is the sum of the maximum bandwidths reserved for all normal-state transmitters in the machine room to which the current transmitter belongs.
A third determining module 5223, configured to determine a third bandwidth parameter according to the second bandwidth parameter and the total number of the ordinary transmitters in the machine room to which the current transmitter belongs, where the third bandwidth parameter is a maximum bandwidth reserved for each ordinary transmitter in the machine room to which the current transmitter belongs.
A fourth determining module 5224, configured to determine the network speed limit of the current transmitter according to a fourth preset value, the network card bandwidth of the current transmitter, and the third bandwidth parameter, where the network speed limit of the current transmitter is greater than or equal to the fourth preset value, and the network speed limit of the current transmitter is less than or equal to the network card bandwidth of the current transmitter, and the fourth preset value is a positive number greater than zero.
In one example, the first determining module 5221 is specifically configured to:
determining a network speed ratio value of each high-priority transmitter according to the current actual network speed of each high-priority transmitter in the machine room to which the current transmitter belongs, wherein the network speed ratio value is the product of the current actual network speed and a fifth preset value, and the fifth preset value is a positive number greater than 1; determining the minimum value between the network speed ratio value of each high-priority transmitter and its network card bandwidth, and taking the minimum value as the limited bandwidth of each high-priority transmitter; and determining the sum of the limited bandwidths of the high-priority transmitters as the first bandwidth parameter.
In one example, the second determining module 5222 is specifically configured to: and subtracting the first bandwidth parameter from the total bandwidth of the outlet of the machine room to which the current conveyor belongs to obtain a second bandwidth parameter.
In one example, the third determining module 5223 is specifically configured to:
and dividing the second bandwidth parameter by a sixth preset value to obtain a third bandwidth parameter, wherein the sixth preset value is the sum of the total number of the normal transmitters in the machine room to which the current transmitter belongs and a seventh preset value, and the seventh preset value is a positive number greater than 0.
In one example, the fourth determining module 5224 is specifically configured to:
determining the minimum value between the network card bandwidth of the current transmitter and the third bandwidth parameter as a fourth bandwidth parameter; and determining the maximum value between the fourth preset value and the fourth bandwidth parameter as the limited network speed of the current transmitter.
In one example, the second determining unit 53 includes:
a third determining subunit 531, configured to determine a first concurrency threshold of the current transmitter according to the current actual network speed of the current transmitter and the network speed limit, where the first concurrency threshold represents an available concurrency of the current transmitter at the network speed limit.
A fourth determining subunit 532, configured to determine a total concurrency threshold of the current transmitter according to the first concurrency threshold of the current transmitter and a preset total initial concurrency threshold, where the total concurrency threshold of the current transmitter is less than or equal to the total initial concurrency threshold.
In one example, the third determining subunit 531 includes:
a fifth determining module 5311, configured to determine a network speed limitation ratio of the current transmitter according to the network speed limitation of the current transmitter and the current actual network speed of the current transmitter, where the network speed limitation ratio is a ratio between the network speed limitation and the current actual network speed.
A sixth determining module 5312, configured to determine the first concurrency threshold of the current transmitter according to the network speed limit ratio of the current transmitter and the sum of actual concurrency of the current transmitter, where the sum of actual concurrency is the sum of actual concurrency of each task in the transmission state in the current transmitter.
In one example, the fourth determining subunit 532 includes:
a seventh determining module 5321, configured to determine a minimum value between the first concurrency threshold and the initial total concurrency threshold as the second concurrency threshold;
an eighth determining module 5322, configured to determine a maximum value between an eighth preset value and the second concurrency threshold, where the maximum value is the total concurrency threshold of the current transmitter, and the eighth preset value is a number greater than or equal to zero.
In an example, the apparatus provided in this embodiment may further perform the technical solution of the embodiment shown in fig. 2 or fig. 3; the device provided by this embodiment may further include the device of the embodiment shown in fig. 10 or fig. 11.
The multi-transmitter-based concurrency allocation apparatus in this embodiment may execute the technical solutions in the embodiments of fig. 14 to fig. 15; the specific implementation process and the technical principle are the same and are not described here again.
Based on the application scenario shown in fig. 12 or fig. 13, a plurality of transmitters are provided in each machine room. A transmitter can allocate resources to complete data transmission; however, the main task of a transmitter is to transmit data, and a transmitter cannot know the transmission conditions of the other transmitters in the same machine room. All transmitters in the same machine room therefore compete together for the total bandwidth of the machine room, while the total concurrency resources in one machine room are limited; thus, a transmitter cannot allocate appropriate concurrency resources for itself on its own.
The inventor of the present application, after inventive work, has arrived at the idea of the present invention of a machine room system based conveyor processing method, system and storage medium: the method and the device realize that each transmitter is allocated with proper concurrency resources.
Fig. 19 is a schematic diagram according to a ninth embodiment of the present application, and as shown in fig. 19, the method for processing a conveyor based on a machine room system provided in this embodiment includes:
501. each transmitter sends a state identifier of the current transmitter to a server, wherein the state identifier is used for representing that the transmitter is in an idle state, a high-priority state or a common state; the transmission machine in the idle state is not transmitting the task, the transmission machine in the high-priority state is transmitting the task with the priority higher than the preset priority threshold, and the transmission machine in the common state is transmitting the task with the priority lower than the preset priority threshold.
For example, the method provided by the embodiment can be applied to a computer room system, wherein the computer room system comprises a server and at least one conveyor; in this embodiment, the server cooperates with each transmitter to complete the allocation of the concurrency resource to each transmitter.
In addition, similar to the embodiment provided in fig. 14, unified bandwidth allocation needs to be performed for all transmitters in the same machine room, so that the transmitters carrying tasks to be transmitted with the first priority level (high-priority tasks) can be allocated more bandwidth.
In order to perform unified bandwidth allocation for all transmitters in the same machine room, so that the transmitters carrying tasks to be transmitted with the first priority level (high-priority tasks) can be allocated more bandwidth, the total concurrency threshold of each transmitter can be dynamically adjusted in real time. This embodiment provides a bandwidth allocation strategy among transmitters in the same machine room that dynamically adjusts the total concurrency threshold of each transmitter in real time, increases the total concurrency threshold of transmitters carrying tasks to be transmitted with the first priority level (high-priority tasks), and reduces the total concurrency threshold for tasks without the first priority level (common tasks). This further ensures that the transmitter carrying a task to be transmitted with the first priority level (a high-priority task) obtains more bandwidth resources, that is, the high-priority task obtains more bandwidth resources and is transmitted quickly.
In this embodiment, each transmitter only knows its own information and state, not the information and states of other transmitters, so the server is required to collect the information of all transmitters and uniformly perform the calculation and allocation of bandwidth. In this embodiment, all transmitters periodically report their information and parameters to the server; the server calculates the limited network speed of each transmitter; then each transmitter determines its own total concurrency threshold according to the limited network speed.
When adjusting the total threshold of the concurrency of the current conveyor in real time each time, the current conveyor firstly acquires the attribute information of each conveyor in a machine room to which the current conveyor belongs; the attribute information comprises a state identifier and the current actual network speed of the transmitter.
The state identification is used for representing that the transmitter is in any one of an idle state, a high-priority state or a common state. If the transmitter is in an idle state, the transmitter is not transmitting the task; if the transmitter is in a high-priority state, the transmitter is transmitting the task, and the priority of the task transmitted by the transmitter is higher than a preset priority threshold; and if the transmitter is in a normal state, the transmitter is transmitting the tasks, and the priority of the tasks transmitted by the transmitter is lower than a preset priority threshold.
502. Each transmitter receives the limited network speed of the current transmitter sent by the server, wherein the server stores the attribute information of each transmitter in the machine room to which the current transmitter belongs, and the attribute information comprises the state identification of the transmitter; the network speed limit is determined by the server according to the state identification of each transmitter.
Illustratively, the server may receive attribute information sent by each transmitter, where the attribute information includes status identification of the transmitter and current actual wire speed of the transmitter. The server may store attribute information for each conveyor in each room.
Each transmitter in the same machine room has its own state, that is, each transmitter has its own state identifier; the server determines the limited network speed of each transmitter in the same machine room according to the state identifier of each transmitter in that machine room. In this way, different limited network speeds are configured for transmitters in different states. The limited network speed characterizes the maximum network speed that the transmitter is allowed to reach.
For example, the network speed limit of a transmitter in the idle state is smaller than that of a transmitter in the normal state, and the network speed limit of a transmitter in the normal state is smaller than that of a transmitter in the high-priority state.
Then, the server sends the network speed limit of each transmitter to that transmitter. Accordingly, the current transmitter may receive the network speed limit of the current transmitter sent by the server.
503. And each transmitter determines the total concurrency threshold of the current transmitter according to the current actual network speed and the limited network speed of the current transmitter, and the total concurrency threshold is used for data transmission of the transmitter.
In one example, the total threshold of concurrency is an upper bound of the actual sum of concurrency, which is the sum of the actual concurrency of each task in the transmitter.
Illustratively, since each transmitter is operating, each transmitter has an actual network speed, so the current transmitter may read its current actual network speed. The limited network speed of the current transmitter characterizes the maximum network speed the transmitter is allowed to reach, while the current actual network speed characterizes the network speed the transmitter actually has at this moment; the total concurrency threshold of the current transmitter can therefore be calculated from the current actual network speed and the limited network speed of the current transmitter. In one example, some intermediate value between the current actual network speed and the limited network speed may be taken as the basis for the total concurrency threshold of the current transmitter.
Then, since the total threshold of the concurrency degree of the current transmitter indicates the sum of all the available concurrency degree resources of the current transmitter, the current transmitter may allocate the concurrency degree (i.e., the concurrency degree resources) to the task to be transmitted in the current transmitter according to the total threshold of the concurrency degree of the current transmitter. In one example, the current transmitter may allocate resources (i.e., concurrency resources) for each task to be transmitted according to the priority level of each task to be transmitted.
Then, the current transmitter transmits each task to be transmitted respectively according to the resources allocated to each task to be transmitted.
Through the processes of steps 501 to 503, different network speed limits are configured for transmitters in different states, so that different total concurrency thresholds are configured for transmitters in different states: the total concurrency threshold for tasks with the first priority level (high-priority tasks) is increased, and the total concurrency threshold for tasks without the first priority level (common tasks) is reduced; the total concurrency threshold of each transmitter can thus be dynamically adjusted in real time. This further ensures that the transmitter carrying a task to be transmitted with the first priority level (a high-priority task) obtains more bandwidth resources, that is, the high-priority task obtains more bandwidth resources and is transmitted quickly.
In one example, after step 503 is performed, the processes of the embodiments of fig. 2 or fig. 3 may also be performed.
In this embodiment, each transmitter sends its state identifier and current actual network speed to the server, and the server determines the network speed limit of each transmitter according to the state identifier in the attribute information of each transmitter in the same machine room; the server then sends the limited network speed of each transmitter to that transmitter, and each transmitter allocates its total concurrency threshold according to its current actual network speed and limited network speed. The server thus uniformly collects the states of all transmitters in the same machine room and allocates the limited network speed of each transmitter so that each transmitter can determine its own concurrency resources; the concurrency resources of each transmitter are determined with the task states and task conditions of all transmitters in the same machine room taken into consideration, so that appropriate concurrency resources are allocated to each transmitter. In addition, the scheme provided by this embodiment ensures that concurrency resources are allocated first to the tasks to be transmitted with the first priority level (high-priority tasks), further ensuring that high-priority tasks occupy more bandwidth resources and are transmitted preferentially and quickly. Meanwhile, the total concurrency threshold of each transmitter can be dynamically adjusted in real time, increasing the total concurrency threshold for tasks with the first priority level (high-priority tasks) and reducing the total concurrency threshold for tasks without the first priority level (common tasks); this further ensures that the transmitter carrying a task to be transmitted with the first priority level obtains more bandwidth resources, that is, the high-priority task obtains more bandwidth resources and is transmitted quickly. Bandwidth is therefore allocated for each transmitter in the same machine room, all tasks in the same machine room are guaranteed to maintain high-speed transmission, bandwidth resources are fully utilized, no bandwidth is left idle or wasted, and the transmission throughput within the same machine room is maximized.
Fig. 20 is a schematic diagram according to a tenth embodiment of the present application, and as shown in fig. 20, the method for processing a conveyor based on a machine room system provided in this embodiment includes:
601. each transmitter sends a state identifier of the current transmitter to a server, wherein the state identifier is used for representing that the transmitter is in an idle state, a high-priority state or a common state; the transmission machine in the idle state is not transmitting the task, the transmission machine in the high-priority state is transmitting the task with the priority higher than the preset priority threshold, and the transmission machine in the common state is transmitting the task with the priority lower than the preset priority threshold.
Illustratively, the execution subject of this embodiment may be a transmitter, or a data transmission device or apparatus based on the transmitter, or other devices or apparatuses that can execute the method of this embodiment. The present embodiment is described with the execution main body as a transmitter.
This step can be referred to as step 501 shown in fig. 19, and is not described again.
602. Each transmitter receives the limited network speed of the current transmitter sent by the server, wherein the server stores the attribute information of each transmitter in the machine room to which the current transmitter belongs, and the attribute information comprises the state identification of the transmitter; the network speed limit is determined by the server according to the state identification of each transmitter. The attribute information also includes the network card bandwidth of the transmitter.
For example, this step may refer to step 502 shown in fig. 19; unlike step 502, the attribute information of the transmitter further includes the network card bandwidth of the transmitter, where the network card bandwidth is a fixed value and represents the bandwidth that the network card of the transmitter can support.
In one example, the limited network speed in step 602 may be determined in the following two ways, referred to as "the first implementation of the limited network speed in step 602" and "the second implementation of the limited network speed in step 602":
in the first implementation manner of limiting the network speed in step 602, when the server determines that the current transmitter meets the free contention requirement according to the state identifier of each transmitter, the network speed limit of the current transmitter is determined by the server according to the free contention manner.
In one example, the first implementation manner specifically includes: when the server determines that the current transmitter meets the free competition requirement according to the state identifier of each transmitter, the current transmitter meets a first preset condition, and the network card bandwidth of the current transmitter is the limited network speed of the current transmitter; the first preset condition is that the current actual total bandwidth of the machine room to which the current transmitter belongs is smaller than a preset bandwidth threshold, or the state identifier of the current transmitter is in a high-priority state, or the number of the transmitters in the high-priority state in the machine room to which the current transmitter belongs is zero; the bandwidth threshold is the product of the total bandwidth of the outlet of the machine room to which the current conveyor belongs and a preset proportional value.
For example, when determining the network speed limit of the current transmitter, the server analyzes the state identifier of each transmitter in the machine room to which the current transmitter belongs, or analyzes the state identifier of the current transmitter, to determine whether the current transmitter meets the free competition requirement. The "free competition requirement" is met, for example, when all transmitters in the machine room to which the current transmitter belongs are in the high-priority state, or all transmitters in that machine room are in the normal state, or the current transmitter is in the high-priority state.
And when the server determines that the current transmitter meets the free competition requirement, the server informs the current transmitter to compete in a free competition mode to determine the network speed limit. The "competing to determine the network speed limit by adopting a free competition mode" means that the transmitter is not limited and freely competes for the total bandwidth of the outlet of the machine room. Therefore, when all the transmitters in the machine room to which the current transmitter belongs are in the high-priority state, or when the current transmitter is in the high-priority state, the current transmitter preferentially competes for the total bandwidth of the outlet of the machine room, that is, the high-priority transmitter preferentially competes for the total bandwidth of the outlet of the machine room, and sufficient bandwidth can be guaranteed to be allocated to the high-priority transmitter. And when all the transmitters in the machine room to which the current transmitter belongs are in the common state, the common transmitter can also compete for the total bandwidth of the outlet of the machine room, so that the bandwidth of each common transmitter can be allocated.
In one example, when the server determines that the current transmitter meets the free contention requirement according to the state identifier of each transmitter, the server may analyze the state identifier or the bandwidth of the transmitter to determine that the current transmitter meets the free contention requirement. If the server determines that the current transmitter meets the first preset condition, the server informs the current transmitter to compete to determine the network speed limit in a free competition mode; at this time, the server may directly use the network card bandwidth of the current transmitter as the network speed limit of the current transmitter.
For example, for the i-th transmitter in the machine room, where i is a positive integer greater than or equal to 1, limit_speed(i) = bandwith_machine(i), where bandwith_machine(i) is the network card bandwidth of the i-th transmitter and is a fixed value.
The first preset condition includes the following implementation manners.
The first implementation mode: if the server determines that the current transmitter is a transmitter in the high-priority state, that is, the state identifier of the current transmitter is the high-priority state, the server determines that the current transmitter meets the first preset condition. At this point, the server notifies the current transmitter to compete for the network speed limit through free competition, and the server directly uses the network card bandwidth of the current transmitter as its network speed limit. This further ensures that sufficient bandwidth is allocated to the high-priority transmitter first.
The second implementation mode comprises the following steps: if the server determines that the number of the high-priority transmitters in the machine room to which the current transmitter belongs is zero, that is, the transmitter in the machine room to which the current transmitter belongs is in a normal state or an idle state (no high-priority transmitter exists in the machine room), the server determines that the current transmitter meets a first preset condition. At the moment, the server informs the current transmitter to compete to determine the network speed limit in a free competition mode; the server directly uses the network card bandwidth of the current transmitter as the network speed limit of the current transmitter.
The third implementation mode comprises the following steps: if the server determines that the current actual total bandwidth of the machine room to which the current transmitter belongs is smaller than a preset bandwidth threshold, wherein the bandwidth threshold is related to the total bandwidth of an outlet of the machine room, the server determines that the current transmitter meets a first preset condition. At the moment, the server informs the current transmitter to compete to determine the network speed limit in a free competition mode; the server directly uses the network card bandwidth of the current transmitter as the network speed limit of the current transmitter. In one example, the preset bandwidth threshold is a product of an outlet total bandwidth of a machine room to which the current conveyor belongs and a preset proportional value; the preset proportional value is, for example, 0.5.
The fourth implementation mode: if the server determines that the transmitters in the machine room to which the current transmitter belongs are all high-priority transmitters, the server determines that the current transmitter meets the first preset condition. At this point, the server notifies the current transmitter to compete for the network speed limit through free competition; the server directly uses the network card bandwidth of the current transmitter as its network speed limit. The server does not limit the bandwidth of any high-priority transmitter, so each high-priority transmitter can freely compete for bandwidth.
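A minimal sketch of the first preset condition check described above is given below, assuming hypothetical dictionaries that hold each transmitter's state identifier and current actual network speed; the 0.5 proportional value is the example from the text, and computing the machine room's current actual total bandwidth as the sum of the transmitters' actual network speeds is an assumption.

```python
def meets_first_preset_condition(current, machines, bandwith_idc, ratio=0.5):
    """Return True when the current transmitter may freely compete for bandwidth."""
    # Current actual total bandwidth of the machine room (assumed: sum of actual speeds).
    actual_total_bandwidth = sum(m["real_speed"] for m in machines)
    no_high_priority_present = all(m["flag"] != "high_priority" for m in machines)
    return (actual_total_bandwidth < bandwith_idc * ratio      # room is not yet congested
            or current["flag"] == "high_priority"              # current transmitter is high-priority
            or no_high_priority_present)                       # no high-priority transmitters exist

# When the condition holds, limit_speed(i) is simply bandwith_machine(i).
```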
In the second implementation manner of limiting the network speed in step 602, when the server determines that the current transmitter does not meet the free contention requirement according to the state identifier of each transmitter, the network speed limit of the current transmitter is determined by the server according to the network card bandwidth of the current transmitter.
In one example, the second implementation manner specifically includes: when the server determines that the current transmitter does not meet the free competition requirement according to the state identifier of each transmitter, the current transmitter meets a second preset condition, and the limited network speed of the current transmitter is related to the network card bandwidth of the current transmitter, the total outlet bandwidth of the machine room to which the current transmitter belongs, the current actual network speed of the high-priority transmitter in the machine room to which the current transmitter belongs, and the total number of the common-state transmitters in the machine room to which the current transmitter belongs. The second preset condition is that the current actual total bandwidth of the machine room to which the current transmitter belongs is greater than or equal to a preset bandwidth threshold, or the state identifier of the current transmitter is not in a high-priority state, or the number of the transmitters in the high-priority state in the machine room to which the current transmitter belongs is not zero; the bandwidth threshold is the product of the total bandwidth of the outlet of the machine room to which the current conveyor belongs and a preset proportional value.
Illustratively, if the current transmitter does not meet the free competition requirement, the server informs the current transmitter of not competing in a free competition manner to determine the limited network speed, and the server determines the limited network speed of the current transmitter according to the network card bandwidth of the current transmitter. The limited network speed of the current transmitter can be limited (i.e. reduced) through the network card bandwidth of the current transmitter. Because the network speed of the conveyor is limited, the bandwidth of the common conveyor can be limited, and the high-priority conveyor is preferentially ensured to occupy the outlet bandwidth of the machine room.
In one example, when the server determines, according to the state identifier of each transmitter, that the current transmitter does not meet the free competition requirement, the server may analyze the state identifiers or the bandwidths of the transmitters to make this determination. If the server determines that the current transmitter meets the second preset condition, the server notifies the current transmitter that it cannot compete for the network speed limit through free competition, and the server limits the network speed limit of the current transmitter according to the network card bandwidth of the current transmitter.
The server can obtain the current actual network speed of the high-priority transmitters in the machine room to which the current transmitter belongs and the total number of normal-state transmitters in that machine room; the server can then calculate the network speed limit of the current transmitter according to the network card bandwidth of the current transmitter, the total outlet bandwidth of the machine room to which the current transmitter belongs, the current actual network speed of the high-priority transmitters in that machine room, and the total number of normal-state transmitters in that machine room.
In one example, the server sets a total machine room exit bandwidth bandwith_idc (in MB/s) and maintains three machine information lists: a machine state flag list {flag(i) = "idle" | i in machine_all}, a machine network card bandwidth list {bandwith_machine(i) = 0 | i in machine_all}, and a machine actual network speed list {real_speed(i) = 0 | i in machine_all}. Here machine_all denotes all transmitters in the machine room, and i in machine_all denotes the i-th transmitter; flag(i) characterizes the state identifier of the i-th transmitter; bandwith_machine(i) characterizes the network card bandwidth of the i-th transmitter; real_speed(i) characterizes the current actual network speed of the i-th transmitter. The machine actual network speed list is refreshed in real time.
Any transmitter i obtains its actual concurrency sum real_concurrency_num(i), state flag flag(i), network card bandwidth bandwith_machine(i), and current actual network speed real_speed(i), and reports this information to the server.
The server saves and updates the information reported by each transmitter and allocates a limited network speed limit_speed(i) to the i-th transmitter; the server then returns the limited network speed limit_speed(i) to the i-th transmitter.
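The server-side bookkeeping just described might look like the following sketch; the dictionary-based state and the update/return flow are assumptions that simply mirror the lists and identifiers named above, and compute_limit_speed is a placeholder for the free-competition and limitation rules of this embodiment.

```python
# Hypothetical server-side state for one machine room, mirroring the lists above.
bandwith_idc = 10_000                        # total machine-room exit bandwidth, MB/s
machine_all = ["m1", "m2", "m3"]             # all transmitters in the machine room
flag = {i: "idle" for i in machine_all}      # state identifier per transmitter
bandwith_machine = {i: 0 for i in machine_all}       # network card bandwidth per transmitter
real_speed = {i: 0 for i in machine_all}             # current actual network speed, refreshed live
real_concurrency_num = {i: 0 for i in machine_all}   # actual concurrency sum per transmitter

def handle_report(i, report):
    """Save/update the information reported by transmitter i and return its limit_speed(i)."""
    flag[i] = report["flag"]
    bandwith_machine[i] = report["bandwith_machine"]
    real_speed[i] = report["real_speed"]
    real_concurrency_num[i] = report["real_concurrency_num"]
    return compute_limit_speed(i)

def compute_limit_speed(i):
    # Placeholder: the free-competition / limitation rules described in this embodiment apply here.
    return bandwith_machine[i]
```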
The second preset condition includes the following implementation manners.
The first implementation mode: if the server determines that the current transmitter is not a transmitter in the high-priority state, that is, the state identifier of the current transmitter is not the high-priority state, the server determines that the current transmitter meets the second preset condition. At this point, the server needs to limit the network speed of the current transmitter, thereby limiting the bandwidth resources allocated to it and further ensuring that sufficient bandwidth is allocated to the high-priority transmitters.
The second implementation mode comprises the following steps: if the server determines that the number of the transmitters in the high-priority state in the machine room to which the current transmitter belongs is not zero, namely the machine room to which the current transmitter belongs has other transmitters in the high-priority state, the server determines that the current transmitter meets a second preset condition. At this time, the server needs to limit the network speed of the current transmitter, so as to limit the bandwidth resource allocated to the current transmitter. And further ensure that other high-priority transmitters are allocated with sufficient bandwidth.
The third implementation mode comprises the following steps: and if the server determines that the current actual total bandwidth of the machine room to which the current transmitter belongs is greater than or equal to a preset bandwidth threshold, wherein the bandwidth threshold is related to the total bandwidth of the outlet of the machine room, the server determines that the current transmitter meets a second preset condition. At this time, the server needs to limit the network speed of the current transmitter. In one example, the preset bandwidth threshold is a product of an outlet total bandwidth of a machine room to which the current conveyor belongs and a preset proportional value; the preset proportional value is, for example, 0.5.
In one example, the network speed limit of the current transmitter is related to the first bandwidth parameter, the second bandwidth parameter and the third bandwidth parameter.
The first bandwidth parameter is determined by the server according to the current actual network speed and the network card bandwidth of each high-priority transmitter in the machine room to which the current transmitter belongs; the first bandwidth parameter is the sum of the minimum bandwidths reserved for all high-priority conveyors in the machine room to which the current conveyor belongs.
Here real_speed(j) is the current actual network speed of the j-th high-priority transmitter, real_speed(j) × a is the network speed ratio of the j-th high-priority transmitter, a is a fifth preset value that is a positive number greater than 1, bandwith_machine(j) is the network card bandwidth of the j-th high-priority transmitter, and j is a positive integer greater than or equal to 1.
The second bandwidth parameter is determined by the server according to the first bandwidth parameter and the total outlet bandwidth of the machine room to which the current transmitter belongs; the second bandwidth parameter is the sum of the maximum bandwidths reserved for all normal-state transmitters in that machine room. In one example, the second bandwidth parameter is B = bandwith_idc − A, where bandwith_idc is the total outlet bandwidth of the machine room to which the current transmitter belongs and A is the first bandwidth parameter.
The third bandwidth parameter is determined by the server according to the second bandwidth parameter and the total number of normal-state transmitters in the machine room to which the current transmitter belongs; the third bandwidth parameter is the maximum bandwidth reserved for each normal-state transmitter in that machine room. In one example, the third bandwidth parameter is C = B/(|machine_ordinary| + b), where B is the second bandwidth parameter, |machine_ordinary| is the total number of normal-state transmitters in the machine room to which the current transmitter belongs, |machine_ordinary| + b is the sixth preset value, and b is the seventh preset value, a positive number greater than 0.
The network speed limit of the current transmitter is determined by the server according to a fourth preset value, the network card bandwidth of the current transmitter, and the third bandwidth parameter; the network speed limit of the current transmitter is greater than or equal to the fourth preset value and less than or equal to the network card bandwidth of the current transmitter, and the fourth preset value is a positive number greater than zero. In one example, the limited network speed of the current transmitter is max(d, min(bandwith_machine, C)), where bandwith_machine is the network card bandwidth of the current transmitter, C is the third bandwidth parameter, and d is the fourth preset value.
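Putting the formulas above together, a sketch of the limited-network-speed calculation for a transmitter that meets the second preset condition could look like the following; the function name and argument structure are assumptions, while the example values a = 1.5, b = 0.1, and d = 1 follow the text.

```python
def limited_speed_for_ordinary(bandwith_machine_i, bandwith_idc,
                               high_priority_machines, num_ordinary,
                               a=1.5, b=0.1, d=1):
    """Sketch of limit_speed for a transmitter meeting the second preset condition."""
    # A: sum of the minimum bandwidths reserved for all high-priority transmitters.
    A = sum(min(m["bandwith_machine"], m["real_speed"] * a)
            for m in high_priority_machines)
    # B: sum of the maximum bandwidths left for all normal-state transmitters.
    B = bandwith_idc - A
    # C: maximum bandwidth reserved for each normal-state transmitter.
    C = B / (num_ordinary + b)
    # Clamp: no more than the transmitter's own NIC bandwidth, no less than d.
    return max(d, min(bandwith_machine_i, C))

# Example: one high-priority transmitter running at 400 MB/s in a 1000 MB/s machine room,
# with 2 normal-state transmitters sharing what remains.
hp = [{"bandwith_machine": 1000, "real_speed": 400}]
print(limited_speed_for_ordinary(1000, 1000, hp, num_ordinary=2))  # about 190.5
```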
For example, if the server determines that the current transmitter meets the second preset condition, the server needs to calculate the limited bandwidth of the current transmitter. The server can obtain the information of each high-priority transmitter in the machine room to which the current transmitter belongs and then read the current actual network speed and network card bandwidth of each high-priority transmitter; the current actual network speed of each high-priority transmitter may be the actual average network speed (in MB/s) of that transmitter over the most recent 30 s, and the network card bandwidth of each high-priority transmitter is a fixed value.
The server calculates a first bandwidth parameter according to the current actual network speed and the network card bandwidth of each high-priority transmitter, wherein the first bandwidth parameter is the sum of minimum bandwidths reserved for all high-priority transmitters in a machine room to which the current transmitter belongs.
In one example, when the server calculates the first bandwidth parameter, for the j-th high-priority transmitter in the machine room to which the current transmitter belongs, the server multiplies the current actual network speed of the j-th high-priority transmitter by a fifth preset value a, where a is a positive number greater than 1, to obtain the network speed ratio of the j-th high-priority transmitter. For example, the network speed ratio of the j-th high-priority transmitter is real_speed(j) × 1.5, where real_speed(j) is the current actual network speed of the j-th high-priority transmitter and 1.5 is the fifth preset value.
Then, for the j-th high-priority transmitter in the machine room to which the current transmitter belongs, the server takes the minimum value between the network speed ratio of that transmitter and its network card bandwidth bandwith_machine(j), obtaining the limited bandwidth min(bandwith_machine(j), real_speed(j) × a) of the j-th high-priority transmitter. In other words, for each high-priority transmitter, the server takes the minimum of 1.5 times its actual network speed and its network card bandwidth.
For example, the limited bandwidth of the j-th high-priority transmitter is min(bandwith_machine(j), real_speed(j) × 1.5), where bandwith_machine(j) is the network card bandwidth of the j-th high-priority transmitter and real_speed(j) × 1.5 is its network speed ratio.
Then, the server sums the limited bandwidths of the high-priority-state transmitters to obtain the first bandwidth parameter.
The server calculates the sum of the minimum bandwidths that can be reserved for all high-priority transmitters, which makes it convenient to determine the subsequent bandwidth limits and reserves enough bandwidth for each high-priority transmitter; the "sum of minimum bandwidths" does not include bandwidth that the high-priority transmitters obtain through competition, that is, the first bandwidth parameter characterizes the sum of the minimum bandwidths that can be reserved for all high-priority transmitters. For example, the first bandwidth parameter is sigma(j in machine_priority, min(bandwith_machine(j), real_speed(j) × 1.5)), where sigma denotes summation, machine_priority denotes the set of high-priority transmitters in the same machine room, and j in machine_priority denotes the j-th high-priority transmitter among all high-priority transmitters in the same machine room.
The server thus obtains the first bandwidth parameter A, which characterizes the sum of the minimum bandwidths that can be reserved for all high-priority transmitters. The server can also obtain the total outlet bandwidth bandwith_idc of the machine room to which the current transmitter belongs, where the total outlet bandwidth is a fixed value. The server subtracts the first bandwidth parameter A from the total outlet bandwidth bandwith_idc to obtain the sum of the maximum bandwidths reserved for all normal-state transmitters in the machine room to which the current transmitter belongs, that is, the second bandwidth parameter B = bandwith_idc − A. The second bandwidth parameter is the bandwidth cap for all normal-state transmitters. Subtracting the first bandwidth parameter from the total outlet bandwidth removes the bandwidth of all high-priority transmitters, thereby reserving enough bandwidth for them and guaranteeing their bandwidth resources.
For example, the second bandwidth parameter is bandwith_idc − sigma(j in machine_priority, min(bandwith_machine(j), real_speed(j) × 1.5)), where bandwith_idc is the total outlet bandwidth of the machine room to which the current transmitter belongs and sigma(j in machine_priority, min(bandwith_machine(j), real_speed(j) × 1.5)) is the first bandwidth parameter.
The server obtains the second bandwidth parameter B, the bandwidth cap for all normal-state transmitters, and calculates the third bandwidth parameter C according to B and the total number |machine_ordinary| of normal-state transmitters in the machine room to which the current transmitter belongs. For example, the second bandwidth parameter is divided by "the total number of normal-state transmitters in the machine room to which the current transmitter belongs" to obtain the third bandwidth parameter.
The third bandwidth parameter C represents the maximum bandwidth reserved for each normal-state transmitter in the machine room to which the current transmitter belongs; bandwidth is thereby reserved for the normal-state transmitters, ensuring that they are allocated bandwidth resources.
In one example, when calculating the third bandwidth parameter, the server sums the total number |machine_ordinary| of normal-state transmitters in the machine room to which the current transmitter belongs and a seventh preset value b, where b is a positive number greater than 0, to obtain the sixth preset value |machine_ordinary| + b. Then, the server divides the second bandwidth parameter B by the sixth preset value to obtain the third bandwidth parameter C = B/(|machine_ordinary| + b).
For example, the sixth preset value is (0.1 + |machine_ordinary|), where |machine_ordinary| is the total number of normal-state transmitters in the machine room to which the current transmitter belongs and 0.1 is the seventh preset value. The third bandwidth parameter is then (bandwith_idc − sigma(j in machine_priority, min(bandwith_machine(j), real_speed(j) × 1.5)))/(0.1 + |machine_ordinary|), which is the bandwidth upper limit of each normal-state transmitter; here (bandwith_idc − sigma(j in machine_priority, min(bandwith_machine(j), real_speed(j) × 1.5))) is the second bandwidth parameter.
The server obtains the third bandwidth parameter C, which represents the maximum bandwidth reserved for each normal-state transmitter in the machine room to which the current transmitter belongs. The server can also directly obtain the network card bandwidth of the current transmitter, which is a fixed value.
Then, the server limits the network speed of the current transmitter according to the fourth preset value d, the network card bandwidth bandwith_machine of the current transmitter, and the third bandwidth parameter C, thereby obtaining the network speed limit of the current transmitter. The fourth preset value is a positive number greater than zero; the obtained network speed limit is greater than or equal to the fourth preset value and less than or equal to the network card bandwidth of the current transmitter.
In one example, the server takes the minimum value between the network card bandwidth bandwith_machine of the current transmitter and the third bandwidth parameter C as the fourth bandwidth parameter min(bandwith_machine, C); taking this minimum ensures that the network speed limit of the current transmitter is not too large. Then, the server takes the maximum value between the fourth preset value d and the fourth bandwidth parameter as the limited network speed of the current transmitter, max(d, min(bandwith_machine, C)); taking this maximum ensures that the network speed limit of the current transmitter is not too small.
For example, the limited network speed is max(1, min(bandwith_machine(i), third bandwidth parameter)), where bandwith_machine(i) is the network card bandwidth of the i-th transmitter (i.e., the current transmitter) and 1 is the fourth preset value. min(bandwith_machine(i), third bandwidth parameter) ensures that the network speed limit of the current transmitter is not too large, and max ensures that it is not too small; even if min(bandwith_machine(i), third bandwidth parameter) is a very small value or a negative number, the network speed limit of the current transmitter is still guaranteed to be greater than or equal to 1 (i.e., the fourth preset value).
Through the above process, the limited network speed of the current transmitter can be determined; the limited network speed is used to generate the total concurrency threshold of the current transmitter. By limiting the network speed with several parameters, the limited network speed is guaranteed to be neither too large nor too small.
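The calculation above can be summarized in a short sketch. The following Python snippet is only a minimal illustration of the described formulas; the function and variable names (for example bandwidth_b and num_ordinary) are chosen here for readability and are not taken from the patent text.

```python
def limited_network_speed(bandwidth_b, num_ordinary, nic_bandwidth,
                          b=0.1, d=1.0):
    """Sketch of the bandwidth limiting described above.

    bandwidth_b   -- second bandwidth parameter B (bandwidth left over for
                     normal-state transmitters in the machine room)
    num_ordinary  -- |machine_ordering|, number of normal-state transmitters
    nic_bandwidth -- network card bandwidth of the current transmitter
    b             -- seventh preset value (a positive number greater than 0)
    d             -- fourth preset value (a positive number greater than 0)
    """
    # Third bandwidth parameter: C = B / (|machine_ordering| + b)
    c = bandwidth_b / (num_ordinary + b)
    # Limited network speed: not larger than the NIC bandwidth, not smaller than d
    return max(d, min(nic_bandwidth, c))
```

For instance, with B = 800 Mbps left over for ten normal-state transmitters and a 1000 Mbps network card, the sketch returns max(1.0, min(1000, 800/10.1)) ≈ 79.2 Mbps.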
603. Each transmitter determines a first concurrency threshold value of the current transmitter according to the current actual network speed and the limited network speed of the current transmitter, wherein the first concurrency threshold value represents the available concurrency of the current transmitter under the limited network speed.
In one example, step 603 specifically includes:
in the first step of step 603, each of the conveyors determines a network speed limit ratio of the current conveyor according to the limited network speed of the current conveyor and the current actual network speed of the current conveyor, where the network speed limit ratio is a ratio between the limited network speed and the current actual network speed.
And a second step of step 603, determining, by each transmitter, a first concurrency threshold of the current transmitter according to the network speed limit proportion of the current transmitter and the actual concurrency sum of the current transmitter, where the actual concurrency sum is the actual concurrency sum of each task in the transmission state in the current transmitter.
Illustratively, since each transmitter is operating, each transmitter has an actual network speed, and the current transmitter may read its current actual network speed. The limited network speed of the current transmitter represents the maximum network speed the transmitter may reach, while the current actual network speed represents the network speed the transmitter actually has at this moment; the total concurrency threshold of the current transmitter can therefore be calculated from the current actual network speed and the limited network speed.
The current transmitter can limit the available concurrency of the current transmitter according to the current actual network speed and the limited network speed of the current transmitter, and calculate a first concurrency threshold of the current transmitter; that is, the first concurrency threshold characterizes the available concurrency for the current transmitter at the limited network speed.
In one example, the current transmitter divides its limited network speed by its current actual network speed to obtain the network speed limit ratio of the current transmitter. Then, the current transmitter multiplies the network speed limit ratio by the actual concurrency sum of the current transmitter to obtain the first concurrency threshold of the current transmitter; the first concurrency threshold is an integer. The available concurrency of the current transmitter is thus obtained and is then used to determine the total concurrency threshold of the current transmitter.
For example, the limited network speed limit_speed(i) of the ith transmitter is divided by the current actual network speed real_speed(i) of the ith transmitter and multiplied by the actual concurrency sum real_concurrent_num(i) of the ith transmitter; the result is rounded down to obtain the first concurrency threshold of the ith transmitter, int(limit_speed(i)/real_speed(i) × real_concurrent_num(i)), where int is the round-down function.
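As a minimal sketch of this step, assuming the limited network speed, the current actual network speed, and the actual concurrency sum are already known; the guard against a zero actual network speed is an assumption added for the sketch and is not part of the description.

```python
import math

def first_concurrency_threshold(limit_speed, real_speed, real_concurrent_num):
    """int(limit_speed / real_speed * real_concurrent_num), rounded down."""
    if real_speed <= 0:
        # Assumed guard: a transmitter with no measurable speed gets no quota here.
        return 0
    ratio = limit_speed / real_speed            # network speed limit ratio
    return math.floor(ratio * real_concurrent_num)
```

For example, a transmitter limited to 200 Mbps that currently runs at 400 Mbps with 30 concurrent transmissions would obtain a first concurrency threshold of 15.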
604. Each transmitter determines the total concurrency threshold of the current transmitter according to the first concurrency threshold of the current transmitter and a preset initial total concurrency threshold, wherein the total concurrency threshold of the current transmitter is smaller than or equal to the initial total concurrency threshold. The total concurrency threshold is an upper limit of the total actual concurrency, and the total actual concurrency is the sum of the actual concurrency of each task.
In one example, step 604 specifically includes:
in the first step of step 604, each transmitter determines the minimum value between the first concurrency threshold and the initial total concurrency threshold as the second concurrency threshold.
In the second step of step 604, each transmitter determines a maximum value between an eighth preset value and a second concurrency threshold, where the maximum value is a total concurrency threshold of the current transmitter, and the eighth preset value is a positive number greater than or equal to zero.
Illustratively, after the first concurrency threshold of the current transmitter is obtained, an initial total concurrency threshold is preset, and this initial total concurrency threshold is a relatively large concurrency; the current transmitter may use the initial total concurrency threshold together with the first concurrency threshold to constrain the total concurrency threshold of the current transmitter and thereby obtain the final total concurrency threshold. In this constraining process, the finally obtained total concurrency threshold needs to be controlled to be less than or equal to the initial total concurrency threshold.
In one example, the current transmitter takes the minimum value between its first concurrency threshold and the initial total concurrency threshold as the second concurrency threshold of the current transmitter, which controls the total concurrency threshold of the current transmitter so that it is not too large. Then, the current transmitter takes the maximum value between an eighth preset value and the second concurrency threshold as the total concurrency threshold of the current transmitter, which further controls the total concurrency threshold so that it is not too small. The eighth preset value is a positive number greater than or equal to zero.
For example, the second concurrency threshold of the ith transmitter is min(init_total_threshold(i), int(limit_speed(i)/real_speed(i) × real_concurrent_num(i))), where int(limit_speed(i)/real_speed(i) × real_concurrent_num(i)) is the first concurrency threshold of the ith transmitter and init_total_threshold(i) is the initial total concurrency threshold. The total concurrency threshold of the ith transmitter is total_threshold(i) = max(0, min(init_total_threshold(i), int(limit_speed(i)/real_speed(i) × real_concurrent_num(i)))), where 0 is the eighth preset value.
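Putting steps 603 and 604 together, a minimal sketch of the total-concurrency-threshold calculation could look like the following; the function and argument names are chosen here for illustration and assume the transmitter is running (a positive actual network speed).

```python
import math

def total_concurrency_threshold(limit_speed, real_speed, real_concurrent_num,
                                init_total_threshold, eighth_preset=0):
    """max(eighth preset value, min(initial total threshold, first concurrency threshold))."""
    # First concurrency threshold (step 603): floor of speed ratio x actual concurrency sum
    first = math.floor(limit_speed / real_speed * real_concurrent_num)
    # Second concurrency threshold: capped by the initial total concurrency threshold
    second = min(init_total_threshold, first)
    # Total concurrency threshold: floored by the eighth preset value (0 in the example)
    return max(eighth_preset, second)
```

With the earlier example values (first concurrency threshold 15) and an initial total threshold of 50, the total concurrency threshold would be max(0, min(50, 15)) = 15.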
Then, since the total threshold of the concurrency degree of the current transmitter indicates the sum of all the available concurrency degree resources of the current transmitter, the current transmitter may allocate the concurrency degree (i.e., the concurrency degree resources) to the task to be transmitted in the current transmitter according to the total threshold of the concurrency degree of the current transmitter. In one example, the current transmitter may allocate resources (i.e., concurrency resources) for each task to be transmitted according to the priority level of each task to be transmitted.
Then, the current transmitter transmits each task to be transmitted respectively according to the resources allocated to each task to be transmitted.
Through the process of steps 603-604, the total concurrency threshold of each transmitter can be dynamically adjusted in real time. This ensures that a transmitter carrying tasks to be transmitted at the first priority level (high-priority tasks) obtains more bandwidth resources, that is, the high-priority tasks obtain more bandwidth resources and are transmitted quickly.
Through steps 603 and 604, if a high-priority-state transmitter and normal-state transmitters are in the same machine room and the total bandwidth of the machine room is insufficient, the bandwidths of the normal-state and idle-state transmitters need to be limited (that is, the bandwidth of the non-high-priority-state transmitters is limited), and bandwidth is preferentially allocated to the high-priority-state transmitter, ensuring that it can occupy the machine room outlet bandwidth. Through this combination of free competition and bandwidth limitation, sufficient bandwidth can be allocated to the high-priority-state transmitter, the remaining bandwidth can be allocated to the normal-state transmitters, and the overall transmission speed of all transmitters in the machine room is reduced as little as possible.
In one example, after step 604 is performed, the processes of the embodiments of fig. 2 or fig. 3 may also be performed.
In this embodiment, on the basis of the above embodiment, the limited network speed of the current transmitter is determined by analyzing the state identifiers of all transmitters in the machine room to which the current transmitter belongs. Transmitters meeting the free competition requirement determine their limited network speed through free competition, and transmitters not meeting the free competition requirement obtain their limited network speed through restriction; in the subsequent process, the high-priority-state transmitter is therefore guaranteed to compete preferentially for the total machine room outlet bandwidth, so that sufficient bandwidth can be allocated to it. Then, the total concurrency threshold of the current transmitter is determined according to parameters such as the current actual network speed and the limited network speed, and is controlled to be neither too large nor too small. The high-priority-state transmitter is allocated sufficient bandwidth and the normal-state transmitters are allocated the remaining bandwidth, while the overall transmission speed of all transmitters in the machine room is reduced as little as possible. Furthermore, on the basis of the above embodiment, the high-priority-state transmitters and high-priority tasks are guaranteed to occupy more bandwidth resources, and high-priority tasks are transmitted preferentially and quickly; meanwhile, all tasks in the machine room can be transmitted at high speed, the transmitters make full use of bandwidth resources, bandwidth resources are not left idle, and the overall transmission throughput of the machine room is maximized.
Fig. 21 is a schematic diagram of an eleventh embodiment of the present application. As shown in fig. 21, this embodiment provides a transmitter processing system based on a machine room system, where the system includes a server and at least one transmitter, and each transmitter includes:
a sending unit 61, configured to send a state identifier of a current transmitter to a server, where the state identifier is used to represent that the transmitter is in an idle state, a high-priority state, or a normal state; the transmission machine in the idle state is not transmitting the task, the transmission machine in the high-priority state is transmitting the task with the priority higher than the preset priority threshold, and the transmission machine in the common state is transmitting the task with the priority lower than the preset priority threshold.
The receiving unit 62 is configured to receive the limited network speed of the current transmitter sent by the server, where the server stores attribute information of each transmitter in a machine room to which the current transmitter belongs, and the attribute information includes a status identifier of the transmitter and a current actual network speed of the transmitter; the network speed limit is determined by the server according to the state identification of each transmitter.
The determining unit 63 is configured to determine a total concurrency threshold of the current transmitter according to the current actual network speed and the limited network speed of the current transmitter, where the total concurrency threshold is used for data transmission of the transmitter.
In an example, the transmission machine processing system based on the machine room system provided in this embodiment may further execute the technical solution of the embodiment shown in fig. 2 or fig. 3; the conveyor processing system based on the machine room system provided by this embodiment may further include the apparatus in the embodiment shown in fig. 10 or fig. 11.
The transmitter processing system based on the machine room system in this embodiment may execute the technical solutions in the embodiments of fig. 19 to 20, and the specific implementation process and technical principle thereof are the same, and are not described herein again.
Fig. 22 is a schematic diagram according to a twelfth embodiment of the present application, as shown in fig. 22, on the basis of the embodiment shown in fig. 21, the attribute information further includes a network card bandwidth of the transmitter; and when the server determines that the current transmitter meets the free competition requirement according to the state identifier of each transmitter, the network speed limit of the current transmitter is determined by the server according to a free competition mode.
And when the server determines that the current transmitter does not meet the free competition requirement according to the state identifier of each transmitter, the network speed limit of the current transmitter is determined by the server according to the network card bandwidth of the current transmitter.
In one example, when the server determines that the current transmitter meets the free contention requirement according to the state identifier of each transmitter, the current transmitter meets a first preset condition, and the network card bandwidth of the current transmitter is the network speed limit of the current transmitter.
The first preset condition is that the current actual total bandwidth of the machine room to which the current transmitter belongs is smaller than a preset bandwidth threshold, or the state identifier of the current transmitter is in a high-priority state, or the number of the transmitters in the high-priority state in the machine room to which the current transmitter belongs is zero; the bandwidth threshold is the product of the total bandwidth of the outlet of the machine room to which the current conveyor belongs and a preset proportional value.
In one example, the attribute information further includes a current actual wire speed of the transmitter; when the server determines that the current transmitter does not meet the free competition requirement according to the state identifier of each transmitter, the current transmitter meets a second preset condition, and the limited network speed of the current transmitter is related to the network card bandwidth of the current transmitter, the total outlet bandwidth of the machine room to which the current transmitter belongs, the current actual network speed of the high-priority transmitter in the machine room to which the current transmitter belongs, and the total number of the common-state transmitters in the machine room to which the current transmitter belongs.
The second preset condition is that the current actual total bandwidth of the machine room to which the current transmitter belongs is greater than or equal to a preset bandwidth threshold, or the state identifier of the current transmitter is not in a high-priority state, or the number of the transmitters in the high-priority state in the machine room to which the current transmitter belongs is not zero; the bandwidth threshold is the product of the total bandwidth of the outlet of the machine room to which the current conveyor belongs and a preset proportional value.
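A minimal sketch of the free-competition check (the first preset condition) is given below; the state string values and argument names are assumptions chosen for illustration only.

```python
def meets_free_competition(actual_total_bw, outlet_total_bw, preset_ratio,
                           state, num_high_priority):
    """First preset condition: the current transmitter may compete freely if the
    machine room is not yet saturated, or it is itself in the high-priority state,
    or no high-priority-state transmitter exists in the machine room."""
    # Bandwidth threshold = outlet total bandwidth x preset proportional value
    bandwidth_threshold = outlet_total_bw * preset_ratio
    return (actual_total_bw < bandwidth_threshold
            or state == "high_priority"
            or num_high_priority == 0)
```

When this check passes, the limited network speed is simply the network card bandwidth of the current transmitter; otherwise the restricted calculation involving the first, second, and third bandwidth parameters described below applies.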
In one example, the network speed limit of the current transmitter is related to the first bandwidth parameter, the second bandwidth parameter and the third bandwidth parameter.
The first bandwidth parameter is determined by the server according to the current actual network speed and the network card bandwidth of each high-priority transmitter in the machine room to which the current transmitter belongs; the first bandwidth parameter is the sum of the minimum bandwidths reserved for all high-priority conveyors in the machine room to which the current conveyor belongs.
The second bandwidth parameter is determined by the server according to the first bandwidth parameter and the total bandwidth of the outlet of the machine room to which the current conveyor belongs; the second bandwidth parameter is the sum of the maximum bandwidths reserved for all the ordinary transmitters in the machine room to which the current transmitter belongs.
The third bandwidth parameter is determined by the server according to the second bandwidth parameter and the total number of the ordinary transmitters in the machine room to which the current transmitter belongs; the third bandwidth parameter is the maximum bandwidth reserved for each common-state transmitter in the machine room to which the current transmitter belongs.
And the network speed limit of the current transmitter is determined by the server according to a fourth preset value, the network card bandwidth of the current transmitter and the third bandwidth parameter, wherein the network speed limit of the current transmitter is greater than or equal to the fourth preset value, the network speed limit of the current transmitter is less than or equal to the network card bandwidth of the current transmitter, and the fourth preset value is a positive number greater than zero.
In one example, the first bandwidth parameter is A = sigma(j in machine_priority, min(bandwith_machine(j), real_speed(j) × a)); wherein real_speed(j) is the current actual network speed of the jth high-priority-state transmitter, real_speed(j) × a is the network speed ratio of the jth high-priority-state transmitter, a is a fifth preset value which is a positive number greater than 1, bandwith_machine(j) is the network card bandwidth of the jth high-priority-state transmitter, and j is a positive integer greater than or equal to 1.
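As a small illustrative sketch of this parameter, assuming the per-transmitter network card bandwidths and current actual network speeds are available as plain numbers; the function name and argument layout are chosen here for illustration only.

```python
def first_bandwidth_parameter(high_priority_transmitters, a=1.5):
    """A = sum over high-priority-state transmitters of
    min(bandwith_machine(j), real_speed(j) * a).

    high_priority_transmitters -- iterable of (nic_bandwidth, real_speed) pairs
    a                          -- fifth preset value (greater than 1), 1.5 in the example
    """
    return sum(min(nic_bw, speed * a)
               for nic_bw, speed in high_priority_transmitters)
```

The second bandwidth parameter then follows by subtracting this sum from the machine room outlet bandwidth, as stated below.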
In one example, the second bandwidth parameter is B = bandwith_idc - A; wherein bandwith_idc is the total outlet bandwidth of the machine room to which the current transmitter belongs, and A is the first bandwidth parameter.
In one example, the third bandwidth parameter is C = B/(|machine_ordering| + b).
Wherein B is the second bandwidth parameter, |machine_ordering| is the total number of normal-state transmitters in the machine room to which the current transmitter belongs, |machine_ordering| + b is the sixth preset value, b is the seventh preset value, and the seventh preset value is a positive number greater than 0.
In one example, the limited network speed of the current transmitter is max(d, min(bandwith_machine, C)); wherein bandwith_machine is the network card bandwidth of the current transmitter, C is the third bandwidth parameter, and d is the fourth preset value.
In one example, the determining unit 63 includes:
the first determining subunit 631 is configured to determine a first concurrency threshold of the current transmitter according to the current actual network speed of the current transmitter and the network speed limit, where the first concurrency threshold represents an available concurrency of the current transmitter at the network speed limit.
A second determining subunit 632, configured to determine a total concurrency threshold of the current transmitter according to the first concurrency threshold of the current transmitter and a preset total initial concurrency threshold, where the total concurrency threshold of the current transmitter is less than or equal to the total initial concurrency threshold.
In one example, the first determining subunit 631 includes:
the first determining module 6311 is configured to determine a network speed limit ratio of the current transmitter according to the limited network speed of the current transmitter and the current actual network speed of the current transmitter, where the network speed limit ratio is a ratio between the limited network speed and the current actual network speed.
A second determining module 6312, configured to determine the first concurrency threshold of the current transmitter according to the network speed limit ratio of the current transmitter and a sum of actual concurrency of the current transmitter, where the sum of actual concurrency is a sum of actual concurrency of each task in a transmission state in the current transmitter.
In one example, the second determining subunit 632 includes:
a third determining module 6321, configured to determine a minimum value between the first concurrency threshold and the initial total concurrency threshold as the second concurrency threshold.
A fourth determining module 6322, configured to determine a maximum value between an eighth preset value and the second concurrency threshold, where the maximum value is a total threshold of concurrency of the current transmitter, and the eighth preset value is a positive number greater than or equal to zero.
In an example, the transmission machine processing system based on the machine room system provided in this embodiment may further execute the technical solution of the embodiment shown in fig. 2 or fig. 3; the conveyor processing system based on the machine room system provided by this embodiment may further include the apparatus in the embodiment shown in fig. 10 or fig. 11.
The transmitter processing system based on the machine room system in this embodiment may execute the technical solutions in the embodiments of fig. 19 to 20, and the specific implementation process and technical principle thereof are the same, and are not described herein again.
Fig. 23 is a schematic diagram of a thirteenth embodiment of the present application, and as shown in fig. 23, the data transmission method based on a transmitter provided in this embodiment includes:
701. determining expected concurrency of tasks to be transmitted with different priorities according to a total concurrency threshold of a current transmitter and actual concurrency of each task in the current transmitter, wherein the tasks have priorities, the total concurrency threshold is an upper limit value of the total actual concurrency, and the total actual concurrency is the sum of the actual concurrency of each task; the expected concurrency is characterized by the number of resources allocated to the tasks to be transmitted, and the expected concurrency of the tasks to be transmitted with different priorities is different.
Illustratively, the execution subject of this embodiment may be a transmitter, or a data transmission device or apparatus based on the transmitter, or other devices or apparatuses that can execute the method of this embodiment. The present embodiment is described with the execution main body as a transmitter.
For this step, reference may be made to steps 101-102 shown in fig. 3, which are not described again here.
702. And starting resources corresponding to the number of the resources with the expected concurrency representation of each task to be transmitted, and respectively transmitting each task to be transmitted.
For this step, reference may be made to step 103 shown in fig. 3, which is not described again here.
The technical effect of this embodiment can be seen in the technical effect of fig. 3, and is not described again.
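As a rough sketch of steps 701-702, assuming the total concurrency threshold and the actual concurrency sum are already known: the helper names, the priority threshold value, and the way the remaining concurrency is computed (total threshold minus the actual concurrency sum) are simplifying assumptions for illustration, and the detailed first, second, fourth, and fifth concurrency thresholds described in the claims are collapsed here into a single remaining-resources check.

```python
PRIORITY_THRESHOLD = 5   # preset priority threshold (assumed value)

def expected_concurrency(ideal, remaining, max_per_task, floor_value=1):
    """min(per-task ceiling, max(floor value, min(ideal concurrency, remaining)))."""
    return min(max_per_task, max(floor_value, min(ideal, remaining)))

def pick_next_task(total_threshold, actual_sum, tasks, max_per_task=8):
    """Choose the next task to be transmitted by priority and give it an
    expected concurrency out of the remaining concurrency resources."""
    remaining = total_threshold - actual_sum
    if remaining <= 0 or not tasks:
        return None                          # re-check after a preset time
    high = [t for t in tasks if t["priority"] >= PRIORITY_THRESHOLD]
    candidates = high if high else tasks
    task = max(candidates, key=lambda t: t["priority"])
    return task, expected_concurrency(task["ideal"], remaining, max_per_task)
```

For example, pick_next_task(15, 10, [{"priority": 7, "ideal": 6}, {"priority": 2, "ideal": 9}]) would select the priority-7 task and give it an expected concurrency of min(8, max(1, min(6, 5))) = 5.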
Fig. 24 is a schematic diagram of a fourteenth embodiment of the present application, and as shown in fig. 24, an electronic device 70 in the present embodiment may include: a processor 71 and a memory 72.
A memory 72 for storing programs; the Memory 72 may include a volatile Memory (RAM), such as a Static Random Access Memory (SRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), and the like; the memory may also comprise a non-volatile memory, such as a flash memory. The memory 72 is used to store computer programs (e.g., applications, functional modules, etc. that implement the above-described methods), computer instructions, etc., which may be stored in one or more of the memories 72 in a partitioned manner. And the above-mentioned computer program, computer instructions, data, etc. can be called by the processor 71.
The computer programs, computer instructions, etc. described above may be stored in one or more memories 72 in partitions. And the above-mentioned computer program, computer instruction, etc. can be called by the processor 71.
A processor 71, configured to execute the computer program stored in the memory 72 to implement the steps in the methods related to the embodiments of fig. 2, or fig. 3, or fig. 23, or fig. 14, or fig. 15.
Reference may be made in particular to the description relating to the preceding method embodiment.
The processor 71 and the memory 72 may be separate structures or may be an integrated structure integrated together. When the processor 71 and the memory 72 are separate structures, the memory 72 and the processor 71 may be coupled by a bus 73.
In an example, the electronic device of this embodiment may be used as a transmitter, and the electronic device of this embodiment may execute the technical solutions in the embodiments of fig. 2, fig. 3, or fig. 23, and specific implementation processes and technical principles thereof are the same, and are not described herein again.
In another example, the electronic device of this embodiment may be used as a transmitter, and the electronic device of this embodiment may execute the technical solution of the embodiment in fig. 14 or fig. 15, and the specific implementation process and the technical principle are the same, and are not described herein again.
Fig. 25 is a schematic diagram according to a fifteenth embodiment of the present application, and as shown in fig. 25, the computer room system in the present embodiment includes a server 80 and at least one electronic device 90; the electronic device 90 may include: a processor 91 and a memory 92.
A memory 92 for storing programs; memory 92, which may include volatile memory, such as random access memory, e.g., SRAM, DDR SDRAM, etc.; the memory may also include non-volatile memory, such as flash memory. The memory 92 is used to store computer programs (e.g., applications, functional modules, etc. that implement the above-described methods), computer instructions, etc., which may be stored in one or more of the memories 92 in a partitioned manner. And the above-mentioned computer program, computer instructions, data, etc. can be called by the processor 91.
The computer programs, computer instructions, etc. described above may be stored in one or more memories 92 in partitions. And the above-mentioned computer program, computer instructions, etc. can be called by the processor 91.
A processor 91 configured to execute the computer program stored in the memory 92 to implement the steps of the method according to the embodiment of fig. 19 or fig. 20.
Reference may be made in particular to the description relating to the preceding method embodiment.
The processor 91 and the memory 92 may be separate structures or may be an integrated structure integrated together. When the processor 91 and the memory 92 are separate structures, the memory 92 and the processor 91 may be coupled by a bus 93.
In an example, the electronic device of this embodiment may be used as a transmitter, and the electronic device of this embodiment may execute the technical solution of the embodiment in fig. 19 or fig. 20, and a specific implementation process and a technical principle thereof are the same, and are not described herein again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 26 is a schematic diagram of a sixteenth embodiment of the present application, and as shown in fig. 26, fig. 26 is a block diagram of an electronic device for implementing any one of the methods of the embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 26, the electronic apparatus includes: one or more processors 801, memory 802, and interfaces for connecting the various components, including a high speed interface and a low speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 26 illustrates an example of one processor 801.
The memory 802 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform any of the methods provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform any of the methods provided herein above.
The memory 802 serves as a non-transitory computer-readable storage medium, and may be used to store a non-transitory software program, a non-transitory computer-executable program, and a module, such as program instructions/modules corresponding to any one of the above methods based on a transmitter in the embodiments of the present application (for example, the first acquisition unit 31, the first determination unit 32, and the activation unit 33 shown in fig. 10; or, the first acquisition unit 51, the first determination unit 52, and the second determination unit 53 shown in fig. 17; or, the transmission unit 61, the reception unit 62, and the determination unit 63 of the transmitter of the system shown in fig. 21). The processor 801 executes various functional applications and data processing of the server by running non-transitory software programs, instructions and modules stored in the memory 802, that is, the technical solution in the above method embodiment is realized.
The memory 802 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of an electronic device for implementing any of the above-described methods, and the like. Further, the memory 802 may include high speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 802 optionally includes memory located remotely from the processor 801, which may be connected via a network to electronic devices for implementing any of the methods described above. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of any of the above methods may further comprise: an input device 803 and an output device 804. The processor 801, the memory 802, the input device 803, and the output device 804 may be connected by a bus or other means, and are exemplified by a bus in fig. 26.
The input device 803 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of an electronic apparatus for implementing any of the methods described above, such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, or like input device. The output devices 804 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The embodiments provided by the application can be used for mass data transmission in automatic driving.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (18)
1. A data transmission method based on a transmitter, the method being applied to the transmitter, the method comprising:
acquiring a total threshold of the concurrency degree of a current transmitter and the actual concurrency degree of each task in the current transmitter, wherein the tasks have priorities, the total threshold of the concurrency degree is an upper limit value of the total actual concurrency degree, and the total actual concurrency degree is the sum of the actual concurrency degree of each task;
determining expected concurrency of the tasks to be transmitted with different priorities according to the total concurrency threshold and the actual concurrency of each task, wherein the expected concurrency is characterized by the number of resources allocated to the tasks to be transmitted, and the expected concurrency of the tasks to be transmitted with different priorities is different;
and starting resources corresponding to the number of the resources with the expected concurrency representation of each task to be transmitted, and respectively transmitting each task to be transmitted.
2. The method of claim 1, wherein determining the expected concurrency of the tasks to be transmitted with different priorities according to the total threshold of the concurrency and the actual concurrency of each task comprises:
determining that resources in a current transmitter can support transmission of tasks to be transmitted at a first priority level according to the total concurrency threshold and the actual concurrency of each task, and determining the expected concurrency of the tasks to be transmitted at the first priority level according to the actual concurrency of each task when the current transmitter is determined to have the tasks to be transmitted at the first priority level, wherein the first priority level is greater than or equal to a preset priority threshold;
and when the current transmitter is determined not to have the tasks to be transmitted with the first priority level, determining the expected concurrency of the tasks to be transmitted with the second priority level according to the actual concurrency of each task, wherein the second priority level is smaller than a preset priority threshold value.
3. The method of claim 2, wherein when determining that resources in a current transmitter can support transmission of the tasks to be transmitted at the first priority level according to the total concurrency threshold and the actual concurrency of each task, and determining that the current transmitter has the tasks to be transmitted at the first priority level, determining the expected concurrency of the tasks to be transmitted at the first priority level according to the actual concurrency of each task comprises:
determining the actual concurrency sum of the current conveyor according to the actual concurrency of each task, wherein the actual concurrency sum is the actual concurrency sum of each task;
when the actual concurrency sum is determined to be smaller than a preset first concurrency threshold, determining that resources in a current transmitter can support transmission of tasks to be transmitted at a first priority level, wherein the first concurrency threshold is a difference value between a second concurrency threshold and a first preset threshold difference, the first concurrency threshold is smaller than the second concurrency threshold, and the second concurrency threshold is equal to the total concurrency threshold;
and when the current transmitter is determined to have the tasks to be transmitted with the first priority level, determining the expected concurrency of the tasks to be transmitted with the first priority level according to the second concurrency threshold and the actual concurrency sum.
4. The method of claim 3, wherein determining the desired concurrency for the tasks to be transmitted at the first priority level based on the second concurrency threshold and the actual sum comprises:
acquiring the ideal concurrency of the tasks to be transmitted at the first priority level, wherein the ideal concurrency represents the minimum number of resources which can be used by the tasks, and the transmission time of the tasks under the ideal concurrency is minimum;
determining the expected concurrency of the tasks to be transmitted at the first priority level according to the second concurrency threshold, the actual concurrency sum, the ideal concurrency of the tasks to be transmitted at the first priority level and a third concurrency threshold; wherein the third concurrency threshold is the maximum expected concurrency of a single task; the expected concurrency of the tasks to be transmitted with the first priority level is less than or equal to the ideal concurrency of the tasks to be transmitted with the first priority level, and the expected concurrency of the tasks to be transmitted with the first priority level is less than or equal to the third concurrency threshold.
5. The method of claim 4, wherein determining the desired concurrency for the tasks to be transmitted at the first priority level based on the second concurrency threshold, the actual sum of concurrencies, the ideal concurrency for the tasks to be transmitted at the first priority level, and a third concurrency threshold comprises:
determining a first concurrency threshold according to the second concurrency threshold and the actual concurrency sum, wherein the first concurrency threshold is the sum of the remaining actual concurrency of the current transmitter;
determining the minimum value between the ideal concurrency of the tasks to be transmitted with the first priority level and the first concurrency threshold value as a second concurrency threshold value;
determining a maximum value between a first preset value and the second concurrency threshold value as a third concurrency threshold value, wherein the first preset value is an integer greater than 0;
and determining the minimum value between the third concurrency threshold and the third concurrency threshold, wherein the minimum value is the expected concurrency of the tasks to be transmitted with the first priority level.
6. The method of claim 3, further comprising:
and when the actual concurrency sum is determined to be greater than or equal to the first concurrency threshold, determining that the resources in the current transmitter can not support the transmission of the tasks to be transmitted with the first priority level, and determining to compare the actual concurrency sum with the first concurrency threshold again after preset time.
7. The method of claim 3, wherein when determining that the resources in the current transmitter can support transmission of the tasks to be transmitted at the first priority level according to the total concurrency threshold and the actual concurrency of each task, and determining that the current transmitter does not have the tasks to be transmitted at the first priority level, determining the expected concurrency of the tasks to be transmitted at the second priority level according to the actual concurrency of each task comprises:
when the actual concurrency sum is determined to be smaller than the first concurrency threshold, determining that the resources in the current transmitter can support the transmission of the tasks to be transmitted with the first priority level;
when it is determined that the current transmitter does not have the task to be transmitted with the first priority level, determining whether the actual concurrency sum is greater than or equal to a preset fourth concurrency threshold, wherein the fourth concurrency threshold is a difference value between a fifth concurrency threshold and a second preset threshold difference, the fourth concurrency threshold is smaller than the fifth concurrency threshold, and the fifth concurrency threshold is equal to the first concurrency threshold;
determining that the actual concurrency sum is smaller than the fourth concurrency threshold, and determining that the resources in the current transmitter can support the transmission of the tasks to be transmitted with the second priority level; and when determining that the current transmitter has the task to be transmitted with the second priority level, determining the expected concurrency of the task to be transmitted with the second priority level according to the fifth concurrency threshold and the actual concurrency sum.
8. The method of claim 7, wherein determining the desired concurrency for the tasks to be transmitted at the second priority level based on the fifth concurrency threshold and the actual concurrency sum comprises:
acquiring the ideal concurrency of the tasks to be transmitted at the second priority level, wherein the ideal concurrency represents the minimum number of resources which can be used by the tasks, and the transmission time of the tasks under the ideal concurrency is minimum;
determining the expected concurrency of the tasks to be transmitted at the second priority level according to the fifth concurrency threshold, the actual concurrency sum, the ideal concurrency of the tasks to be transmitted at the second priority level and a third concurrency threshold; wherein the third concurrency threshold is the maximum expected concurrency of a single task; the expected concurrency of the tasks to be transmitted with the second priority level is less than or equal to the ideal concurrency of the tasks with the second priority level, and the expected concurrency of the tasks to be transmitted with the second priority level is less than or equal to the third concurrency threshold.
9. The method of claim 8, wherein determining the desired concurrency for the tasks to be transmitted at the second priority level based on the fifth concurrency threshold, the actual sum of concurrencies, the ideal concurrency for the tasks to be transmitted at the second priority level, and a third concurrency threshold comprises:
determining a fourth concurrency threshold according to the fifth concurrency threshold, the actual concurrency sum and a second preset value, wherein the fourth concurrency threshold is the sum of the remaining actual concurrency of the current transmitter, and the second preset value is an integer greater than or equal to 1;
determining the minimum value between the ideal concurrency of the tasks to be transmitted with the second priority level and the fourth concurrency threshold value as a fifth concurrency threshold value;
determining a maximum value between a third preset value and the fifth concurrency threshold value as a sixth concurrency threshold value, wherein the third preset value is an integer greater than 0;
and determining the minimum value between the third concurrency threshold and the sixth concurrency threshold, wherein the minimum value is the expected concurrency of the tasks to be transmitted at the second priority level.
10. The method of claim 7, further comprising:
and when the actual concurrency sum is determined to be greater than or equal to the fourth concurrency threshold, determining that the resources in the current transmitter can not support the transmission of the tasks to be transmitted with the second priority level, and determining to compare the actual concurrency sum with the first concurrency threshold again after preset time.
11. The method of claim 7, further comprising:
determining that the actual concurrency sum is smaller than the fourth concurrency threshold, and determining that the resources in the current transmitter can support the transmission of the tasks to be transmitted with the second priority level; and when the current transmitter is determined not to have the task to be transmitted with the second priority level, determining that the current transmitter does not have the task to be transmitted, and determining that the actual concurrency sum and the first concurrency threshold are compared again after preset time.
12. The method according to any one of claims 3-11, further comprising:
acquiring the file size of each data file of each task in the current transmitter;
and determining the ideal concurrency of each task according to the file size of each data file of each task, wherein the ideal concurrency represents the minimum number of resources which can be used by the task, and the transmission time of the task under the ideal concurrency is minimum.
13. The method of claim 12, wherein the ideal concurrency is a sum of a number of resources of a resource that a data file in a task may occupy; determining the ideal concurrency of each task according to the file size of each data file of each task, wherein the method comprises the following steps:
determining each head file and each non-head file in each task according to the file size of each data file of each task, wherein the file size of each head file is greater than or equal to a preset threshold, the file size of each non-head file is smaller than the preset threshold, the preset threshold is the product of a preset first parameter and a preset second parameter, and the second parameter is the file size of the data file with the highest file size in the task;
determining each header file to occupy one resource respectively, and determining the resource number of the resource occupied by each non-header file according to the sum of the file sizes of the non-header files.
14. The method of claim 13, wherein determining the number of resources of the resources occupied by each non-header file based on the sum of the file sizes of each non-header file comprises:
and determining the ratio of the sum of the file sizes of the non-header files and the second parameter, which is the resource number of the resources occupied by the non-header files.
15. A data transmission apparatus based on a transmitter, the apparatus being applied to the transmitter, the apparatus comprising:
a first obtaining unit, configured to obtain a total threshold of concurrency of a current transmitter and an actual concurrency of each task in the current transmitter, where the tasks have priorities, the total threshold of concurrency is an upper limit value of a total actual concurrency, and the total actual concurrency is a sum of actual concurrency of each task;
a first determining unit, configured to determine expected concurrency degrees of the tasks to be transmitted with different priorities according to the total concurrency degree threshold and the actual concurrency degree of each task, where the expected concurrency degrees are represented by the number of resources allocated to the tasks to be transmitted, and the expected concurrency degrees of the tasks to be transmitted with different priorities are different;
and the starting unit is used for starting the resources corresponding to the number of the resources with the expected concurrency representation of each task to be transmitted and respectively transmitting each task to be transmitted.
16. An electronic device applied to a transmitter, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-14.
17. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-14.
18. A data transmission method based on a transmitter, the method being applied to the transmitter, the method comprising:
determining expected concurrency of tasks to be transmitted with different priorities according to a total concurrency threshold of a current transmitter and actual concurrency of each task in the current transmitter, wherein the tasks have priorities, the total concurrency threshold is an upper limit value of the total actual concurrency, and the total actual concurrency is the sum of the actual concurrency of each task; the expected concurrency is characterized by the number of resources allocated to the tasks to be transmitted, and the expected concurrency of the tasks to be transmitted with different priorities is different;
and starting resources corresponding to the number of the resources with the expected concurrency representation of each task to be transmitted, and respectively transmitting each task to be transmitted.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010546956.7A CN111711688B (en) | 2020-06-16 | 2020-06-16 | Data transmission method, device and equipment based on transmitter and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010546956.7A CN111711688B (en) | 2020-06-16 | 2020-06-16 | Data transmission method, device and equipment based on transmitter and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111711688A true CN111711688A (en) | 2020-09-25 |
CN111711688B CN111711688B (en) | 2023-02-28 |
Family
ID=72540301
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010546956.7A Active CN111711688B (en) | 2020-06-16 | 2020-06-16 | Data transmission method, device and equipment based on transmitter and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111711688B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007058508A1 (en) * | 2005-11-18 | 2007-05-24 | Sk Telecom Co., Ltd. | Method for adaptive delay threshold-based priority queueing scheme for packet scheduling in mobile broadband wireless access system |
CN101170834A (en) * | 2007-12-06 | 2008-04-30 | 华为技术有限公司 | Resource distributing method and device |
EP2237630A2 (en) * | 2009-03-31 | 2010-10-06 | NTT DoCoMo, Inc. | Method and apparatus for resource scheduling in uplink transmission |
CN102769914A (en) * | 2012-04-29 | 2012-11-07 | 黄林果 | Fair scheduling method based on mixed businesses in wireless network |
CN102791032A (en) * | 2012-08-14 | 2012-11-21 | 华为终端有限公司 | Network bandwidth distribution method and terminal |
US20130219404A1 (en) * | 2010-10-15 | 2013-08-22 | Liqun Yang | Computer System and Working Method Thereof |
CN103841052A (en) * | 2012-11-27 | 2014-06-04 | 中国科学院声学研究所 | Bandwidth resource distribution system and method |
CN108462999A (en) * | 2018-01-10 | 2018-08-28 | 海信集团有限公司 | A kind of method and apparatus carrying out resource allocation |
Also Published As
Publication number | Publication date |
---|---|
CN111711688B | 2023-02-28
Similar Documents
Publication | Title
---|---
CN107066332B | Distributed system and scheduling method and scheduling device thereof
CN109564528B | System and method for computing resource allocation in distributed computing
CN112783659B | Resource allocation method and device, computer equipment and storage medium
JP2013515991A | Method, information processing system, and computer program for dynamically managing accelerator resources
US11119563B2 | Dynamic power capping of multi-server nodes in a chassis based on real-time resource utilization
US11455187B2 | Computing system for hierarchical task scheduling
US9037703B1 | System and methods for managing system resources on distributed servers
CN113672391B | Parallel computing task scheduling method and system based on Kubernetes
CN107423134B | Dynamic resource scheduling method for large-scale computing cluster
CN113238848A | Task scheduling method and device, computer equipment and storage medium
US8090903B2 | Fair and dynamic disk input/output bandwidth distribution
CN112486642B | Resource scheduling method, device, electronic equipment and computer readable storage medium
US8640131B2 | Demand-based processor cycle allocation subsequent to equal group-based processor cycle distribution
CN110764887A | Task rescheduling method and system, and related equipment and device
CN111240824A | CPU resource scheduling method and electronic equipment
CN114968601A | Scheduling method and scheduling system for AI training jobs with resources reserved according to proportion
CN109189581B | Job scheduling method and device
CN105094945A | Method, equipment and system for virtualization platform thread control
US20130014119A1 | Resource Allocation Prioritization Based on Knowledge of User Intent and Process Independence
CN114764371A | Task scheduling method and management system
CN111711688B | Data transmission method, device and equipment based on transmitter and storage medium
CN115629854A | Distributed task scheduling method, system, electronic device and storage medium
CN111708624B | Concurrency allocation method, device, equipment and storage medium based on multiple transmitters
CN111711582B | Transmission machine processing method, system and storage medium based on machine room system
KR20150089665A | Appratus for workflow job scheduling
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
TA01 | Transfer of patent application right | Effective date of registration: 20211022. Address after: 105 / F, building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085. Applicant after: Apollo Intelligent Technology (Beijing) Co.,Ltd. Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085. Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.
GR01 | Patent grant |